When Not to Use AI

Patterns · Beginner · 8 min
Sources verified Dec 23

A decision framework for identifying when manual coding is safer, faster, or more appropriate than AI assistance.

AI coding tools are powerful, but they're not always the right choice. Knowing when NOT to use AI is as important as knowing how to use it well. This framework helps you make that judgment call.

The Non-Use Decision Tree

                    ┌─────────────────────────┐
                    │    Is this code...      │
                    └───────────┬─────────────┘
                                │
        ┌───────────────────────┼───────────────────────┐
        ▼                       ▼                       ▼
┌───────────────┐    ┌───────────────────┐    ┌─────────────────┐
│  Security-    │    │  Novel/Complex    │    │  Well-Defined   │
│  Critical?    │    │  Problem?         │    │  Pattern?       │
└───────┬───────┘    └─────────┬─────────┘    └────────┬────────┘
        │                      │                       │
        ▼                      ▼                       ▼
   ┌─────────┐           ┌──────────┐           ┌───────────┐
   │ YES:    │           │ YES:     │           │ YES:      │
   │ Manual  │           │ Manual   │           │ Use AI    │
   │ + Review│           │ First    │           │ + Review  │
   └─────────┘           └──────────┘           └───────────┘

Category 1: Security-Critical Code

Always write manually (or use AI with extreme scrutiny):

Context                        | Why AI Fails                         | What To Do
Authentication/Authorization   | AI often uses insecure defaults      | Manual + security review
Cryptographic operations       | Subtle errors are catastrophic       | Use audited libraries manually
Input validation               | AI misses edge cases                 | Manual + fuzzing
Financial calculations         | Precision errors compound            | Manual + property-based tests
HIPAA/PCI/SOC2 compliance      | AI doesn't understand legal context  | Human ownership required
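The "precision errors compound" row is easy to demonstrate. Below is a minimal TypeScript sketch (the function names are illustrative, not from any particular codebase) showing why money handled as binary floats drifts, and why integer cents are the safer manual pattern:

```typescript
// Illustrative sketch: why "precision errors compound" in financial code.
// IEEE-754 floats cannot represent 0.1 exactly, so repeated addition drifts.

function sumFloats(amounts: number[]): number {
  return amounts.reduce((total, amount) => total + amount, 0);
}

// Safer manual pattern: keep money as integer cents, format only for display.
function sumCents(cents: number[]): number {
  return cents.reduce((total, amount) => total + amount, 0);
}

const floatTotal = sumFloats(Array(10).fill(0.1));
console.log(floatTotal === 1.0);            // false (0.9999999999999999)

const centTotal = sumCents(Array(10).fill(10));   // ten payments of 10 cents
console.log(centTotal === 100);             // true
console.log((centTotal / 100).toFixed(2));  // "1.00"
```

This is also why the table pairs financial code with property-based tests: the failures are not in the happy path you would eyeball, but in accumulated edge cases.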

Category 2: Novel or Complex Problems

AI struggles when:

  • The problem requires deep understanding of your specific domain
  • There's no clear pattern in training data
  • The solution requires multi-step reasoning across files
  • Business logic is subtle or exception-heavy

The Test: Can you explain the solution clearly in a prompt?

  • If YES → AI can help
  • If NO → You need to understand it first

Category 3: Learning & Skill Development

Don't use AI when:

  • You're learning a new language/framework (build muscle memory first)
  • The task builds foundational skills you need
  • You're a junior developer who hasn't internalized patterns
  • Interview prep (you need to do it yourself)

The Stanford Finding: Workers aged 22-25 in AI-exposed roles saw a 13% relative decline in employment. AI can accelerate experts but may hinder skill development.

The Quick Decision Matrix

Scenario                            | AI Appropriate? | Why
Boilerplate CRUD endpoints          | ✅ Yes          | Well-defined pattern
JWT authentication from scratch     | ❌ No           | Security-critical
Regex for email validation          | ⚠️ Careful      | Test thoroughly
Algorithm you've never implemented  | ❌ No           | Learn it first
Migrating code between frameworks   | ✅ Yes          | Pattern matching
Writing unit tests for your code    | ✅ Yes          | Well-defined task
Debugging a production issue        | ⚠️ Careful      | AI lacks runtime context
SQL with user input                 | ❌ No           | Injection risk
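To make the "SQL with user input" row concrete, here is a hedged sketch of the difference between concatenated and parameterized queries. The `Db` interface is a hypothetical stand-in for whatever driver you actually use; most drivers (node-postgres, for example) accept placeholder parameters in this style.

```typescript
// Hypothetical driver interface: a stand-in, not a real library's API.
interface Db {
  query(sql: string, params?: unknown[]): Promise<unknown[]>;
}

// Injection-prone: the user's input becomes part of the SQL text itself.
// Input like  x' OR '1'='1  changes what the query means.
async function findUserUnsafe(db: Db, email: string) {
  return db.query(`SELECT * FROM users WHERE email = '${email}'`);
}

// Safer: the value travels separately from the SQL text, so it can never
// be parsed as SQL, no matter what the user typed.
async function findUser(db: Db, email: string) {
  return db.query("SELECT * FROM users WHERE email = $1", [email]);
}
```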

When AI Is the Wrong Tool (Even If It Could Help)

  1. Time pressure with no review time — Fast AI output + no review = bugs shipped
  2. Unfamiliar codebase — AI can't tell you if code matches existing patterns
  3. Compliance documentation required — You need to prove human judgment
  4. Team skill assessment — Using AI hides gaps that need addressing

The Managed Use Alternative

Complete non-use is often too extreme. Consider managed use instead:

Instead of...                    | Try...
Avoiding AI for security code    | AI generates, then security-tuned AI reviews
Never using AI for auth          | AI handles boilerplate, you handle logic
Manual-only for compliance       | AI drafts, human documents decisions
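The "AI handles boilerplate, you handle logic" split can be as simple as keeping the authorization decision in one small, hand-written function. A hedged sketch with illustrative types, not a real framework's API:

```typescript
// Illustrative types, not a real framework's API.
interface Request { headers: Record<string, string>; }
interface User { id: string; roles: string[]; }

// Boilerplate the AI can reasonably draft: header parsing and shape checks.
function extractUserId(req: Request): string | null {
  const header = req.headers["x-user-id"];
  return header && header.trim() !== "" ? header.trim() : null;
}

// The part you write and review yourself: the actual authorization rule.
// Subtle business logic ("admins, or the record's owner") lives here.
function canViewRecord(user: User, recordOwnerId: string): boolean {
  return user.roles.includes("admin") || user.id === recordOwnerId;
}
```

Review effort then concentrates on canViewRecord, which is exactly where the decision matrix says human judgment matters.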

The goal isn't AI avoidance—it's appropriate trust calibration.

Key Takeaways

  • Security-critical code requires manual writing or extreme AI scrutiny
  • Novel problems need human understanding before AI can help
  • Learning scenarios often benefit from AI-free practice
  • Use the 'Would I Trust a Junior?' test for AI appropriateness
  • A 45% vulnerability rate in AI-generated code means the cost of verification can exceed the benefit of generation
  • Managed use (AI + review) often beats complete avoidance
  • Time pressure without review time = don't use AI

In This Platform

This platform applies non-use judgment: we use build-time validation (no runtime AI), static compilation (no generation on request), and human-authored content (AI assists research but humans write). These choices optimize for reliability and source-backed claims.
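As a rough, hypothetical illustration of the build-time validation idea (this is a sketch, not the platform's actual build.js): checks run once when the site is built, so a sourcing problem fails the build instead of reaching readers, and nothing is generated at request time.

```typescript
// Hypothetical sketch of a build-time check; not the platform's actual build.js.
interface Page {
  path: string;
  claims: string[];   // statements that need a citation
  sources: string[];  // citations attached to the page
}

function validatePage(page: Page): string[] {
  // One simple rule for the sketch: pages that make claims must cite sources.
  return page.claims.length > 0 && page.sources.length === 0
    ? [`${page.path}: has claims but no sources`]
    : [];
}

function runBuildChecks(pages: Page[]): void {
  const errors = pages.flatMap(validatePage);
  if (errors.length > 0) {
    // Failing here means unsourced content never ships.
    throw new Error("Build-time validation failed:\n" + errors.join("\n"));
  }
}
```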

Relevant Files:
  • build.js
  • CLAUDE.md
