The Problem
You're making the same prompting mistakes repeatedly. You read articles and try techniques, but your prompts still produce inconsistent results.
Here's the thing: prompting failures are patterned. The same mistakes appear across developers, projects, and models. Once you recognize them, you can stop making them.
This is a catalog of the seven most damaging anti-patterns, why they fail, and how to fix them.
Anti-Pattern 1: The Pleaser
What it looks like:
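An illustrative example (the wording is invented, but the pattern is typical):

```
Hi! I hope you're doing well. I was wondering if you could possibly help
me out? Could you please kindly review this function for bugs? I would
really appreciate it. Thank you so much in advance!

[function]
```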
Why it fails:
- Token waste: "Please," "kindly," "I would really appreciate" consume tokens without improving output
- Signal dilution: Actual instructions get buried in politeness
- No clarity gain: The model doesn't respond better to politeness—it responds to clarity
The fix:
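A trimmed version (illustrative):

```
Review this function for bugs:

[function]
```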
Token savings: ~40 tokens → ~10 tokens. Same output quality.
When politeness IS appropriate: User-facing applications where the model's response will be seen by humans, or when establishing a conversational tone matters for the task.
Anti-Pattern 2: The Novelist
What it looks like:
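An illustrative example (details invented, but the shape is common):

```
So, some background: I've been building this app for about three months.
It started as a simple Flask API, then we added a React frontend, and
last month we migrated the database to PostgreSQL. My manager is happy
with the features, but last week we noticed the worker process slowly
eating RAM. I read some blog posts about garbage collection, and a
coworker thinks it might be the ORM, but I'm not sure where to start...
[100 more words] ...anyway, do you have any ideas about the memory issue?
```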
Why it fails:
- Buried lede: The actual question is 150 words deep
- Irrelevant context: Flask, React, PostgreSQL don't matter for a memory issue
- Narrative instead of structure: Model has to extract requirements from prose
The fix:
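A structured version (illustrative):

```
Diagnose a memory leak in this Python worker process.

Symptom: memory grows steadily over several hours.
Code: [paste function]
Already ruled out: [anything you've checked]
```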
Anti-Pattern 3: The Micromanager
What it looks like:
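An illustrative example (the task is invented):

```
Write a function to extract email addresses from text. Step 1: Create an
empty list called results. Step 2: Split the text on spaces. Step 3: Loop
over each word with a for loop. Step 4: Check if the word contains "@".
Step 5: If it does, append it to results. Step 6: Return results.
```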
Why it fails:
- Constrains better solutions: Model can't use regex, existing libraries, or better algorithms
- Encodes your assumptions: Your step-by-step might have bugs or inefficiencies
- Defeats the point: If you're specifying every line, why use AI?
The fix:
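An outcome-focused version (illustrative):

```
Write a Python function that extracts all email addresses from a string
and returns them as a list. Use whatever approach is most robust. Include
a docstring and handle punctuation adjacent to addresses.
```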
When micromanagement IS appropriate: Teaching scenarios, exact compatibility with existing code style, or regulatory/compliance requirements.
Anti-Pattern 4: The Optimist
What it looks like:
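Illustrative examples:

```
Fix the bug.
```

or

```
Update the code for the new requirements.
```

or

```
Make it better.
```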
Why it fails:
- Assumes shared context: Model doesn't have your codebase, error messages, or requirements
- No information to act on: "The bug" could be anything; "new requirements" aren't specified
- Guaranteed follow-up loop: Model will ask for clarification, wasting a round trip
The fix:
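A version with the missing context filled in (the function name and behavior are invented for illustration):

```
Fix this bug: parse_date("2024-02-30") raises ValueError, but invalid
dates should return None.

Error: [paste traceback]
Code: [paste function]
```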
Anti-Pattern 5: The Copy-Paster
What it looks like:
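An illustrative example of template cruft:

```
You are a helpful AI assistant. Be thorough and accurate. Think step by
step. Always provide detailed, high-quality responses.

[actual task]
```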
Why it fails:
- Generic instructions add nothing: were you planning to be sloppy before adding "Be thorough and accurate"?
- Wrong persona for task: "Helpful AI assistant" is generic; "Senior React developer" is specific
- Template cruft: Instructions designed for general chat, not specific technical tasks
The fix:
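A task-specific version (illustrative):

```
You are a senior React developer. Review this component for unnecessary
re-renders and state-management issues:

[component]
```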
Anti-Pattern 6: The Threatener
What it looks like:
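An illustrative example (the scenario is invented):

```
CRITICAL: This output goes straight to production. If the JSON is invalid,
the entire pipeline will crash and I will be in serious trouble. You MUST
NOT make a single mistake. Be extremely careful. Do not fail me.

Convert this CSV to JSON: [data]
```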
Why it fails:
- Doesn't improve accuracy: Fear doesn't make models better at parsing JSON
- May cause over-caution: Model might add unnecessary checks, hedging, disclaimers
- Wastes tokens: Entire paragraph of threat adds no useful instruction
The fix:
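The same task with the stakes translated into instructions (illustrative):

```
Convert this CSV to JSON: [data]

Output only valid JSON, no commentary. Use null for missing fields.
```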
When stakes-language IS useful: Clarifying priorities ("Correctness over performance"), signaling validation requirements ("This will handle financial data—include input validation"). Not as threats, but as context.
Anti-Pattern 7: The Role-Player Gone Wrong
What it looks like:
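Illustrative examples (invented, but representative):

```
You are a medieval blacksmith who has traveled through time and learned
Python. Review my code as only a master of the forge could.
```

or

```
You are the world's greatest programmer and you have never made a mistake
in your life. Fix my code.
```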
Why it fails:
- Persona conflicts with task: Medieval blacksmith knowledge doesn't help with Python
- Unrealistic roles cause strain: "Never made a mistake" conflicts with the model's trained tendency to acknowledge errors and uncertainty
- Novelty doesn't improve output: Creative personas for technical tasks add noise
The fix:
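Drop the persona entirely (illustrative):

```
Review this code for bugs and suggest fixes:

[code]
```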
or for legitimate persona needs:
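A persona that actually carries relevant knowledge (illustrative):

```
You are a senior Python developer doing a code review. Flag bugs,
unidiomatic constructs, and missing error handling:

[code]
```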
Persona test: Does this persona have knowledge/perspective that improves the task? If no, drop it.
Anti-Pattern Interactions
These anti-patterns often combine:
- The Pleasing Novelist: "I hope you don't mind me asking, but I've been working on this project and I really need some help..." [300 more words]
- The Micromanaging Threatener: "CRITICAL: Do exactly as I say or this will fail! Step 1: Create a variable called x..."
- The Optimistic Copy-Paster: [Generic template from tutorial] + "Fix the thing that's broken."
Recognizing combinations helps diagnose prompts that fail in multiple ways.
Failure Recovery
When a prompt produces bad output, diagnose by anti-pattern:
| Symptom | Likely Anti-Pattern | Fix |
|---|---|---|
| Model asks for clarification | Optimist | Add missing context |
| Response is overly verbose | Novelist | Trim your prompt, specify format |
| Wrong approach to solution | Micromanager | Specify outcomes, not steps |
| Generic/unhelpful response | Copy-Paster | Customize for your task |
| Defensive/hedging response | Threatener | Remove stakes language |
| Weird persona leakage | Role-Player | Match persona to task |
| Long prompt, mediocre results | Pleaser + Novelist | Radical trim |
Quick Reference
Before Sending Checklist
- Can I remove politeness without losing meaning? (Pleaser)
- Is this prompt under 100 words for simple tasks? (Novelist)
- Am I specifying WHAT not HOW? (Micromanager)
- Does the model have all context needed? (Optimist)
- Did I customize this template for my task? (Copy-Paster)
- Is there unnecessary pressure/threat language? (Threatener)
- Does my persona actually help with this task? (Role-Player)
Token Budget Guide
| Task Complexity | Prompt Length | Warning Sign |
|---|---|---|
| Simple (explain, translate) | 10-30 tokens | Over 50 = trim |
| Medium (review, refactor) | 50-150 tokens | Over 200 = trim |
| Complex (architect, debug) | 150-400 tokens | Over 500 = restructure |
If your prompt is longer than the expected output, reconsider.
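The budget guide above can be automated as a rough pre-send check. A minimal sketch, assuming the common heuristic of roughly four characters per token for English text (the function names are ours, not from any library):

```python
from typing import Optional


def approx_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)


def budget_warning(prompt: str, expected_output: str = "") -> Optional[str]:
    """Flag prompts that blow past the guide's thresholds."""
    n = approx_tokens(prompt)
    if n > 500:
        return "restructure"
    if n > 200:
        return "trim"
    if expected_output and n > approx_tokens(expected_output):
        return "prompt longer than expected output - reconsider"
    return None


print(budget_warning("Fix the bug."))  # short prompt: no warning (None)
print(budget_warning("word " * 300))   # ~375 estimated tokens -> "trim"
```

This is only a sanity check; for real token counts, use your model provider's tokenizer.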
The Universal Fix
For any underperforming prompt:
- Identify the task verb (review, write, explain, fix)
- List essential context (code, error, constraints)
- Specify output format (if not obvious)
- Delete everything else
- Add back only what improves output
Most prompts improve by subtraction.
The Meta-Lesson
These anti-patterns share a root cause: treating the model like a human colleague.
- Humans appreciate politeness → Models process tokens
- Humans need narrative context → Models need structured context
- Humans can be motivated by stakes → Models compute probabilities
- Humans share your background knowledge → Models only know what you provide
The model is a function. Input determines output. Noise in the input creates noise in the output.
Prompt engineering is noise reduction.