Common Prompting Anti-Patterns

Module 00: Prompt Engineering Basics | Expansion Guide


The Problem

You're making the same prompting mistakes repeatedly. You read articles and try techniques, but your prompts still produce inconsistent results.

Here's the thing: prompting failures are patterned. The same mistakes appear across developers, projects, and models. Once you recognize them, you can stop making them.

This is a catalog of the seven most damaging anti-patterns, why they fail, and how to fix them.

Anti-Pattern 1: The Pleaser

What it looks like:

Please, if you could kindly help me with this code, I would really appreciate it if you would review it for any potential issues you might find. Thank you so much in advance for your assistance!

Why it fails:

  • Token waste: "Please," "kindly," "I would really appreciate" consume tokens without improving output
  • Signal dilution: Actual instructions get buried in politeness
  • No clarity gain: The model doesn't respond better to politeness—it responds to clarity

The fix:

Review this code for bugs and security issues. [code]

Token savings: ~40 tokens → ~10 tokens. Same output quality.

When politeness IS appropriate: User-facing applications where the model's response will be seen by humans, or when establishing a conversational tone matters for the task.

Anti-Pattern 2: The Novelist

What it looks like:

I've been working on this project for several weeks now, and I've encountered a particularly challenging issue that I'm hoping you can help me solve. The project involves building a web application that allows users to upload images and then process them using various filters. The backend is written in Python using the Flask framework, and the frontend uses React. The database is PostgreSQL. The specific issue I'm facing is related to the image upload functionality. When users try to upload large images, the application crashes. I've tried several things including increasing the memory allocation and adjusting timeout settings, but nothing seems to work. Here's the code for the upload function: [code] Can you help me figure out what's going wrong?

Why it fails:

  • Buried lede: The actual question is 150 words deep
  • Irrelevant context: Flask, React, PostgreSQL don't matter for a memory issue
  • Narrative instead of structure: Model has to extract requirements from prose

The fix:

Flask app crashes on large image uploads. Memory/timeout adjustments didn't help. [code] Why does this crash on large files?

Anti-Pattern 3: The Micromanager

What it looks like:

Write a function called validate_email that takes one parameter named email_string. First, check if the string contains exactly one @ symbol using the count() method. Then split the string by @ using split(). Store the part before @ in a variable called local_part and the part after @ in a variable called domain. Check that local_part is not empty using len(). Check that domain contains at least one period using 'in'. Return True if all checks pass, False otherwise.

Why it fails:

  • Constrains better solutions: Model can't use regex, existing libraries, or better algorithms
  • Encodes your assumptions: Your step-by-step might have bugs or inefficiencies
  • Defeats the point: If you're specifying every line, why use AI?

The fix:

Write a Python function to validate email addresses. Requirements:
- Return True/False
- Handle common edge cases (empty string, multiple @, etc.)
- Don't use external libraries
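To see why outcome-level requirements beat step-by-step dictation, here is one plausible response to that prompt. It's a sketch, not the only valid answer: the model is free to choose its own checks (here, a single `@`, a non-empty local part, and a dot inside the domain) as long as the stated requirements hold.

```python
def validate_email(email: str) -> bool:
    """Return True if the string looks like a valid email, else False."""
    if not email or email.count("@") != 1:
        return False  # empty string, or zero/multiple @ symbols
    local_part, domain = email.split("@")
    if not local_part:
        return False  # nothing before the @
    # Domain needs at least one interior dot (e.g. "example.com").
    if "." not in domain or domain.startswith(".") or domain.endswith("."):
        return False
    return True
```

Note what the outcome-focused prompt bought you: the model was free to structure the checks however it liked, and you can still verify it against the requirements you actually care about.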

When micromanagement IS appropriate: Teaching scenarios, exact compatibility with existing code style, or regulatory/compliance requirements.

Anti-Pattern 4: The Optimist

What it looks like:

Fix the bug in the authentication system.

or

Why isn't my code working?

or

Update the database schema to match the new requirements.

Why it fails:

  • Assumes shared context: Model doesn't have your codebase, error messages, or requirements
  • No information to act on: "The bug" could be anything; "new requirements" aren't specified
  • Guaranteed follow-up loop: Model will ask for clarification, wasting a round trip

The fix:

This authentication function returns None instead of a user object when valid credentials are provided. [function code] [error message or unexpected behavior] [expected behavior] What's causing this?

Anti-Pattern 5: The Copy-Paster

What it looks like:

[From some AI prompting guide] You are a helpful AI assistant. You will help the user with their request. Be thorough and accurate. If you don't know something, say so. User: Review my React component for performance issues. [component code]

Why it fails:

  • Generic instructions add nothing: "Be thorough and accurate" - were you planning to be sloppy?
  • Wrong persona for task: "Helpful AI assistant" is generic; "Senior React developer" is specific
  • Template cruft: Instructions designed for general chat, not specific technical tasks

The fix:

You're reviewing React code for performance issues. Focus on:
- Unnecessary re-renders
- Missing memoization opportunities
- Expensive computations in render path
[component code]

Anti-Pattern 6: The Threatener

What it looks like:

IMPORTANT: If you make any mistakes, the entire production system will crash and users will lose data. Be EXTREMELY careful. Double-check everything. Do not make errors. This is critical. Write a function to parse JSON.

Why it fails:

  • Doesn't improve accuracy: Fear doesn't make models better at parsing JSON
  • May cause over-caution: Model might add unnecessary checks, hedging, disclaimers
  • Wastes tokens: Entire paragraph of threat adds no useful instruction

The fix:

Write a function to parse JSON. Requirements:
- Handle malformed input gracefully (return None, don't crash)
- Include type hints
- Add error logging
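A response satisfying those three requirements might look like this sketch. The function name and logger setup are illustrative choices, not part of the prompt:

```python
import json
import logging
from typing import Any, Optional

logger = logging.getLogger(__name__)

def parse_json(text: str) -> Optional[Any]:
    """Parse a JSON string; return None instead of raising on bad input."""
    try:
        return json.loads(text)
    except (json.JSONDecodeError, TypeError) as exc:
        # TypeError covers non-string input; log instead of crashing.
        logger.error("Failed to parse JSON: %s", exc)
        return None
```

Notice that every behavior here traces back to a concrete requirement, not to a warning about production systems crashing.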

When stakes language IS useful: Clarifying priorities ("Correctness over performance"), signaling validation requirements ("This will handle financial data—include input validation"). Not as threats, but as context.

Anti-Pattern 7: The Role-Player Gone Wrong

What it looks like:

You are a medieval blacksmith who has been transported to the modern age and is now learning to code. You speak in Old English and are constantly amazed by technology. Write a Python function to sort a list.

or

You are the world's greatest programmer who has never made a mistake. You know every language perfectly. Write me a simple hello world.

Why it fails:

  • Persona conflicts with task: Medieval blacksmith knowledge doesn't help with Python
  • Unrealistic roles cause strain: "Never made a mistake" creates cognitive dissonance
  • Novelty doesn't improve output: Creative personas for technical tasks add noise

The fix:

Write a Python function to sort a list. [Just ask directly for simple tasks]

or for legitimate persona needs:

You're a senior Python developer writing code for a junior developer to learn from. Include comments explaining non-obvious choices. Write a function to sort a list with O(n log n) worst-case performance.
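For concreteness, the teaching-oriented prompt above might yield something like this sketch: a commented merge sort, since the prompt asked for O(n log n) worst-case performance and explanations aimed at a junior developer. (In practice the built-in `sorted()` also guarantees O(n log n); a good response might note that too.)

```python
def merge_sort(items: list) -> list:
    """Sort a list in O(n log n) worst-case time using merge sort."""
    if len(items) <= 1:
        return items[:]  # base case: copy so we never mutate the input

    # Divide: split the list in half and sort each half recursively.
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Conquer: merge the two sorted halves in a single linear pass.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:  # <= keeps equal elements in order (stable)
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # one of these two is already empty
    merged.extend(right[j:])
    return merged
```

The persona earns its place here: it changes the output (comments, complexity guarantee) in ways the bare "sort a list" prompt would not.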

Persona test: Does this persona have knowledge/perspective that improves the task? If no, drop it.

Anti-Pattern Interactions

These anti-patterns often combine; a single prompt can be a Pleaser and a Novelist at once (long, polite, and rambling), or an Optimist wrapped in Copy-Paster boilerplate.

Recognizing combinations helps diagnose prompts that fail in multiple ways.

Failure Recovery

When a prompt produces bad output, diagnose by anti-pattern:

| Symptom | Likely Anti-Pattern | Fix |
| --- | --- | --- |
| Model asks for clarification | Optimist | Add missing context |
| Response is overly verbose | Novelist | Trim your prompt, specify format |
| Wrong approach to solution | Micromanager | Specify outcomes, not steps |
| Generic/unhelpful response | Copy-Paster | Customize for your task |
| Defensive/hedging response | Threatener | Remove stakes language |
| Weird persona leakage | Role-Player | Match persona to task |
| Long prompt, mediocre results | Pleaser + Novelist | Radical trim |

Quick Reference

Before Sending Checklist

  • Can I remove politeness without losing meaning? (Pleaser)
  • Is this prompt under 100 words for simple tasks? (Novelist)
  • Am I specifying WHAT not HOW? (Micromanager)
  • Does the model have all context needed? (Optimist)
  • Did I customize this template for my task? (Copy-Paster)
  • Is there unnecessary pressure/threat language? (Threatener)
  • Does my persona actually help with this task? (Role-Player)

Token Budget Guide

| Task Complexity | Prompt Length | Warning Sign |
| --- | --- | --- |
| Simple (explain, translate) | 10-30 tokens | Over 50 = trim |
| Medium (review, refactor) | 50-150 tokens | Over 200 = trim |
| Complex (architect, debug) | 150-400 tokens | Over 500 = restructure |

If your prompt is longer than the expected output, reconsider.

The Universal Fix

For any underperforming prompt:

  1. Identify the task verb (review, write, explain, fix)
  2. List essential context (code, error, constraints)
  3. Specify output format (if not obvious)
  4. Delete everything else
  5. Add back only what improves output

Most prompts improve by subtraction.

The Meta-Lesson

These anti-patterns share a root cause: treating the model like a human colleague.

The model is a function. Input determines output. Noise in the input creates noise in the output.

Prompt engineering is noise reduction.
