The Anatomy of a Good Prompt

Module 00: Prompt Engineering Basics | Expansion Guide


The Problem

Your prompts are inconsistent. Sometimes brilliant results, sometimes garbage. You've read the guides, tried the techniques, but you can't reliably reproduce quality outputs.

The issue isn't randomness. It's structure. Good prompts share an anatomy. Bad prompts are missing pieces or have them in the wrong order.

Once you see the skeleton, you can build reliable prompts for anything.

The Core Insight

Every effective prompt has five components. Not all are required for every task, but knowing which to include—and which to omit—is the skill.

The Five Components: Context, Task, Format, Constraints, Examples

| Component   | Purpose                      | Required?         |
|-------------|------------------------------|-------------------|
| Context     | What the model needs to know | Usually           |
| Task        | What you want done           | Always            |
| Format      | How to structure output      | Often             |
| Constraints | What to avoid or limit       | Sometimes         |
| Examples    | Show, don't just tell        | For complex tasks |

Order matters. Context before Task. Task before Format. Constraints throughout.
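
The ordering above can be sketched as a small helper that assembles a prompt from its parts (a minimal sketch; the function and parameter names are illustrative, not from any library):

```python
def build_prompt(task, context=None, fmt=None, constraints=None, examples=None):
    """Assemble prompt sections in the recommended order:
    Context -> Task -> Format -> Constraints -> Examples."""
    sections = []
    if context:
        sections.append(context)
    sections.append(task)  # Task is the only always-required component
    if fmt:
        sections.append(f"Output format:\n{fmt}")
    if constraints:
        sections.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    if examples:
        sections.append("Examples:\n" + "\n\n".join(examples))
    return "\n\n".join(sections)

prompt = build_prompt(
    task="Refactor this function to reduce cyclomatic complexity below 10.",
    context="You are reviewing Python code for a fintech application.",
    constraints=["Don't use external libraries"],
)
```

A builder like this also makes the minimum-viable-prompt idea concrete: for simple tasks you pass only `task` and get a one-line prompt back.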

The Walkthrough

Component 1: Context

Context answers: "What does the model need to know to do this well?"

Types of context:

  1. Domain context - "You are reviewing Python code for a fintech application"
  2. Situational context - "The user is a beginner who just learned loops"
  3. Reference context - "Given this function: [code]"
  4. Historical context - "Previously, we decided to use PostgreSQL"

Context calibration:

Too little: "Write a function"
→ Model guesses language, purpose, style

Too much: "Write a Python 3.11 function using type hints following 
          PEP 8 style guide for a Django 4.2 application running on 
          AWS Lambda with 128MB memory..."
→ Overwhelms with irrelevant constraints

Right amount: "Write a Python function for a REST API. Use type hints."
→ Enough to guide, not enough to constrain unnecessarily

The context test: If removing a piece of context wouldn't change the output, you don't need it.

Component 2: Task

Task answers: "What action should the model take?"

Effective task statements:

Vague: "Help with this code"
→ Model doesn't know if you want review, explanation, fix, or rewrite

Specific: "Refactor this function to reduce cyclomatic complexity below 10"
→ Clear action, measurable outcome

Overly specific: "Refactor line 14 to use a ternary operator and line 
                 23 to use list comprehension and..."
→ Micromanagement that may produce worse solutions

The task test: Could someone verify the task was completed? If the task is "help with code," how do you know when it's done?

Component 3: Format

Format answers: "How should the output be structured?"

Output as JSON:
{
  "issues": [...],
  "suggestions": [...],
  "severity": "high|medium|low"
}

Output as markdown:
## Summary
[one paragraph]

## Detailed Findings
[bullet points]

Format reduces post-processing. If you need JSON, ask for JSON. Don't ask for text and parse it yourself.

The format test: Will you need to transform the output before using it? If yes, specify a better format.
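
When you request JSON, the response can feed straight into code with no parsing heuristics. A minimal sketch, assuming the schema shown above (`issues`, `suggestions`, `severity`):

```python
import json

def parse_review(raw: str) -> dict:
    """Parse a model response requested as JSON and check the expected keys.
    The schema mirrors the example above; adjust it to your own format."""
    data = json.loads(raw)  # raises ValueError if the model ignored the format
    missing = {"issues", "suggestions", "severity"} - data.keys()
    if missing:
        raise ValueError(f"response missing keys: {missing}")
    return data

raw = '{"issues": ["unvalidated input"], "suggestions": ["add checks"], "severity": "high"}'
review = parse_review(raw)
```

If parsing fails, that is a signal to tighten the format specification in the prompt, not to write a more forgiving parser.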

Component 4: Constraints

Constraints answer: "What should the model NOT do?"

Types of constraints:

  1. Exclusions - "Don't use external libraries"
  2. Limits - "Maximum 20 lines"
  3. Prohibitions - "Never use eval()"
  4. Style - "No passive voice"

Constraint placement: High-priority constraints go early:

# High priority - put first
IMPORTANT: Never log sensitive data.

# Medium priority - in context
...following the existing code style...

# Low priority - at end
Keep response under 500 words if possible.

The constraint test: Does removing this constraint risk output you can't use? If no, it's optional. If yes, it's essential.
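
One way to keep high-priority constraints first is to sort them before assembling the prompt. A sketch, assuming the three-level priority labels used above:

```python
PRIORITY = {"high": 0, "medium": 1, "low": 2}

def order_constraints(constraints):
    """Sort (priority, text) pairs so high-priority constraints come first."""
    return [text for _, text in sorted(constraints, key=lambda c: PRIORITY[c[0]])]

rules = [
    ("low", "Keep response under 500 words if possible."),
    ("high", "IMPORTANT: Never log sensitive data."),
    ("medium", "Follow the existing code style."),
]
ordered = order_constraints(rules)
```

Keeping constraints as labeled data rather than loose prose also makes the constraint test easier to run: delete one entry, re-run, and see whether the output degrades.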

Component 5: Examples

Examples answer: "What does good output look like?"

Use examples when the desired output is hard to describe but easy to show. Input/output pairs define the pattern:

Input: def add(a, b): return a + b
Output: Pure function, no side effects, O(1) complexity

Input: def get_user(id): return db.query(...)
Output: Database dependency, potential SQL injection, O(n) for table scan

Input: [YOUR CODE HERE]
Output:

Example quality matters more than quantity. Two good examples beat five mediocre ones.

The examples test: Would a junior developer understand what to do from the examples alone? If not, add clarifying text.
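
Few-shot pairs like the ones above can be stitched into a prompt with a small formatter (an illustrative sketch; the `Input:`/`Output:` layout matches the examples in this section):

```python
def format_examples(pairs):
    """Render (input, output) pairs in the Input:/Output: style used above,
    ending with an open slot for the real input."""
    blocks = [f"Input: {inp}\nOutput: {out}" for inp, out in pairs]
    blocks.append("Input: [YOUR CODE HERE]\nOutput:")
    return "\n\n".join(blocks)

shots = [
    ("def add(a, b): return a + b",
     "Pure function, no side effects, O(1) complexity"),
    ("def get_user(id): return db.query(...)",
     "Database dependency, potential SQL injection, O(n) for table scan"),
]
prompt_tail = format_examples(shots)
```

Ending with an unfilled `Output:` line nudges the model to complete the pattern rather than comment on it.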

Minimum Viable Prompts

Simple tasks (1-2 components)

Explain list comprehensions.
[Task only - context implicit from simplicity]

Given x = [1,2,3], write code to double each element.
[Reference context + Task]

Standard tasks (3-4 components)

You're reviewing Python code for security issues.
Analyze this function: [code]
List issues as bullet points with severity ratings.
[Context + Task + Format]

Complex tasks (all 5)

You are a senior engineer reviewing code for a payment system.
The codebase uses Django and PostgreSQL.

Analyze this function for security vulnerabilities: [code]

Format your response as:
- Severity (critical/high/medium/low)
- Issue description
- Recommended fix
- Example of fixed code

Focus only on security. Don't mention style or performance.

Example:
Input: cursor.execute(f"SELECT * FROM users WHERE id={id}")
Output:
- Severity: Critical
- Issue: SQL injection via string interpolation
- Fix: Use parameterized queries
- Example: cursor.execute("SELECT * FROM users WHERE id=%s", [id])

Now analyze: [actual code]

Prompt Surgery: Fixing Weak Prompts

Case 1: The Vague Prompt

Before: Help me with this function.

Diagnosis: No task, no context, no format.

After: Review this Python function for bugs and edge cases. List each issue with a one-line fix. [function code]

Case 2: The Kitchen Sink

Before: You are an expert Python developer with 20 years of experience working at FAANG companies on distributed systems and you specialize in performance optimization and code review and you follow all best practices...

Diagnosis: Overloaded context, conflicting constraints, no clear task.

After: Review this code for performance issues. Be direct. Focus on the top 3 improvements. [code]

Case 3: The Micromanager

Before: Write a function. Name it process_data. It should take a parameter called data which is a list. Loop through the list using a for loop, not a while loop...

Diagnosis: Task over-specified, constrains good solutions.

After: Write a function that filters a list to items greater than 10. Requirements: return a new list (don't modify the input) and handle empty input.

Failure Patterns

1. Kitchen Sink Prompts

Problem: You include every constraint you can think of.

Effect: Model attention splits across too many requirements. Core task suffers.

Fix: Prioritize. What are the 3 constraints that matter most? Start there. Add only if output is wrong.

2. Vague Task Specification

Problem: "Help me with..." or "Look at..." without specific action.

Effect: Model guesses your intent. Often wrong.

Fix: Use action verbs. "Review," "Refactor," "Explain," "Debug," "Generate."

3. Missing Constraints Leading to Unwanted Output

Problem: You forgot to specify something obvious to you.

Effect: Model produces valid output that's useless for your context.

Fix: After drafting, ask: "What assumptions might the model make that are wrong?" Add constraints to prevent them.

4. Over-Constraining

Problem: So many constraints that no valid solution exists.

Effect: Model produces something that violates one constraint to satisfy others. Or hallucinates a solution.

Fix: Test your constraints. Are they all achievable simultaneously? If not, prioritize.

Quick Reference

Prompt Template

[CONTEXT - if needed]
You are [role]. You're working on [situation]. 
Given: [reference material]

[TASK - always]
[Action verb] [specific scope] [completion criteria].

[FORMAT - if needed]
Output as:
[structure specification]

[CONSTRAINTS - if needed]
- Must: [requirements]
- Must not: [prohibitions]
- Prefer: [soft preferences]

[EXAMPLES - if helpful]
Input: [example input]
Output: [example output]

Component Checklist

  • Task - Is there a clear action verb?
  • Context - Does the model have what it needs?
  • Format - Will the output be directly usable?
  • Constraints - Are the essential limits stated?
  • Examples - Would they help clarify ambiguity?

| Task Type           | Required Components          | Optional                      |
|---------------------|------------------------------|-------------------------------|
| Explanation         | Task                         | Context, Format               |
| Code generation     | Task, Context                | Format, Constraints, Examples |
| Code review         | Task, Context (code)         | Format, Constraints           |
| Debugging           | Task, Context (code + error) | Format                        |
| Refactoring         | Task, Context, Constraints   | Format, Examples              |
| Complex/novel tasks | All five                     | -                             |