PROMPT TO PRODUCTION
Chapter 2 of 19 · 13 min read

Chapter 2: The Art and Science of Prompting


Quick Start: Three Prompts That Show the Difference (10 minutes)

Open claude.ai (or any AI chatbot) and try these three prompts in sequence. Watch how each one produces a dramatically different result.

Prompt 1 – The vague prompt:

Write code

You’ll likely get something useless – maybe a “Hello World” or a generic snippet. The AI did what you asked; you just didn’t ask for much.

Prompt 2 – The specific prompt:

Write a Python function that validates email addresses using regex. Include type hints and 3 test cases.

Better. You get a working function. But it might not match your coding style, handle all edge cases, or fit your project.

Prompt 3 – The CRISP prompt:

Context: I'm building a user registration system in Python 3.11+.
Role: Act as a senior Python developer who prioritizes readability.
Instructions: Write a function that validates email addresses. It should:
- Return True/False
- Handle edge cases (empty strings, multiple @ signs, missing domain)
- Include a docstring
Style: Use type hints, follow PEP 8, keep it under 30 lines.
Parameters: No external libraries -- regex only. Include 5 test cases.

Now you get exactly what you need: production-quality code matching your constraints.
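A sketch of the kind of output the CRISP prompt tends to produce (the regex here is a deliberately simplified illustration, not a full RFC 5322 validator):

```python
import re

def is_valid_email(email: str) -> bool:
    """Return True if email looks like a valid address, else False.

    Handles edge cases: empty strings, multiple @ signs, missing domain.
    """
    if not email:
        return False
    # One local part, one @, and a domain containing at least one dot.
    pattern = r"[A-Za-z0-9._%+-]+@[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+"
    return re.fullmatch(pattern, email) is not None

# Five test cases, as the prompt requested.
assert is_valid_email("user@example.com") is True
assert is_valid_email("") is False
assert is_valid_email("a@b@c.com") is False
assert is_valid_email("user@nodomain") is False
assert is_valid_email("first.last+tag@sub.example.co") is True
```

Notice how each CRISP element shows up in the result: the docstring, the edge-case handling, the under-30-lines limit, and the regex-only constraint.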

The CRISP framework – Context, Role, Instructions, Style, Parameters – is the structure behind Prompt 3. The rest of this chapter teaches you to use it instinctively.


Core Concepts (15-20 minutes)

Why Prompt Engineering Matters

The quality of what you get from AI depends heavily on how well you ask for it. Prompt engineering is the skill of communicating effectively with AI to get consistent, useful results.

The difference between a novice and an expert AI user isn’t intelligence – it’s specificity. Vague prompts get vague results. Specific prompts get specific results.

The CRISP Framework

C – Context (Setting the Scene)

Context tells the AI what it needs to know to understand your request. Without it, AI makes assumptions that may be wrong.

Bad:  "Write a function to calculate totals"
Good: "I'm building an e-commerce shopping cart in TypeScript. Write a function to calculate order totals including tax and shipping."

What to include in context:

  • What you’re working on (project type, tech stack)
  • What problem you’re solving
  • Relevant background information
  • What you’ve tried already
  • Any constraints or requirements

Include just enough context to be clear, but not so much that you overwhelm. Focus on what’s relevant to the specific request.

R – Role (Defining Expertise)

Role tells the AI what perspective to bring. It adjusts depth, vocabulary, and approach.

"Act as a senior React developer with 10 years of experience"
"Act as a security expert reviewing code for vulnerabilities"
"Act as a patient teacher explaining to a complete beginner"

The difference role makes:

Without role: “Explain recursion” gets a textbook definition.

With role: “Act as a patient teacher explaining to a complete beginner. Explain recursion.” gets an analogy-rich explanation with simple examples that builds understanding gradually.

Popular roles for development tasks: Senior Software Engineer, Code Reviewer, Security Auditor, Performance Optimization Expert, API Designer, Database Architect, DevOps Engineer, Technical Writer.

I – Instructions (Clear, Specific Tasks)

The core of your prompt. Use action verbs, break complex requests into numbered steps, and specify deliverables.

  • Vague: “Fix this code” → Specific: “Refactor this code to use async/await, add error handling, and improve variable names”
  • Vague: “Make it faster” → Specific: “Optimize this function from O(n^2) to O(n log n) using a different sorting algorithm”
  • Vague: “Add tests” → Specific: “Write Jest unit tests covering: happy path, empty input, null values, and error conditions”

How to write great instructions:

  1. Use action verbs: “Create,” “Refactor,” “Analyze,” “Generate” – not passive language like “I need.”

  2. Break complex requests into steps: “Do three things: 1. Read the user.js file. 2. Identify all functions longer than 50 lines. 3. Suggest how to break them into smaller functions.”

  3. Specify deliverables: “Provide: the refactored code, a list of changes made, test cases, and documentation.”

  4. Include acceptance criteria: “The solution should handle edge cases (null, undefined, empty arrays), maintain backward compatibility, and run in O(n) time or better.”

S – Style (Tone and Format)

How the output should look or sound.

For code: Language version (ES6+, Python 3.10+), paradigm (functional, OOP), style guide (Airbnb, PEP 8), comment style (JSDoc, docstrings), naming conventions (camelCase, snake_case).

For writing: Tone (formal, casual, technical), structure (bullet points, paragraphs, headings), length (brief, detailed), audience (beginner, expert, mixed).

Example showing style impact:

Same request, different styles:

“Explain what a REST API is” (formal style) gets: “A REST API is an architectural style for designing networked applications utilizing HTTP methods…”

“Explain what a REST API is” (casual, with analogies) gets: “Think of a REST API like a waiter at a restaurant. You look at a menu, tell the waiter what you want, and they bring it back from the kitchen…”

Same content, completely different presentation.

P – Parameters (Constraints and Requirements)

The boundaries for your request.

  • Scope: “Under 100 lines,” “Exactly 5 examples,” “Summary under 200 words”
  • Technology: “Node.js 18+,” “No external packages,” “Compatible with React 18”
  • Performance: “O(n log n) or better,” “Load time under 100ms”
  • Constraints: “No breaking changes,” “Maintain backward compatibility,” “Must be accessible (WCAG 2.1 AA)”
  • Output format: “Return as JSON,” “Format as markdown table,” “TypeScript interface”

CRISP in Action: Full Example

Without CRISP:

"Make a login form"

With CRISP:

Context: I'm building a React SaaS app with TypeScript. Users log in with email/password, using JWT for auth.

Role: Senior React developer focused on security best practices.

Instructions: Create a login form component that:
1. Has email and password fields
2. Validates input before submission
3. Handles loading states during API calls
4. Shows error messages from the API
5. Redirects to /dashboard on success

Style: React hooks, TypeScript, Tailwind CSS. Functional patterns. Comments on security decisions.

Parameters: TypeScript strict mode, React 18, fetch API, ARIA labels for accessibility, under 200 lines, no form libraries.

The second prompt is far more likely to produce what you need on the first try.


Try This Now: Rewrite a Vague Prompt

Take this vague prompt and rewrite it using CRISP:

"Write tests for my API"

Fill in each letter: what’s the Context (your project)? What Role should the AI take? What specific Instructions? What Style and Parameters?

Try both the vague and CRISP versions with an AI chatbot and compare results.


Chain of Thought Prompting

For complex problems, ask AI to show its reasoning step by step. This produces more accurate results – the same way “showing your work” helps humans make fewer mistakes.

Without chain of thought:

"Should we use REST or GraphQL for our new API?"

Response: "Use GraphQL." (No reasoning, hard to evaluate)

With chain of thought:

"Should we use REST or GraphQL for our new API? Think step by step, considering our requirements: mobile app with varying data needs, team experience with REST, and deadline in 6 weeks."

Response: "Let me think through this systematically:
Step 1: Data fetching patterns -- your mobile app needs different data on different screens, which favors GraphQL's flexible queries...
Step 2: Team experience -- REST is familiar, GraphQL has a learning curve of roughly 1-2 weeks...
Step 3: Timeline pressure -- 6 weeks is tight for learning a new paradigm..."

Trigger phrases: “Think step by step,” “Show your reasoning,” “Break this down systematically.”

Few-Shot Learning

Show AI examples of what you want before asking it to produce similar work. This is especially useful for matching your team’s coding style.

"Here's my coding style:

Example 1:
async function getUserById(id) {
  try {
    const user = await db.users.findOne({ id });
    return { success: true, data: user };
  } catch (error) {
    logger.error('getUserById failed:', error);
    return { success: false, error: error.message };
  }
}

Example 2:
async function createUser(userData) {
  try {
    const user = await db.users.create(userData);
    return { success: true, data: user };
  } catch (error) {
    logger.error('createUser failed:', error);
    return { success: false, error: error.message };
  }
}

Now write a function following the same pattern to delete a user by ID."

The AI will match your error handling pattern, return format, and logging style. Start with 2-3 examples; only add more if results aren’t consistent enough.
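The same technique works in any language. Here is a Python analogue of the pattern above, showing the kind of third function a well-matched few-shot response should produce (the in-memory db and the function names are illustrative stand-ins, not the document's original example):

```python
import logging

logger = logging.getLogger(__name__)

# Hypothetical in-memory "database" standing in for a real data layer.
db = {"users": {1: {"id": 1, "name": "Ada"}}}

def delete_user_by_id(user_id):
    """The kind of function few-shot prompting should yield: the same
    try/except shape, success/error dict, and logging style as the examples."""
    try:
        user = db["users"].pop(user_id)  # raises KeyError if absent
        return {"success": True, "data": user}
    except KeyError as error:
        logger.error("delete_user_by_id failed: %s", error)
        return {"success": False, "error": str(error)}
```

The point is consistency: every function in the family returns the same dict shape and logs failures the same way, so callers can treat them uniformly.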

Iterative Refinement

Great prompting is often iterative. Start with a basic prompt, evaluate the output, identify gaps, and refine.

Attempt 1: “Write a user registration function”

  • Output: Basic function, no validation. Too simple.

Attempt 2: “Write a user registration function with email validation, password strength checking, and duplicate email prevention”

  • Output: Better, but passwords stored in plaintext. Security problem.

Attempt 3: “Write a user registration function with email validation (regex), password strength checking (min 8 chars, uppercase, number, symbol), duplicate email prevention, password hashing with bcrypt, error handling, and input sanitization”

  • Output: Exactly right.

Save your best prompts. Build a personal library for tasks you repeat.
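One low-tech way to keep such a library is a dictionary of named templates with placeholders, filled on demand. A minimal sketch (the template name and field names are illustrative):

```python
# A minimal personal prompt library: named templates with {placeholders}
# filled via str.format.
PROMPT_LIBRARY = {
    "registration": (
        "Write a user registration function with email validation (regex), "
        "password strength checking (min 8 chars, uppercase, number, symbol), "
        "duplicate email prevention, password hashing with bcrypt, "
        "error handling, and input sanitization. Language: {language}."
    ),
}

def render_prompt(name: str, **fields: str) -> str:
    """Look up a template by name and fill in its placeholders."""
    return PROMPT_LIBRARY[name].format(**fields)

prompt = render_prompt("registration", language="Python 3.11")
```

A plain text file or notes app works just as well; the point is that your refined Attempt-3 prompts are saved, not retyped.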


Try This Now: Chain of Thought Debugging

Find a piece of code with a bug (or use this one):

def get_average(numbers):
    total = 0
    for n in numbers:
        total += n
    return total / len(numbers)

Prompt an AI: “Think step by step about what happens when this function is called with an empty list. What’s the bug and how should it be fixed?”

Compare the result to just asking “Fix this code.”
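For reference, the step-by-step prompt should surface the division-by-zero on an empty list. One reasonable fix (whether to return a default or raise is a design choice your prompt can specify):

```python
def get_average(numbers):
    """Return the arithmetic mean of numbers, or 0.0 for an empty list.

    The original version divided by len(numbers) unconditionally,
    raising ZeroDivisionError when called with [].
    """
    if not numbers:
        return 0.0  # or raise ValueError, depending on your requirements
    total = 0
    for n in numbers:
        total += n
    return total / len(numbers)
```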


Deep Dive (Optional, for Mastery)

Advanced Prompt Patterns

Pattern 1: Code Generation Template

Context: [Your project and tech stack]
Role: Act as a senior [language/framework] developer
Instructions: Create [component/function] that:
- [Requirement 1]
- [Requirement 2]
- [Requirement 3]
Style: [Coding conventions, comment style]
Parameters: [Constraints, compatibility, limits]
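Templates like Pattern 1 are easy to automate once you use them often. A minimal sketch of a filler function (all field names and sample values here are illustrative):

```python
def code_generation_prompt(
    context: str,
    role: str,
    requirements: list[str],
    style: str,
    parameters: str,
) -> str:
    """Fill Pattern 1 with concrete values and return the prompt text."""
    reqs = "\n".join(f"- {r}" for r in requirements)
    return (
        f"Context: {context}\n"
        f"Role: Act as a senior {role} developer\n"
        f"Instructions: Create a component that:\n{reqs}\n"
        f"Style: {style}\n"
        f"Parameters: {parameters}"
    )

prompt = code_generation_prompt(
    context="React SaaS app with TypeScript",
    role="React",
    requirements=["email and password fields", "input validation"],
    style="React hooks, Tailwind CSS",
    parameters="React 18, under 200 lines",
)
```

This turns a template you reuse weekly into a one-liner, and guarantees you never forget a CRISP element.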

Pattern 2: Code Review Template

Role: Act as a senior code reviewer focusing on [security/performance/maintainability]

Review this code for:
1. Bugs and logical errors
2. Security vulnerabilities
3. Performance issues
4. Best practices violations

For each issue: line number, severity (high/medium/low), description, suggested fix.

[CODE HERE]

Pattern 3: Debugging Template

Context: [What you're trying to do]
Problem: [The error or unexpected behavior]
What I've tried: [Your debugging attempts]

Think step by step:
1. What could cause this error?
2. Most likely culprits?
3. How to verify each hypothesis?
4. What's the fix?

[ERROR MESSAGE/CODE]

Pattern 4: Documentation Template

Role: Technical writer creating docs for developers.

For this code, create:
1. Overview/purpose
2. Function/class documentation
3. Parameter descriptions with types
4. Usage examples
5. Common pitfalls

Style: Clear, concise, with runnable examples.
Format: [JSDoc/docstring/markdown]

[CODE HERE]

Pattern 5: Test Generation Template

Context: [What the code does]
Role: QA engineer specializing in [unit/integration] testing

Generate [Jest/pytest/etc.] tests covering:
1. Happy path (expected use)
2. Edge cases (empty input, null, extremes)
3. Error conditions
4. Boundary conditions

Each test: descriptive name, arrange/act/assert structure, comment explaining what's tested.

[CODE TO TEST]

Multi-Role Prompting

Ask AI to consider a problem from multiple expert perspectives:

"You are three experts discussing this problem:

Expert 1: Senior Backend Engineer
Expert 2: Frontend Specialist
Expert 3: Security Auditor

Discuss: Should we use sessions or JWT tokens for authentication?

Have each expert provide their perspective, then summarize the consensus."

This surfaces trade-offs a single-perspective prompt would miss.

Constrained Output

When you need a specific format:

"Analyze this code and return findings as JSON:
{
  "issues": [{"line": number, "severity": string, "description": string}],
  "suggestions": [string],
  "overall_score": number
}"

Or: “Compare these frameworks as a markdown table with columns: Framework, Pros, Cons, Best For.”

Or: “Provide only the code, no explanations. Start directly with the code.”
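A side benefit of constrained JSON output is that it is machine-checkable. A sketch that validates a response against the schema above before trusting it (the sample response string is made up for illustration):

```python
import json

def parse_review(raw: str) -> dict:
    """Parse the AI's JSON review and verify the expected keys exist.

    Raises ValueError if the output doesn't match the requested schema.
    """
    data = json.loads(raw)
    for key in ("issues", "suggestions", "overall_score"):
        if key not in data:
            raise ValueError(f"missing key: {key}")
    for issue in data["issues"]:
        if not {"line", "severity", "description"} <= issue.keys():
            raise ValueError("malformed issue entry")
    return data

sample = (
    '{"issues": [{"line": 3, "severity": "high", '
    '"description": "SQL injection"}], '
    '"suggestions": ["use parameterized queries"], "overall_score": 6}'
)
review = parse_review(sample)
```

If the AI drifts from the format, you find out immediately instead of downstream.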

Claude-Specific Tips

  1. Leverage extended context. Claude handles 200K+ tokens – paste entire modules for holistic analysis rather than snippet-by-snippet.
  2. Ask Claude to think aloud. “Before writing the code, explain your approach and any trade-offs you’re considering.”
  3. Use Claude’s honesty. “If you’re unsure about any part of this solution, say so and explain why.”
  4. Structured dialogue. “Let’s design a caching system. What questions do you have first?” Then answer its questions before asking for the solution.
  5. Claude Code file references. When using Claude Code, reference files directly: “Read src/auth/login.ts and suggest security improvements.”

When Things Go Wrong

Prompt produces garbage? Common failure modes:

  1. Too vague. AI guesses what you want and guesses wrong. Fix: add Context and specific Instructions.
  2. Too much context, not enough direction. You pasted 500 lines of code but didn’t say what you want done with it. Fix: always include a clear Instruction.
  3. Contradictory constraints. “Make it simple but handle every edge case in under 10 lines.” Fix: prioritize your constraints.
  4. Wrong role. Asking for beginner-friendly explanation but setting role as “senior architect.” Fix: match Role to your actual need.
  5. Style mismatch. AI writes OOP when you want functional, or formal when you want casual. Fix: specify Style explicitly.

The iteration loop when stuck:

  1. Look at what the AI produced.
  2. Identify the gap between what you got and what you wanted.
  3. Add the missing information to your prompt (don’t start from scratch).
  4. Re-prompt. Most problems are fixed in 1-2 iterations.

When AI refuses or hedges too much:

  • Rephrase to make your intent clear.
  • Break a large request into smaller, concrete steps.
  • Provide an example of the output format you expect.

Try This Now: Build Your First Prompt Template

Pick a task you do regularly (code review, writing tests, generating docs) and create a CRISP template for it. Then test it with a real example from your work. Save the template if it produces good results.


Chapter Checkpoint

5-Bullet Summary:

  1. Prompt engineering is the skill of communicating clearly with AI – specific prompts get specific results, vague prompts get garbage.
  2. The CRISP framework (Context, Role, Instructions, Style, Parameters) gives you a repeatable structure for writing effective prompts.
  3. Chain of Thought (“think step by step”) produces more accurate results for complex problems by forcing explicit reasoning.
  4. Few-Shot Learning (showing 2-3 examples) ensures AI matches your coding style, formatting, and conventions.
  5. Great prompting is iterative – start simple, evaluate output, refine the prompt, repeat.

You can now:

  • Write structured prompts using CRISP that produce useful results on the first try
  • Use Chain of Thought prompting for complex reasoning tasks
  • Provide examples (few-shot) to enforce consistent style
  • Debug prompts that aren’t working by identifying what’s missing
  • Build a personal library of reusable prompt templates

Next up: Chapter 3’s Quick Start has you using AI to write a real PR description from a code diff – applying these prompting skills to actual engineering workflows.


End of Chapter 2