Chapter 2: The Art and Science of Prompting
Quick Start: Three Prompts That Show the Difference (10 minutes)
Open claude.ai (or any AI chatbot) and try these three prompts in sequence. Watch how each one produces a dramatically different result.
Prompt 1 — The vague prompt:
Write code
You'll likely get something useless — maybe a "Hello World" or a generic snippet. The AI did what you asked; you just didn't ask for much.
Prompt 2 — The specific prompt:
Write a Python function that validates email addresses using regex. Include type hints and 3 test cases.
Better. You get a working function. But it might not match your coding style, handle all edge cases, or fit your project.
Prompt 3 — The CRISP prompt:
Context: I'm building a user registration system in Python 3.11+.
Role: Act as a senior Python developer who prioritizes readability.
Instructions: Write a function that validates email addresses. It should:
- Return True/False
- Handle edge cases (empty strings, multiple @ signs, missing domain)
- Include a docstring
Style: Use type hints, follow PEP 8, keep it under 30 lines.
Parameters: No external libraries -- regex only. Include 5 test cases.
Now you get exactly what you need: production-quality code matching your constraints.
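For reference, here is one plausible shape of what Prompt 3 might return. Outputs vary from run to run, so treat this as an illustrative sketch rather than the canonical answer:

```python
import re


def is_valid_email(email: str) -> bool:
    """Return True if email looks like a valid address, else False.

    Handles the edge cases the prompt listed: empty strings,
    multiple @ signs, and a missing domain.
    """
    if not email:
        return False
    # One local part, one @, and a domain with at least one dot.
    pattern = r"[A-Za-z0-9._%+-]+@[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+"
    return re.fullmatch(pattern, email) is not None


# Five test cases, as the Parameters line requests.
assert is_valid_email("user@example.com") is True
assert is_valid_email("") is False
assert is_valid_email("a@b@c.com") is False
assert is_valid_email("user@nodomain") is False
assert is_valid_email("first.last+tag@sub.example.co") is True
```

Note how every constraint in the prompt (regex only, docstring, True/False return, edge cases, test cases) maps to a visible feature of the code.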
The CRISP framework — Context, Role, Instructions, Style, Parameters — is the structure behind Prompt 3. The rest of this chapter teaches you to use it instinctively.
Core Concepts (15-20 minutes)
Why Prompt Engineering Matters
The quality of what you get from AI depends entirely on how well you ask for it. Prompt engineering is the skill of communicating effectively with AI to get consistent, useful results.
The difference between a novice and an expert AI user isn't intelligence — it's specificity. Vague prompts get vague results. Specific prompts get specific results.
The CRISP Framework
C — Context (Setting the Scene)
Context tells the AI what it needs to know to understand your request. Without it, AI makes assumptions that may be wrong.
Bad: "Write a function to calculate totals"
Good: "I'm building an e-commerce shopping cart in TypeScript. Write a function to calculate order totals including tax and shipping."
What to include in context:
- What you're working on (project type, tech stack)
- What problem you're solving
- Relevant background information
- What you've tried already
- Any constraints or requirements
Include just enough context to be clear, but not so much that you overwhelm. Focus on what's relevant to the specific request.
R — Role (Defining Expertise)
Role tells the AI what perspective to bring. It adjusts depth, vocabulary, and approach.
"Act as a senior React developer with 10 years of experience"
"Act as a security expert reviewing code for vulnerabilities"
"Act as a patient teacher explaining to a complete beginner"
The difference role makes:
Without role: "Explain recursion" gets a textbook definition.
With role: "Act as a patient teacher explaining to a complete beginner. Explain recursion." gets an analogy-rich explanation with simple examples that builds understanding gradually.
Popular roles for development tasks: Senior Software Engineer, Code Reviewer, Security Auditor, Performance Optimization Expert, API Designer, Database Architect, DevOps Engineer, Technical Writer.
I — Instructions (Clear, Specific Tasks)
The core of your prompt. Use action verbs, break complex requests into numbered steps, and specify deliverables.
| Vague | Specific |
|---|---|
| "Fix this code" | "Refactor this code to use async/await, add error handling, and improve variable names" |
| "Make it faster" | "Optimize this function from O(n^2) to O(n log n) using a different sorting algorithm" |
| "Add tests" | "Write Jest unit tests covering: happy path, empty input, null values, and error conditions" |
How to write great instructions:
Use action verbs: "Create," "Refactor," "Analyze," "Generate" — not passive language like "I need."
Break complex requests into steps: "Do three things: 1. Read the user.js file. 2. Identify all functions longer than 50 lines. 3. Suggest how to break them into smaller functions."
Specify deliverables: "Provide: the refactored code, a list of changes made, test cases, and documentation."
Include acceptance criteria: "The solution should handle edge cases (null, undefined, empty arrays), maintain backward compatibility, and run in O(n) time or better."
S — Style (Tone and Format)
How the output should look or sound.
For code: Language version (ES6+, Python 3.10+), paradigm (functional, OOP), style guide (Airbnb, PEP 8), comment style (JSDoc, docstrings), naming conventions (camelCase, snake_case).
For writing: Tone (formal, casual, technical), structure (bullet points, paragraphs, headings), length (brief, detailed), audience (beginner, expert, mixed).
Example showing style impact:
Same request, different styles:
"Explain what a REST API is" (formal style) gets: "A REST API is an architectural style for designing networked applications utilizing HTTP methods…"
"Explain what a REST API is" (casual, with analogies) gets: "Think of a REST API like a waiter at a restaurant. You look at a menu, tell the waiter what you want, and they bring it back from the kitchen…"
Same content, completely different presentation.
P — Parameters (Constraints and Requirements)
The boundaries for your request.
- Scope: "Under 100 lines," "Exactly 5 examples," "Summary under 200 words"
- Technology: "Node.js 18+," "No external packages," "Compatible with React 18"
- Performance: "O(n log n) or better," "Load time under 100ms"
- Constraints: "No breaking changes," "Maintain backward compatibility," "Must be accessible (WCAG 2.1 AA)"
- Output format: "Return as JSON," "Format as markdown table," "TypeScript interface"
CRISP in Action: Full Example
Without CRISP:
"Make a login form"
With CRISP:
Context: I'm building a React SaaS app with TypeScript. Users log in with email/password, using JWT for auth.
Role: Senior React developer focused on security best practices.
Instructions: Create a login form component that:
1. Has email and password fields
2. Validates input before submission
3. Handles loading states during API calls
4. Shows error messages from the API
5. Redirects to /dashboard on success
Style: React hooks, TypeScript, Tailwind CSS. Functional patterns. Comments on security decisions.
Parameters: TypeScript strict mode, React 18, fetch API, ARIA labels for accessibility, under 200 lines, no form libraries.
The second prompt is far more likely to produce exactly what you need on the first try.
Try This Now: Rewrite a Vague Prompt
Take this vague prompt and rewrite it using CRISP:
"Write tests for my API"
Fill in each letter: what's the Context (your project)? What Role should the AI take? What specific Instructions? What Style and Parameters?
Try both the vague and CRISP versions with an AI chatbot and compare results.
Chain of Thought Prompting
For complex problems, ask AI to show its reasoning step by step. This produces more accurate results — the same way "showing your work" helps humans make fewer mistakes.
Without chain of thought:
"Should we use REST or GraphQL for our new API?"
Response: "Use GraphQL." (No reasoning, hard to evaluate)
With chain of thought:
"Should we use REST or GraphQL for our new API? Think step by step, considering our requirements: mobile app with varying data needs, team experience with REST, and deadline in 6 weeks."
Response: "Let me think through this systematically:
Step 1: Data fetching patterns -- your mobile app needs different data on different screens, which favors GraphQL's flexible queries...
Step 2: Team experience -- REST is familiar, GraphQL has a learning curve of roughly 1-2 weeks...
Step 3: Timeline pressure -- 6 weeks is tight for learning a new paradigm..."
Trigger phrases: "Think step by step," "Show your reasoning," "Break this down systematically."
Few-Shot Learning
Show AI examples of what you want before asking it to produce similar work. This is especially useful for matching your team's coding style.
"Here's my coding style:
Example 1:
async function getUserById(id) {
  try {
    const user = await db.users.findOne({ id });
    return { success: true, data: user };
  } catch (error) {
    logger.error('getUserById failed:', error);
    return { success: false, error: error.message };
  }
}
Example 2:
async function createUser(userData) {
  try {
    const user = await db.users.create(userData);
    return { success: true, data: user };
  } catch (error) {
    logger.error('createUser failed:', error);
    return { success: false, error: error.message };
  }
}
Now write a function following the same pattern to delete a user by ID."
The AI will match your error handling pattern, return format, and logging style. Start with 2-3 examples; only add more if results aren't consistent enough.
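For comparison, here is the same success/error result pattern sketched in Python, this chapter's other running language. The `users` dict and `logger` below are hypothetical in-memory stand-ins for the `db` and `logger` objects in the examples above, so the sketch runs on its own:

```python
import logging

logger = logging.getLogger(__name__)

# Hypothetical in-memory store standing in for a real database object.
users = {1: {"id": 1, "name": "Ada"}}


def delete_user_by_id(user_id):
    """Delete a user, following the few-shot pattern above:
    same result shape, same error handling, same logging style."""
    try:
        if user_id not in users:
            raise KeyError(f"no user with id {user_id}")
        del users[user_id]
        return {"success": True, "data": user_id}
    except KeyError as error:
        logger.error("delete_user_by_id failed: %s", error)
        return {"success": False, "error": str(error)}
```

The point of few-shot prompting is exactly this kind of consistency: every function in the family returns the same `success`/`data`/`error` shape, so callers never need special cases.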
Iterative Refinement
Great prompting is often iterative. Start with a basic prompt, evaluate the output, identify gaps, and refine.
Attempt 1: "Write a user registration function"
- Output: Basic function, no validation. Too simple.
Attempt 2: "Write a user registration function with email validation, password strength checking, and duplicate email prevention"
- Output: Better, but passwords stored in plaintext. Security problem.
Attempt 3: "Write a user registration function with email validation (regex), password strength checking (min 8 chars, uppercase, number, symbol), duplicate email prevention, password hashing with bcrypt, error handling, and input sanitization"
- Output: Exactly right.
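A minimal sketch of what Attempt 3 might yield. Two assumptions to keep it self-contained: an in-memory `registered` dict stands in for a real user store, and salted SHA-256 via hashlib stands in for bcrypt so no external packages are needed (use bcrypt or argon2 in real code):

```python
import hashlib
import os
import re

# Hypothetical in-memory store standing in for a real user table.
registered = {}


def register_user(email: str, password: str) -> dict:
    """Register a user with validation, strength checking,
    duplicate prevention, and salted password hashing."""
    email = email.strip().lower()  # input sanitization
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        return {"success": False, "error": "invalid email"}
    if email in registered:
        return {"success": False, "error": "email already registered"}
    strong = (len(password) >= 8
              and re.search(r"[A-Z]", password)
              and re.search(r"\d", password)
              and re.search(r"[^A-Za-z0-9]", password))
    if not strong:
        return {"success": False, "error": "weak password"}
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    registered[email] = {"salt": salt.hex(), "hash": digest}
    return {"success": True, "data": email}
```

Every requirement added in Attempts 2 and 3 shows up as a distinct guard clause, which is why the refined prompt converges on a complete function.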
Save your best prompts. Build a personal library for tasks you repeat.
Try This Now: Chain of Thought Debugging
Find a piece of code with a bug (or use this one):
def get_average(numbers):
    total = 0
    for n in numbers:
        total += n
    return total / len(numbers)
Prompt an AI: "Think step by step about what happens when this function is called with an empty list. What's the bug and how should it be fixed?"
Compare the result to just asking âFix this code.â
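One fix the step-by-step prompt should surface: guard the empty-list case before dividing. Returning 0.0 here (rather than raising ValueError) is a design choice the AI should call out, not a universal answer:

```python
def get_average(numbers):
    """Return the mean of numbers, or 0.0 for an empty list.

    The original divided by len(numbers) unconditionally, which
    raises ZeroDivisionError on an empty list.
    """
    if not numbers:
        return 0.0
    return sum(numbers) / len(numbers)


assert get_average([1, 2, 3]) == 2.0
assert get_average([]) == 0.0
```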
Deep Dive (Optional, for Mastery)
Advanced Prompt Patterns
Pattern 1: Code Generation Template
Context: [Your project and tech stack]
Role: Act as a senior [language/framework] developer
Instructions: Create [component/function] that:
- [Requirement 1]
- [Requirement 2]
- [Requirement 3]
Style: [Coding conventions, comment style]
Parameters: [Constraints, compatibility, limits]
Pattern 2: Code Review Template
Role: Act as a senior code reviewer focusing on [security/performance/maintainability]
Review this code for:
1. Bugs and logical errors
2. Security vulnerabilities
3. Performance issues
4. Best practices violations
For each issue: line number, severity (high/medium/low), description, suggested fix.
[CODE HERE]
Pattern 3: Debugging Template
Context: [What you're trying to do]
Problem: [The error or unexpected behavior]
What I've tried: [Your debugging attempts]
Think step by step:
1. What could cause this error?
2. Most likely culprits?
3. How to verify each hypothesis?
4. What's the fix?
[ERROR MESSAGE/CODE]
Pattern 4: Documentation Template
Role: Technical writer creating docs for developers.
For this code, create:
1. Overview/purpose
2. Function/class documentation
3. Parameter descriptions with types
4. Usage examples
5. Common pitfalls
Style: Clear, concise, with runnable examples.
Format: [JSDoc/docstring/markdown]
[CODE HERE]
Pattern 5: Test Generation Template
Context: [What the code does]
Role: QA engineer specializing in [unit/integration] testing
Generate [Jest/pytest/etc.] tests covering:
1. Happy path (expected use)
2. Edge cases (empty input, null, extremes)
3. Error conditions
4. Boundary conditions
Each test: descriptive name, arrange/act/assert structure, comment explaining what's tested.
[CODE TO TEST]
Multi-Role Prompting
Ask AI to consider a problem from multiple expert perspectives:
"You are three experts discussing this problem:
Expert 1: Senior Backend Engineer
Expert 2: Frontend Specialist
Expert 3: Security Auditor
Discuss: Should we use sessions or JWT tokens for authentication?
Have each expert provide their perspective, then summarize the consensus."
This surfaces trade-offs a single-perspective prompt would miss.
Constrained Output
When you need a specific format:
"Analyze this code and return findings as JSON:
{
"issues": [{"line": number, "severity": string, "description": string}],
"suggestions": [string],
"overall_score": number
}"
Or: "Compare these frameworks as a markdown table with columns: Framework, Pros, Cons, Best For."
Or: "Provide only the code, no explanations. Start directly with the code."
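When you ask for JSON, it pays to verify that the reply actually matches the schema before your code consumes it. A minimal sketch, with a hypothetical model reply hard-coded as `response_text`:

```python
import json

# Hypothetical model reply, for illustration only.
response_text = (
    '{"issues": [{"line": 12, "severity": "high", '
    '"description": "unvalidated input"}], '
    '"suggestions": ["add input validation"], '
    '"overall_score": 6}'
)

report = json.loads(response_text)

# Check the top-level keys and the shape of each issue entry.
assert {"issues", "suggestions", "overall_score"} <= report.keys()
for issue in report["issues"]:
    assert {"line", "severity", "description"} <= issue.keys()
```

In production you would wrap `json.loads` in a try/except and re-prompt on failure, since models occasionally wrap JSON in prose or code fences.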
Claude-Specific Tips
- Leverage extended context. Claude handles 200K+ tokens — paste entire modules for holistic analysis rather than snippet-by-snippet.
- Ask Claude to think aloud. "Before writing the code, explain your approach and any trade-offs you're considering."
- Use Claude's honesty. "If you're unsure about any part of this solution, say so and explain why."
- Structured dialogue. "Let's design a caching system. What questions do you have first?" Then answer its questions before asking for the solution.
- Claude Code file references. When using Claude Code, reference files directly: "Read src/auth/login.ts and suggest security improvements."
When Things Go Wrong
Prompt produces garbage? Common failure modes:
- Too vague. AI guesses what you want and guesses wrong. Fix: add Context and specific Instructions.
- Too much context, not enough direction. You pasted 500 lines of code but didn't say what you want done with it. Fix: always include a clear Instruction.
- Contradictory constraints. "Make it simple but handle every edge case in under 10 lines." Fix: prioritize your constraints.
- Wrong role. Asking for a beginner-friendly explanation but setting role as "senior architect." Fix: match Role to your actual need.
- Style mismatch. AI writes OOP when you want functional, or formal when you want casual. Fix: specify Style explicitly.
The iteration loop when stuck:
- Look at what the AI produced.
- Identify the gap between what you got and what you wanted.
- Add the missing information to your prompt (don't start from scratch).
- Re-prompt. Most problems are fixed in 1-2 iterations.
When AI refuses or hedges too much:
- Rephrase to make your intent clear.
- Break a large request into smaller, concrete steps.
- Provide an example of the output format you expect.
Try This Now: Build Your First Prompt Template
Pick a task you do regularly (code review, writing tests, generating docs) and create a CRISP template for it. Then test it with a real example from your work. Save the template if it produces good results.
Chapter Checkpoint
5-Bullet Summary:
- Prompt engineering is the skill of communicating clearly with AI — specific prompts get specific results, vague prompts get garbage.
- The CRISP framework (Context, Role, Instructions, Style, Parameters) gives you a repeatable structure for writing effective prompts.
- Chain of Thought ("think step by step") produces more accurate results for complex problems by forcing explicit reasoning.
- Few-Shot Learning (showing 2-3 examples) ensures AI matches your coding style, formatting, and conventions.
- Great prompting is iterative — start simple, evaluate output, refine the prompt, repeat.
You can now:
- Write structured prompts using CRISP that produce useful results on the first try
- Use Chain of Thought prompting for complex reasoning tasks
- Provide examples (few-shot) to enforce consistent style
- Debug prompts that aren't working by identifying what's missing
- Build a personal library of reusable prompt templates
Next up: Chapter 3's Quick Start has you using AI to write a real PR description from a code diff — applying these prompting skills to actual engineering workflows.
End of Chapter 2
PROMPT TO PRODUCTION