Chapter 3: AI in the Engineering Workflow
Quick Start: Write a PR Description from a Diff (10 minutes)
Let's use AI for a real engineering task right now.
Step 1: Get a diff. If you have a project with recent changes, run:
git diff HEAD~1
If you don't have a project handy, use this sample diff:
diff --git a/src/api/users.ts b/src/api/users.ts
index abc1234..def5678 100644
--- a/src/api/users.ts
+++ b/src/api/users.ts
@@ -1,5 +1,8 @@
+import { rateLimit } from '../middleware/rateLimit';
+
router.get('/api/users/:id', authenticate, async (req, res) => {
- const user = await db.query('SELECT * FROM users WHERE id = ?', [req.params.id]);
+ const user = await db.query(
+ 'SELECT id, name, email, role FROM users WHERE id = ?',
+ [req.params.id]
+ );
if (!user) return res.status(404).json({ error: 'User not found' });
res.json(user);
});
+
+router.put('/api/users/:id', authenticate, rateLimit, async (req, res) => {
+ const { name, email } = req.body;
+ if (!name || !email) return res.status(400).json({ error: 'Name and email required' });
+ const updated = await db.query(
+ 'UPDATE users SET name = ?, email = ? WHERE id = ?',
+ [name, email, req.params.id]
+ );
+ res.json(updated);
+});
Step 2: Prompt the AI. Paste this into Claude (or any AI chatbot):
I'm creating a PR. Here's my git diff:
[paste your diff here]
Generate a PR description with:
1. Summary (one paragraph: what changed and why)
2. Changes (bulleted list)
3. Testing notes (what to verify)
4. Reviewer checklist (3-5 items)
Step 3: Review the output. The AI should produce a structured, reviewer-friendly PR description in seconds. Edit it to add context only you know (the ticket number, why you made specific trade-offs), and in under 2 minutes you have a PR description that would have taken 10-15 to write by hand.
This is the pattern for the entire chapter: take a real engineering task, give AI the right input, get a solid first draft, then add your domain knowledge.
Core Concepts (15-20 minutes)
Where AI Fits in Engineering Work
Engineers spend substantial time on tasks that are structured, repetitive, and text-heavy: exactly where AI excels. The goal isn't to replace your judgment but to eliminate the blank-page problem and handle the mechanical parts.
High-value AI tasks for engineers:
- Technical writing (RFCs, PR descriptions, post-mortems, code review comments)
- Debugging (log analysis, error interpretation, hypothesis generation)
- Documentation (API docs, READMEs, architecture decision records)
- Planning (task breakdowns, sprint planning, estimation)
Still requires your judgment:
- Architectural decisions (AI informs, you decide)
- Business logic validation (AI doesn't know your domain)
- Security-critical reviews (AI catches patterns, you catch context)
- Creative problem-solving for novel challenges
AI for Technical Communication
Pull Request Descriptions
Good PR descriptions explain the "why," provide context for reviewers, and save everyone time. Most are rushed and unhelpful because writing them feels like overhead.
Prompt pattern:
I'm creating a PR. Here's my git diff summary:
- Files changed: [list]
- Commit messages: [list]
Generate a PR description with:
1. Summary (what and why)
2. Key changes (bulleted)
3. Testing notes
4. Reviewer checklist
Code Review Comments
AI helps you write review feedback that's specific, actionable, and constructive, not vague or harsh.
Prompt pattern:
I'm reviewing a colleague's code. Here's a section with issues:
[paste code]
Issues I see: [your observations]
Write a constructive review comment that:
- Is specific and actionable
- Explains why changes matter
- Provides an example of improved code
- Maintains positive tone
The AI will produce something like: "Nice work on the core logic! A few suggestions to improve maintainability: 1. Add TypeScript types to catch errors at compile time. 2. Use === instead of == to prevent unexpected type coercion. 3. Consider a functional approach with filter/map for readability." followed by a concrete code example. This is the kind of review comment that helps your colleague learn, not just comply.
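The concrete code example in such a comment might look like the sketch below. The User and ProcessedUser types are hypothetical, invented here for illustration:

```typescript
// Hypothetical types for illustration; adapt to your actual data shapes.
interface User {
  name: string;
  email: string;
  active: boolean;
}

interface ProcessedUser {
  name: string;
  email: string;
  processed: boolean;
}

// Strict boolean check plus filter/map replace the manual loop and `== true`.
function processUsers(users: User[]): ProcessedUser[] {
  return users
    .filter((user) => user.active)
    .map(({ name, email }) => ({ name, email, processed: true }));
}
```

Pairing the suggestions with a runnable refactor like this is what turns a vague "consider cleaning this up" into feedback the author can act on immediately.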
Incident Post-Mortems
After an incident you're tired but need to document what happened. AI drafts the structure; you fill in the details.
Prompt pattern:
Help me write a blameless post-mortem:
Timeline: [events with timestamps]
Impact: [duration, failed requests, complaints]
Create sections:
1. Incident Summary
2. Timeline table
3. Root Cause Analysis (5 Whys)
4. Action Items (with owners and due dates)
5. Lessons Learned
The AI generates a professional, structured document with proper timeline tables, a 5 Whys chain that traces from symptom to root cause, and categorized lessons learned. You then fill in the team-specific details: who did what, which action items have real owners, and what the team actually learned versus what sounds good on paper.
Technical RFCs
RFCs require structured thinking and comprehensive coverage. They often take 2-4 hours to write properly. AI generates the skeleton in minutes; you add the domain expertise in 30-45 minutes.
Prompt pattern:
Context: [problem, current situation, constraints]
Role: Senior architect
Create an RFC with:
1. Summary (problem + proposed solution)
2. Motivation
3. Detailed Design
4. Alternatives Considered (and why rejected)
5. Migration Strategy
6. Risks and Mitigations
7. Open Questions
For example, give AI: "I need an RFC for adding a caching layer to our API. Current situation: REST API handling 10K req/min, 250ms average response, PostgreSQL is the bottleneck, running Node.js." The AI produces a complete RFC with architecture diagrams in ASCII art, cache-aside pattern details, TTL recommendations, and an alternatives section comparing Redis vs. Memcached vs. application-level caching. You then adjust the design to match your actual infrastructure and team capabilities.
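To make the cache-aside pattern concrete, here is a minimal sketch of the read path such an RFC might describe. The Cache and Db interfaces are hypothetical placeholders, not a specific Redis or PostgreSQL client:

```typescript
// Hypothetical interfaces standing in for a real cache client and database layer.
interface Cache {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

interface Db {
  fetchUser(id: string): Promise<{ id: string; name: string }>;
}

// Cache-aside: check the cache first, fall back to the database on a miss,
// then write the result back with a TTL so later reads skip the database.
async function getUser(id: string, cache: Cache, db: Db) {
  const key = `user:${id}`;
  const hit = await cache.get(key);
  if (hit !== null) return JSON.parse(hit); // cache hit

  const user = await db.fetchUser(id);      // cache miss: read from the database
  await cache.set(key, JSON.stringify(user), 300); // 5-minute TTL (tune per workload)
  return user;
}
```

The RFC's job is then to pin down the details this sketch leaves open: key naming, TTLs per entity, and how writes invalidate stale entries.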
Try This Now: Write a Code Review Comment
Take this code and write a constructive review comment using AI:
function processUsers(users) {
let result = [];
for (let i = 0; i < users.length; i++) {
if (users[i].active == true) {
result.push({
name: users[i].name,
email: users[i].email,
processed: true
});
}
}
return result;
}
Prompt the AI: "I'm reviewing this code. Issues: no TypeScript types, uses == instead of ===, could use filter/map, no error handling. Write a constructive review comment with a code example of the improved version."
AI for Debugging and Analysis
Log File Analysis
Production logs contain thousands of lines. AI finds the patterns you'd spend an hour hunting for.
Analyze these application logs and identify issues:
[paste logs]
Identify:
1. What's failing and when it started
2. Root cause hypothesis
3. Pattern in the errors
4. Recommended investigation steps
5. Immediate mitigation suggestions
Error Message Interpretation
Cryptic stack traces become actionable fixes:
Explain this error and suggest fixes:
[paste error and stack trace]
Context: [when it happens, what triggered it]
AI excels here because it has seen thousands of variations of common errors. It can tell you that TypeError: Cannot read properties of undefined (reading 'map') is almost certainly a component rendering before async data arrives, and offer multiple solution patterns:
- Quick fix: Optional chaining (users?.map(...))
- Better fix: Initialize state with an empty array (useState<User[]>([]))
- Best fix: Proper loading state management with explicit loading/error/data states
The AI explains why the intermittent nature points to a race condition (component renders before API response) and suggests the solution that prevents the issue at its source rather than just catching it.
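Stripped of the React plumbing, the three fix levels can be sketched as plain TypeScript. The User type and function names here are invented for illustration:

```typescript
// Hypothetical type for illustration.
interface User { name: string }

// Quick fix: optional chaining tolerates data that hasn't arrived yet.
function namesQuick(users?: User[]): string[] {
  return users?.map((u) => u.name) ?? [];
}

// Better fix: default to an empty array so .map is always safe to call.
function namesDefault(users: User[] = []): string[] {
  return users.map((u) => u.name);
}

// Best fix: model loading/error/data explicitly so rendering logic
// cannot touch `data` before it exists.
type FetchState<T> =
  | { status: 'loading' }
  | { status: 'error'; message: string }
  | { status: 'ready'; data: T };

function namesFromState(state: FetchState<User[]>): string[] {
  return state.status === 'ready' ? state.data.map((u) => u.name) : [];
}
```

The discriminated union in the last version is why it prevents the bug at the source: the compiler refuses any access to `data` outside the `'ready'` branch.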
Performance Analysis
Paste a slow query with table sizes and index information. AI will identify why itâs slow, recommend indexes, suggest query restructuring, and estimate the improvement:
Analyze this slow database query:
[query]
Table info:
- [table]: [row count]
- Indexes: [list]
Explain why it's slow and how to fix it.
AI is particularly effective at performance analysis because it can systematically check for common bottlenecks: full table scans, missing indexes on join columns, large result sets being sorted without index support, and N+1 query patterns. It can also suggest query restructuring (like using CTEs to filter early) and estimate the expected improvement from each optimization.
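The N+1 pattern is easiest to see side by side. This sketch uses a hypothetical query function standing in for a real database client:

```typescript
// Hypothetical shapes for illustration.
interface Post { id: number; authorId: number }
type Query = (sql: string, params: unknown[]) => Promise<unknown[]>;

// N+1: one query for the posts, then one additional query per post for its author.
async function authorsNPlusOne(query: Query, posts: Post[]) {
  const authors = [];
  for (const post of posts) {
    authors.push(await query('SELECT * FROM users WHERE id = ?', [post.authorId]));
  }
  return authors;
}

// Batched: a single IN (...) query fetches every distinct author at once.
async function authorsBatched(query: Query, posts: Post[]) {
  const ids = [...new Set(posts.map((p) => p.authorId))];
  const placeholders = ids.map(() => '?').join(', ');
  return query(`SELECT * FROM users WHERE id IN (${placeholders})`, ids);
}
```

With 100 posts, the first version issues 101 round trips and the second issues 2; this is the kind of structural difference AI can flag from a query log alone.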
Try This Now: Debug an Error
Paste this error into an AI and ask it to explain the cause and suggest fixes:
TypeError: Cannot read properties of undefined (reading 'map')
at UserList (/app/src/components/UserList.tsx:15:23)
at renderWithHooks (react-dom.development.js:14985:18)
Context: This happens intermittently. The component fetches user data from an API.
Compare the AI's explanation to what you'd figure out on your own. How quickly did it identify the race condition?
AI for Documentation
The Pattern: Use AI to generate first drafts of documentation, then add your domain expertise. This flips documentation from a task you avoid into something that ships with every PR.
API Documentation from Code
Give AI your endpoint code and ask it to generate documentation:
Generate API documentation for this endpoint. Include:
- HTTP method and path
- Request parameters and body schema
- All response codes with example bodies
- Authentication requirements
- Example curl request
[paste endpoint code]
The AI reads the code and infers the request/response schemas, error conditions, and authentication requirements. You review and correct anything it got wrong (especially default values and edge cases in error handling).
README Generation
Describe your project's structure and features, and ask AI to generate a README with installation, quick start, API reference, and configuration sections. The result is a professional README that follows conventions (badges, table of contents, contributing guidelines) and would have taken an hour to write from scratch.
Architecture Decision Records (ADRs)
Tell AI the decision, context, and alternatives considered. It generates a properly structured ADR:
Create an ADR for this decision:
Decision: PostgreSQL over MongoDB
Context: Financial app, need ACID transactions, team has SQL experience
Alternatives: PostgreSQL (chosen), MongoDB, MySQL
The AI produces an ADR with Status, Context, Decision, Alternatives Considered (with pros/cons for each), Consequences (positive, negative, neutral), and review notes. This captures the "why" behind decisions that would otherwise be lost in Slack threads.
Continuous Documentation Workflow:
Every PR:
1. Code changes -> AI generates/updates docs
2. You review AI docs for accuracy
3. Docs ship with code (never outdated)
This workflow means documentation is never more than one PR behind. The key insight: AI makes the first draft so cheap that documentation stops being a task you skip.
AI-Augmented Workflows
The Code Review Workflow
Without AI: Open PR, read all files (20-30 min), run tests (5 min), write comments (15-20 min). Total: 45-60 minutes.
With AI: AI summarizes PR and flags issues (2 min), you focus review on flagged areas + business logic (10-15 min), AI helps draft comments (2 min), you refine and submit (5 min). Total: 20-25 minutes.
The Debugging Workflow
- Describe symptoms to AI.
- AI generates ranked hypotheses with verification steps.
- You gather data and share findings.
- AI narrows to root cause and suggests a fix.
- You fix and verify.
This works because AI can hold your entire error context in memory and pattern-match against thousands of similar bugs it has seen in training data.
The Sprint Planning Workflow
Break down this user story into technical tasks:
"As a user, I can reset my password via email"
Stack: [your tech stack]
For each task: description, complexity (S/M/L), dependencies, acceptance criteria.
AI produces a task breakdown that would take 30 minutes of discussion in 2 minutes. You adjust estimates based on your knowledge of the codebase.
Deep Dive (Optional, for Mastery)
Advanced Debugging Patterns
Multi-Layer Analysis: When debugging crosses system boundaries, provide AI with logs from multiple services:
I have an intermittent 503 error. Here are logs from three services:
API Gateway: [logs]
Auth Service: [logs]
User Service: [logs]
Correlate timestamps and identify where the failure originates.
The Debugging Workflow in Detail:
Here's how an AI-assisted debugging session flows:
Step 1: You describe the symptoms. "Users report 500 errors on /api/checkout. Started after yesterday's deployment. Only affects some users."
Step 2: AI generates hypotheses. "Based on symptoms, likely causes:
- New code has null pointer issue (80% likely)
- Database migration incomplete (15% likely)
- External service rate limiting (5% likely)
To verify, check: logs for stack trace, which users are affected (new vs. existing), deployment diff."
Step 3: You gather data and share it. "All affected users created after Jan 10. Stack trace shows null in UserProfile.address."
Step 4: AI identifies root cause. "Root cause: New users don't have address field. Yesterday's deploy assumes address exists. Fix: Add null check in checkout.ts:45. Prevention: Add migration to add default address."
Step 5: You fix and verify.
The speed gain comes from Steps 2 and 4: AI generates hypotheses and narrows root causes faster than manual investigation because it can pattern-match against thousands of similar bugs.
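The Step 4 fix might look like the sketch below. The UserProfile and Address shapes, function name, and fallback value are hypothetical, since the real checkout.ts isn't shown here:

```typescript
// Hypothetical shapes for the Step 4 fix; new users may lack `address`.
interface Address { street: string; city: string }
interface UserProfile { name: string; address?: Address }

// Before the fix, code like `profile.address.city` threw for users
// created after the Jan 10 change. The guard makes the missing field explicit.
function checkoutCity(profile: UserProfile): string {
  return profile.address?.city ?? 'ADDRESS_REQUIRED';
}
```

The prevention step from the AI's answer still matters: a migration that backfills a default address removes the inconsistent data, while the null check only stops the crash.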
Performance Profiling Interpretation: Paste flame graph summaries or profiler output and ask AI to identify the hotspots and suggest optimizations. AI is particularly good at recognizing N+1 query patterns, unnecessary serialization, and blocking I/O in async code.
Writing Effective Post-Mortems
Beyond the template, AI can help with the hardest parts:
Root Cause vs. Proximate Cause: "I've identified that the deployment caused the outage. Help me trace deeper: what allowed a problematic deployment to reach production? Apply the 5 Whys technique."
Actionable Action Items: "These are our proposed action items: [list]. For each, assess whether it addresses the root cause or just the symptom. Suggest improvements."
Building a Personal Workflow Library
Over time, build prompt templates for your recurring tasks:
| Task | Frequency | Template |
|---|---|---|
| PR description | Daily | CRISP template with diff input |
| Code review | Daily | Review template with severity levels |
| Bug investigation | Weekly | Debug template with hypothesis ranking |
| Sprint planning | Biweekly | Story breakdown template |
| Post-mortem | Monthly | Blameless post-mortem template |
| RFC | Quarterly | Architecture RFC template |
Save these in your notes app, a team wiki, or a personal prompts repository. The 30 minutes spent building templates saves hours every week.
Starter Scenarios (If You Don't Have a Codebase Yet)
If you're reading this book before starting a project, use these practice scenarios:
Scenario A: The Todo API. Imagine you're building a REST API for a todo list app with Node.js and PostgreSQL. Use AI to:
- Generate a PR description for adding a "mark complete" endpoint.
- Write a code review comment on a function that doesn't validate input.
- Draft an RFC for adding real-time updates via WebSockets.
Scenario B: The E-Commerce Cart. Imagine you're working on a Python Flask e-commerce app. Use AI to:
- Debug a "cart total is wrong" error from sample logs you create.
- Generate API documentation for a checkout endpoint.
- Break down "As a user, I can apply a discount code" into tasks.
These exercises work because the AI doesn't need your actual codebase; it generates plausible, educational examples from your descriptions.
When Things Go Wrong
AI produces plausible but wrong documentation: This is the biggest risk. AI-generated docs that look right but contain subtle errors are worse than no docs, because they create false confidence.
Mitigations:
- Always review generated docs against actual code behavior. Don't just skim; test the examples.
- Watch for invented parameters. AI may document function parameters or API fields that donât exist.
- Check default values. AI often guesses defaults rather than reading them from code.
- Verify error responses. AI tends to generate idealized error handling that may not match your actual implementation.
AI produces wrong debug hypotheses: AI ranks hypotheses by pattern frequency in its training data, not by your specific system. If your architecture is unusual, the most common cause isn't necessarily yours.
Mitigations:
- Provide as much system-specific context as possible.
- Treat AI hypotheses as a starting checklist, not a diagnosis.
- If the top hypothesis doesnât pan out, tell the AI what you found and ask it to re-rank.
AI-generated summaries miss critical details: When summarizing PRs, threads, or incidents, AI may omit the detail that matters most to your team.
Mitigations:
- Always read AI summaries alongside the source material, not instead of it.
- Tell the AI what to focus on: "Summarize this PR diff, focusing on security implications and breaking changes."
Measuring Your Productivity
Track AI usage for one week to understand where it helps most:
## Weekly AI Productivity Log
| Task | Time Without AI | Time With AI | Savings |
|------|-----------------|--------------|---------|
| PR descriptions | 15 min each | 3 min each | ~12 min each |
| Code review (per PR) | 45 min | 25 min | ~20 min |
| Debug session | 60 min | 30 min | ~30 min |
| Documentation | 2 hours | 30 min | ~90 min |
### What Worked Well
[note specific wins]
### What to Improve
[note where AI fell short]

Measure outcomes (cycle time, bug rate, documentation coverage), not AI usage itself. The goal is better engineering, not more AI.
Chapter Checkpoint
5-Bullet Summary:
- AI eliminates the blank-page problem for technical writing: PR descriptions, RFCs, post-mortems, and code review comments all benefit from AI-generated first drafts.
- For debugging, AI analyzes logs, interprets error messages, and generates ranked hypotheses faster than manual investigation.
- Documentation debt shrinks when AI generates first drafts from code, but you must review against actual behavior, not just skim.
- The biggest productivity gains come from integrating AI into repeatable workflows (code review, debugging, planning), not using it occasionally.
- AI is a multiplier for your expertise, not a replacement: it handles the mechanical parts while you apply domain knowledge and judgment.
You can now:
- Generate PR descriptions from diffs in under 2 minutes
- Write constructive code review comments with AI assistance
- Use AI for structured debugging (log analysis, error interpretation, hypothesis generation)
- Draft technical documents (RFCs, ADRs, post-mortems) with AI
- Design AI-augmented workflows for recurring engineering tasks
- Identify when AI documentation is plausible but wrong
Next up: Chapter 4's Quick Start walks you through installing Claude Code and running your first AI-assisted command in your own terminal, bringing everything from Chapters 1-3 directly into your development environment.
End of Chapter 3
PROMPT TO PRODUCTION