Chapter 16: Production Development Workflows
Learning Objectives
By the end of this chapter, you will be able to:
- Integrate Claude Code into CI/CD pipelines using GitHub Actions
- Automate PR reviews and code quality checks with Claude
- Apply production security fundamentals
- Choose appropriate deployment strategies
- Troubleshoot production issues with AI assistance
Quick Start: Claude Code PR Review Action (5-10 minutes)
Let’s set up a GitHub Action that uses Claude Code to automatically review pull requests.
Step 1: Create the workflow file
```yaml
# .github/workflows/claude-pr-review.yml
name: Claude Code PR Review

on:
  pull_request:
    types: [opened, synchronize]

permissions:
  contents: read
  pull-requests: write

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Get changed files
        id: changed
        run: |
          FILES=$(git diff --name-only origin/${{ github.base_ref }}...HEAD | head -50)
          echo "files<<EOF" >> $GITHUB_OUTPUT
          echo "$FILES" >> $GITHUB_OUTPUT
          echo "EOF" >> $GITHUB_OUTPUT

      - name: Claude Code Review
        uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Review the changes in this PR. Focus on:
            - Bug risks and logic errors
            - Security concerns
            - Performance issues
            - Code style and readability
            Provide actionable feedback as PR comments.
```

Step 2: Add your API key to GitHub Secrets
Go to your repository Settings > Secrets and variables > Actions > New repository secret. Add `ANTHROPIC_API_KEY` with your Anthropic API key.
Step 3: Open a PR and watch it work
Create a branch, make changes, and open a pull request. The action triggers automatically and posts review comments.
What just happened? You set up an automated AI code reviewer. Every PR now gets a first-pass review from Claude before a human looks at it. This catches bugs, security issues, and style problems early.
Try This Now: Create the workflow file in any repository you own. Push a small change (even a README edit) as a PR and verify the action runs. Check the Actions tab for logs if it fails.
Core Concepts (15-20 minutes)
16.1 CI/CD Integration with Claude Code
CI/CD (Continuous Integration/Continuous Deployment) automates the path from code commit to production. Claude Code fits into this pipeline at multiple points.
```
Developer writes code
        |
        v
Push to branch --> CI triggers
        |
        v
Claude reviews PR --> Automated tests run --> Build succeeds
        |
        v
Merge to main --> Deploy to staging --> Smoke tests pass
        |
        v
Deploy to production --> Monitor
```
A Complete GitHub Actions Pipeline
Here is a practical CI/CD workflow that integrates Claude Code for review alongside standard testing and deployment:
```yaml
# .github/workflows/ci-cd.yml
name: CI/CD Pipeline

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

env:
  NODE_VERSION: '20.x'

jobs:
  test:
    name: Test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Lint
        run: npm run lint

      - name: Type check
        run: npm run type-check

      - name: Unit tests
        run: npm test -- --coverage

      - name: Upload coverage
        uses: codecov/codecov-action@v4
        with:
          files: ./coverage/lcov.info

  build:
    name: Build
    runs-on: ubuntu-latest
    needs: test
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'

      - name: Install and build
        run: npm ci && npm run build

      - name: Upload build artifacts
        uses: actions/upload-artifact@v4
        with:
          name: build
          path: dist/

  deploy-staging:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    needs: build
    if: github.ref == 'refs/heads/develop'
    steps:
      - uses: actions/checkout@v4

      - name: Download build
        uses: actions/download-artifact@v4
        with:
          name: build
          path: dist/

      - name: Deploy to staging
        run: npm run deploy:staging
        env:
          STAGING_API_KEY: ${{ secrets.STAGING_API_KEY }}

      - name: Smoke tests
        run: npm run test:smoke
        env:
          BASE_URL: ${{ secrets.STAGING_URL }}

  deploy-production:
    name: Deploy to Production
    runs-on: ubuntu-latest
    needs: build
    if: github.ref == 'refs/heads/main'
    environment:
      name: production
      url: https://app.example.com
    steps:
      - uses: actions/checkout@v4

      - name: Download build
        uses: actions/download-artifact@v4
        with:
          name: build
          path: dist/

      - name: Deploy to production
        run: npm run deploy:production
        env:
          PRODUCTION_API_KEY: ${{ secrets.PRODUCTION_API_KEY }}

      - name: Smoke tests
        run: npm run test:smoke
        env:
          BASE_URL: ${{ secrets.PRODUCTION_URL }}
```

Key design decisions:
- Test runs first – nothing deploys if tests fail
- Build artifacts are shared between jobs (no rebuilding)
- Staging deploys from `develop`, production from `main`
- Smoke tests run after every deployment to catch environment-specific issues
- GitHub environments protect production with required approvals
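The smoke-test steps in the pipeline run `npm run test:smoke` against the freshly deployed environment. As a hedged sketch (the `checkHealth` helper and `/health` endpoint are assumptions, not part of the pipeline itself), a minimal smoke test is a single request against the deployed base URL:

```typescript
// Minimal smoke-test sketch. Assumes the deployed app exposes a /health
// endpoint; the fetcher is injectable so the check is testable offline.
type Fetcher = (url: string) => Promise<{ status: number }>;

export async function checkHealth(
  baseUrl: string,
  fetcher: Fetcher,
): Promise<boolean> {
  try {
    // A smoke test only asserts "the deploy is alive", nothing deeper.
    const res = await fetcher(`${baseUrl}/health`);
    return res.status === 200;
  } catch {
    // A network error means the deploy failed, not that the test crashed.
    return false;
  }
}
```

In CI this would be called with the real global `fetch` and `process.env.BASE_URL`, with the job exiting non-zero when it returns `false`.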
16.2 PR Automation with Claude Code
Beyond basic reviews, Claude Code can automate several PR-related tasks.
Automated PR Description Generation:
You: "Generate a PR description from the git diff"
Claude: Looking at your changes...
[Using Tool: Bash]
git diff main...HEAD --stat
git log main..HEAD --oneline
Based on 4 commits touching 8 files, here's your PR description:
## Summary
- Add rate limiting middleware to API endpoints
- Implement Redis-based token bucket algorithm
- Add integration tests for rate limiting behavior
## Changes
- `src/middleware/rateLimiter.ts` (new): Token bucket rate limiter
- `src/config/limits.ts` (new): Rate limit configuration per endpoint
- `tests/integration/rateLimiter.test.ts` (new): 12 test cases
- `src/app.ts` (modified): Register middleware
## Testing
- Unit tests: 15 new, all passing
- Integration tests: 12 new, all passing
- Manual testing: Verified 429 responses after limit exceeded
Automated Changelog Generation:
You: "What changed since the last release tag?"
Claude: Let me check the commits since the last tag.
[Using Tool: Bash]
git log v1.2.0..HEAD --oneline --no-merges
Here's the changelog:
### Added
- Rate limiting for API endpoints (#142)
- Health check endpoint (#138)
- Request logging middleware (#135)
### Fixed
- Memory leak in WebSocket handler (#141)
- Incorrect timezone in audit logs (#139)
### Changed
- Upgraded Express from 4.18 to 4.21 (#140)
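The changelog above groups commits into Added/Fixed/Changed sections. If your team uses conventional-commit prefixes (`feat:`, `fix:`, and so on), that grouping can be scripted deterministically, leaving Claude only the wording. A sketch under that assumption (the `categorize` helper is illustrative, not a real tool):

```typescript
// Hedged sketch: bucket conventional-commit subjects into changelog sections.
// Input lines look like `git log --oneline` output, e.g.
// "a3f4c2b feat: Add response caching layer".
const SECTIONS: Record<string, string> = {
  feat: 'Added',
  fix: 'Fixed',
  chore: 'Changed',
  refactor: 'Changed',
};

export function categorize(logLines: string[]): Map<string, string[]> {
  const out = new Map<string, string[]>();
  for (const line of logLines) {
    // Strip the short hash, then match the "type:" or "type(scope):" prefix.
    const subject = line.replace(/^[0-9a-f]+\s+/, '');
    const m = subject.match(/^(\w+)(\(.+\))?:\s*(.+)$/);
    const section = m ? SECTIONS[m[1]] ?? 'Changed' : 'Changed';
    const entry = m ? m[3] : subject;
    out.set(section, [...(out.get(section) ?? []), entry]);
  }
  return out;
}
```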
Claude Code as a CI Quality Gate:
You can use Claude Code in CI to enforce standards beyond what linters catch:
```yaml
# .github/workflows/quality-gate.yml
name: AI Quality Gate

on:
  pull_request:

jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Claude Code Quality Check
        uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: |
            Check this PR for:
            1. Functions longer than 50 lines
            2. Missing error handling in async code
            3. Hardcoded secrets or credentials
            4. Missing input validation on API endpoints
            If any issues found, comment on the specific lines.
```

16.3 Production Security Basics
Before deploying, run through these essential security checks. Ask Claude to audit your project:
You: "Audit this project for production security issues"
Claude: I'll check the critical areas.
[Using Tool: Bash]
grep -r "password\|secret\|api_key\|token" --include="*.ts" --include="*.js" -l
[Using Tool: Bash]
npm audit --audit-level=moderate
[Using Tool: Read]
Reading .env, .gitignore, package.json...
Security findings:
CRITICAL:
- src/config/db.ts line 12: Hardcoded database password
- .env file is not in .gitignore
HIGH:
- No rate limiting on /api/auth/login endpoint
- JWT tokens never expire (missing expiresIn)
- 3 npm vulnerabilities (2 high, 1 moderate)
MEDIUM:
- CORS allows all origins (Access-Control-Allow-Origin: *)
- No helmet.js for security headers
- Missing input validation on /api/users POST
Recommended fixes:
1. Move credentials to environment variables
2. Add .env to .gitignore immediately
3. Add rate limiting to auth endpoints
4. Set JWT expiration: jwt.sign(payload, secret, { expiresIn: '1h' })
5. Run npm audit fix
6. Restrict CORS to your domains
7. Add helmet middleware
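The grep in the audit above is a blunt instrument: it flags any file containing the word `password`, including legitimate variable names. A slightly smarter sketch (the patterns below are illustrative heuristics, not a complete scanner) looks for assignments of literal string values to credential-like names:

```typescript
// Hedged sketch: flag likely hardcoded credentials in source text.
// Matches `password = "..."`-style assignments with a literal value;
// reading a value from process.env is not flagged.
const LITERAL_ASSIGN =
  /(password|secret|api[_-]?key|token)\w*\s*[:=]\s*['"][^'"]+['"]/i;

export function findHardcodedSecrets(source: string): string[] {
  const hits: string[] = [];
  for (const [i, line] of source.split('\n').entries()) {
    if (line.includes('process.env')) continue; // env lookup, not a literal
    if (LITERAL_ASSIGN.test(line)) hits.push(`line ${i + 1}: ${line.trim()}`);
  }
  return hits;
}
```

Heuristics like this produce false positives and negatives; treat them as a pre-commit tripwire, not a substitute for the full audit.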
Production Security Checklist (keep this handy):
16.4 Deployment Strategies Overview
When deploying to production, you have several options. Here is a quick comparison:
Rolling Deployment (simplest):
Replace instances one by one
Old: [v1] [v1] [v1] [v1]
-> [v2] [v1] [v1] [v1]
-> [v2] [v2] [v1] [v1]
-> [v2] [v2] [v2] [v1]
-> [v2] [v2] [v2] [v2]
Pros: Simple, no extra infrastructure
Cons: Both versions run simultaneously during rollout
Blue-Green Deployment (zero-downtime switch):
Blue (current): [v1] [v1] [v1] [v1] <-- 100% traffic
Green (new): [v2] [v2] [v2] [v2] <-- 0% traffic
After testing green:
Blue: [v1] [v1] [v1] [v1] <-- 0% traffic
Green: [v2] [v2] [v2] [v2] <-- 100% traffic (switch!)
Pros: Instant rollback (switch back), test before traffic hits
Cons: Requires double infrastructure
Canary Deployment (gradual rollout):
Stage 1: v1 gets 95% traffic, v2 gets 5% -- monitor
Stage 2: v1 gets 75% traffic, v2 gets 25% -- monitor
Stage 3: v1 gets 50% traffic, v2 gets 50% -- monitor
Stage 4: v2 gets 100% traffic -- done
Pros: Catch issues before they affect all users
Cons: More complex routing and monitoring needed
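The staged percentages in a canary rollout come down to one routing decision per request. A sketch of how a load balancer or edge function might make that choice (the hashing scheme here is an illustrative assumption; a real canary system would also handle cohort pinning and rollback):

```typescript
// Hedged sketch: deterministic canary routing by user ID.
// Hashing the ID (instead of Math.random) keeps each user on the
// same version across requests during a rollout stage.
export function routeVersion(
  userId: string,
  canaryPercent: number, // e.g. 5 for Stage 1 above
): 'v1' | 'v2' {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return hash % 100 < canaryPercent ? 'v2' : 'v1';
}
```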
For most teams starting out, a rolling deployment with smoke tests is sufficient. Move to blue-green or canary as your infrastructure and monitoring mature.
Try This Now: Ask Claude Code to create a GitHub Actions workflow for your current project. Start with just the test and build jobs. Once those work, add deployment.
Deep Dive (Optional)
16.5 Production Troubleshooting with Claude
When production breaks, speed matters. Claude Code helps you investigate faster.
Scenario: High API latency after a deploy
You: "Our API response times jumped from 50ms to 800ms after today's deploy.
Help me investigate."
Claude: Let me check what changed in today's deploy.
[Using Tool: Bash]
git log --oneline --since="8 hours ago"
3 commits today:
a3f4c2b feat: Add response caching layer
b7e8d9f fix: Update database query for user search
c1a2b3c refactor: Extract validation middleware
Let me look at the caching change first -- caching bugs are a
common cause of latency spikes.
[Using Tool: Read]
src/middleware/cache.ts
Found it. Line 23:
const data = await fetchFromDatabase(key);
await cache.set(key, data, 3600); // <-- Blocking!
return data;
The cache.set() call is awaited, meaning every cache miss
waits for Redis to confirm the write before responding.
With Redis latency of ~5ms and high traffic, this adds up.
Fix:
const data = await fetchFromDatabase(key);
cache.set(key, data, 3600); // Fire and forget
return data;
Also checking the database query change...
[Using Tool: Read]
src/queries/userSearch.ts
This query is missing an index. Line 15:
WHERE LOWER(name) LIKE '%${term}%'
This forces a full table scan. Add an index:
CREATE INDEX idx_users_name_lower ON users (LOWER(name));
Two issues found. The cache fix is the primary cause.
Want me to create a hotfix branch?
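The fire-and-forget fix from the transcript can be wrapped in a small helper so the pattern is applied consistently. A sketch with an injectable cache (the `Cache` interface is a hypothetical stand-in for a Redis client):

```typescript
// Hedged sketch: read-through cache that never blocks the response
// on the cache write. `Cache` stands in for a Redis-style client.
interface Cache {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

export async function readThrough(
  cache: Cache,
  key: string,
  loader: () => Promise<string>,
): Promise<string> {
  const cached = await cache.get(key);
  if (cached !== undefined) return cached;

  const data = await loader();
  // Fire and forget: don't await the write, but do swallow its errors
  // so a Redis hiccup can't surface as an unhandled rejection.
  void cache.set(key, data, 3600).catch(() => {});
  return data;
}
```

The `.catch(() => {})` matters: a bare un-awaited promise that rejects can still crash a Node process.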
16.6 Environment Configuration
Managing multiple environments (development, staging, production) is critical for safe deployments.
You: "Set up environment configuration that works across dev, staging, and production"
Claude: I'll create a typed configuration system.
[Using Tool: Write]
File: src/config/index.ts
```typescript
interface AppConfig {
  api: { baseUrl: string; timeout: number };
  database: { host: string; port: number; ssl: boolean };
  logging: { level: 'debug' | 'info' | 'warn' | 'error' };
}

const configs: Record<string, AppConfig> = {
  development: {
    api: { baseUrl: 'http://localhost:3000', timeout: 10000 },
    database: { host: 'localhost', port: 5432, ssl: false },
    logging: { level: 'debug' },
  },
  staging: {
    api: { baseUrl: 'https://staging.example.com', timeout: 10000 },
    database: { host: process.env.DB_HOST!, port: 5432, ssl: true },
    logging: { level: 'info' },
  },
  production: {
    api: { baseUrl: 'https://api.example.com', timeout: 5000 },
    database: { host: process.env.DB_HOST!, port: 5432, ssl: true },
    logging: { level: 'error' },
  },
};

export function getConfig(): AppConfig {
  const env = process.env.NODE_ENV || 'development';
  const config = configs[env];
  if (!config) throw new Error(`Unknown environment: ${env}`);
  return config;
}
```
Key principle: Never hardcode environment-specific values. Use environment variables for anything that changes between environments (database hosts, API keys, feature flags).
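One failure mode the config above leaves open: `process.env.DB_HOST!` asserts the variable exists without checking it. A fail-fast guard catches a missing secret at startup instead of on the first query (the `requireEnv` helper is an illustrative addition, not part of the config file above):

```typescript
// Hedged sketch: validate required environment variables at startup
// so a missing secret fails the deploy immediately, not at runtime.
export function requireEnv(names: string[]): Record<string, string> {
  const missing = names.filter((n) => !process.env[n]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
  return Object.fromEntries(names.map((n) => [n, process.env[n] as string]));
}
```

Call something like `requireEnv(['DB_HOST'])` once at boot, before any config is read.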
When Things Go Wrong
Problem: CI action fails silently – no error, no comment, just a green check
This usually means the action ran but the Claude Code step was skipped or errored without failing the job. Check these:
- Missing API key: The `ANTHROPIC_API_KEY` secret is not set or misspelled. Check Settings > Secrets.
- Permission issue: The workflow lacks `pull-requests: write` permission. Add the `permissions` block shown in the Quick Start.
- Step didn't fail the job: `continue-on-error: false` is the default; check that you are not accidentally setting `continue-on-error: true`.
Problem: Claude posts a wrong or irrelevant PR comment
Claude reviews code without full project context. To improve accuracy:
- Add a `CLAUDE.md` file to your repo root with project conventions, architecture notes, and coding standards.
- Narrow the review prompt – instead of "review everything," specify what matters: "Check for SQL injection, missing error handling, and functions over 50 lines."
- Use `fetch-depth: 0` in the checkout step so Claude can see the full diff context.
Problem: Deployment script has errors in CI but works locally
Common causes:
- Missing environment variables: CI doesn't have your local `.env`. Add them to GitHub Secrets.
- Different Node/tool versions: Pin versions in your workflow with `node-version: '20.x'` rather than `latest`.
- File permissions: Scripts may not be executable. Add `chmod +x scripts/deploy.sh` before running them.
- Network differences: CI runs in a datacenter. Internal services may be unreachable. Check firewall rules and VPN requirements.
Problem: Rate limiting in CI – API calls to Anthropic fail with 429 errors
Claude Code API calls in CI are subject to rate limits, especially if you have many PRs open simultaneously.
- Add retry logic: Use a retry action in GitHub Actions or wrap your Claude step with backoff.
- Limit triggers: Only run the AI review on certain file types: `paths: ['src/**', 'lib/**']`.
- Use concurrency groups: Prevent parallel runs on the same PR:

```yaml
concurrency:
  group: claude-review-${{ github.event.pull_request.number }}
  cancel-in-progress: true
```
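The backoff wrapper mentioned above can be sketched generically (this helper is illustrative, not a built-in of GitHub Actions or the Anthropic SDK):

```typescript
// Hedged sketch: retry an async call with exponential backoff,
// e.g. around an API call that can return 429.
export async function withBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 1000,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts) throw err;
      // Double the wait on each failure: 1s, 2s, 4s, ...
      const delay = baseDelayMs * 2 ** (attempt - 1);
      await new Promise((r) => setTimeout(r, delay));
    }
  }
}
```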
Chapter Checkpoint
What you learned in this chapter:
- How to set up GitHub Actions for CI/CD with automated testing and deployment
- How to use Claude Code as an automated PR reviewer in your pipeline
- Production security essentials: secrets management, input validation, rate limiting
- Deployment strategy trade-offs: rolling vs. blue-green vs. canary
- How to troubleshoot production issues using Claude Code for fast root cause analysis
Competency checklist – you should be able to:
You've completed Chapter 16: Production Development Workflows.
Next: Chapter 17 – Plugins and the Marketplace
PROMPT TO PRODUCTION