CI/CD Pipeline Prompt Templates
AI prompt templates for CI/CD pipelines. Set up automated testing, building, and deployment.
Overview
CI/CD pipelines automate the path from code commit to production deployment. These prompts help you design pipelines that catch issues early, deploy reliably, and provide fast feedback to developers. The goal is automation you can trust: every merge to main should be confidently deployable.
Best Practices
Specify your CI/CD platform (GitHub Actions, GitLab CI, Jenkins, etc.); syntax differs significantly between platforms
Describe your deployment targets (AWS, Kubernetes, serverless, etc.) for relevant deployment steps
Include your testing strategy (unit, integration, e2e) so the pipeline runs appropriate tests
Mention any secrets or credentials needed so they're properly handled
Describe your branching strategy to align pipeline triggers correctly
Prompt Templates
1. CI/CD Pipeline Design
Design a CI/CD pipeline for this project:

**Project type:** [WEB APP / API / LIBRARY / MOBILE]
**Tech stack:** [LANGUAGES, FRAMEWORKS]
**CI/CD platform:** [GITHUB ACTIONS / GITLAB CI / JENKINS / CIRCLE CI]

**Branching strategy:**
- Main branch: [DEPLOYMENT TARGET]
- Feature branches: [WORKFLOW]
- Release process: [HOW RELEASES WORK]

**Testing requirements:**
- [TEST TYPE 1 - e.g. unit tests]
- [TEST TYPE 2 - e.g. integration tests]
- [TEST TYPE 3 - e.g. e2e tests]

**Deployment targets:**
- Staging: [WHERE/HOW]
- Production: [WHERE/HOW]

**Additional requirements:**
- [REQUIREMENT 1 - e.g. security scanning]
- [REQUIREMENT 2 - e.g. performance tests]

**Provide:**
1. Pipeline configuration file
2. Stage/job breakdown
3. Caching strategy
4. Secret management approach
5. Failure handling and notifications
Project: Node.js REST API
Stack: TypeScript, Express, PostgreSQL
Platform: GitHub Actions
Branching: GitHub Flow (main = production, feature branches for development)
Testing: Jest unit tests, Supertest integration tests, no e2e
Deployment: AWS ECS (staging auto-deploy, prod manual approval)
Additional: npm audit for security, Docker build
## CI/CD Pipeline Design
### Pipeline Overview
```
Push to feature branch → Build → Test → Security Scan
Push to main           → Build → Test → Security Scan → Deploy Staging
Manual approval        → Deploy Production
```
### GitHub Actions Configuration
```yaml
# .github/workflows/ci-cd.yml
name: CI/CD Pipeline

on:
  push:
    branches: [main, 'feature/**']
  pull_request:
    branches: [main]

env:
  NODE_VERSION: '20'
  AWS_REGION: us-east-1
  ECR_REPOSITORY: my-api
  ECS_CLUSTER: my-cluster
  ECS_SERVICE_STAGING: api-staging
  ECS_SERVICE_PROD: api-production

jobs:
  # ==================== BUILD ====================
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Build
        run: npm run build
      - name: Upload build artifacts
        uses: actions/upload-artifact@v4
        with:
          name: dist
          path: dist/
          retention-days: 1

  # ==================== TEST ====================
  test-unit:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - run: npm ci
      - name: Run unit tests
        run: npm run test:unit -- --coverage
      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          files: coverage/lcov.info

  test-integration:
    runs-on: ubuntu-latest
    needs: build
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: test
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: 'npm'
      - run: npm ci
      - name: Run database migrations
        run: npm run db:migrate
        env:
          DATABASE_URL: postgres://postgres:postgres@localhost:5432/test
      - name: Run integration tests
        run: npm run test:integration
        env:
          DATABASE_URL: postgres://postgres:postgres@localhost:5432/test

  # ==================== SECURITY ====================
  security:
    runs-on: ubuntu-latest
    needs: build
    steps:
      - uses: actions/checkout@v4
      - name: Run npm audit
        run: npm audit --audit-level=high
      - name: Run Snyk security scan
        uses: snyk/actions/node@master
        continue-on-error: true  # Don't block on warnings
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}

  # ==================== DOCKER BUILD ====================
  docker:
    runs-on: ubuntu-latest
    needs: [test-unit, test-integration, security]
    if: github.ref == 'refs/heads/main'
    outputs:
      image-tag: ${{ steps.meta.outputs.tags }}
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2
      - name: Build and push Docker image
        id: meta
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "tags=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT

  # ==================== DEPLOY STAGING ====================
  deploy-staging:
    runs-on: ubuntu-latest
    needs: docker
    environment: staging
    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Deploy to ECS Staging
        # Note: --force-new-deployment redeploys the service's current task
        # definition; to pin the new ${{ github.sha }} tag, register a new
        # task definition revision first.
        run: |
          aws ecs update-service \
            --cluster ${{ env.ECS_CLUSTER }} \
            --service ${{ env.ECS_SERVICE_STAGING }} \
            --force-new-deployment
      - name: Wait for deployment
        run: |
          aws ecs wait services-stable \
            --cluster ${{ env.ECS_CLUSTER }} \
            --services ${{ env.ECS_SERVICE_STAGING }}

  # ==================== DEPLOY PRODUCTION ====================
  deploy-production:
    runs-on: ubuntu-latest
    needs: deploy-staging
    environment: production  # Requires manual approval
    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Deploy to ECS Production
        run: |
          aws ecs update-service \
            --cluster ${{ env.ECS_CLUSTER }} \
            --service ${{ env.ECS_SERVICE_PROD }} \
            --force-new-deployment
      - name: Wait for deployment
        run: |
          aws ecs wait services-stable \
            --cluster ${{ env.ECS_CLUSTER }} \
            --services ${{ env.ECS_SERVICE_PROD }}
      - name: Notify Slack
        uses: slackapi/slack-github-action@v1
        with:
          payload: |
            {
              "text": "Deployed to production: ${{ github.sha }}"
            }
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK }}
```
### Secrets Required
| Secret | Purpose |
|--------|--------|
| AWS_ACCESS_KEY_ID | AWS deployment |
| AWS_SECRET_ACCESS_KEY | AWS deployment |
| SNYK_TOKEN | Security scanning |
| SLACK_WEBHOOK | Deployment notifications |
### Environments Setup
In GitHub repo settings → Environments:
1. **staging**: No protection rules (auto-deploy)
2. **production**: Required reviewers (1 approval needed)

**When to use:** When setting up CI/CD for new projects, improving existing pipelines, or migrating between CI/CD platforms.
- Run tests in parallel where possible; unit and integration tests don't depend on each other
- Use GitHub Environments with protection rules for production deployment approval
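For faster feedback on pull requests, GitHub Actions can also cancel superseded runs of the same branch with a top-level `concurrency` block; a minimal sketch (the group name is illustrative):

```yaml
# Cancel in-progress runs when a newer commit is pushed to the same ref,
# so reviewers always get feedback on the latest code rather than queueing
# behind stale builds.
concurrency:
  group: ci-${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```

Production deploy jobs may warrant their own non-cancelling concurrency group so an in-flight deployment is never interrupted.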
2. Pipeline Optimization
Improve this CI/CD pipeline for speed:

```yaml
[PASTE CURRENT PIPELINE CONFIG]
```

**Current pain points:**
- Total time: [CURRENT DURATION]
- Slowest stages: [WHICH STAGES]

**Constraints:**
- [WHAT CAN'T CHANGE]
- [BUDGET LIMITS IF ANY]

**Improve for:**
1. Faster feedback on PRs
2. Efficient resource usage
3. Better caching
4. Parallelization opportunities
Platform: GitHub Actions
Current time: 25 minutes
Slowest: npm install (4 min), docker build (8 min), e2e tests (10 min)
Constraints: Can't skip tests, need docker image
Current: All stages run sequentially
## Pipeline Optimization Analysis
### Current Bottlenecks
| Stage | Time | Issue |
|-------|------|-------|
| npm install | 4 min | No caching |
| Docker build | 8 min | No layer caching |
| E2E tests | 10 min | Sequential, single runner |
| **Total** | **25 min** | Sequential execution |
### Improved Pipeline
**Target: 25 min → ~8 min**
```yaml
name: Improved CI/CD

on:
  push:
    branches: [main]
  pull_request:

jobs:
  # Cache dependencies once, reuse everywhere
  setup:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Cache node_modules
        id: cache
        uses: actions/cache@v4
        with:
          path: node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
      - if: steps.cache.outputs.cache-hit != 'true'
        run: npm ci

  # Run tests in parallel
  test-unit:
    needs: setup
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - uses: actions/cache@v4
        with:
          path: node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
      - run: npm run test:unit

  test-integration:
    needs: setup
    runs-on: ubuntu-latest
    # ... (same caching steps; runs in parallel with unit tests)

  # Split E2E tests across multiple runners
  test-e2e:
    needs: setup
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        shard: [1, 2, 3, 4]  # 4 parallel runners
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - uses: actions/cache@v4
        with:
          path: node_modules
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
      - name: Run E2E shard ${{ matrix.shard }}
        run: npm run test:e2e -- --shard=${{ matrix.shard }}/4

  # Docker with BuildKit caching
  docker:
    needs: [test-unit, test-integration, test-e2e]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to ECR
        # Assumes AWS credentials were configured in a previous step
        uses: aws-actions/amazon-ecr-login@v2
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          # ECR_REGISTRY and REPO are assumed to be defined in the workflow env
          tags: ${{ env.ECR_REGISTRY }}/${{ env.REPO }}:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
```
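The `hashFiles('**/package-lock.json')` cache key works because the key is simply a digest of the lockfile: an unchanged lockfile reproduces the same key (cache hit), and any dependency change produces a new one (cache miss). A minimal sketch of the idea in plain shell (file contents are invented for illustration):

```shell
# Simulate a hashFiles-style cache key: a digest of the lockfile content.
set -eu
LOCK=$(mktemp)
printf '{"lockfileVersion": 3}\n' > "$LOCK"

key_for() { printf 'linux-node-%s' "$(sha256sum "$1" | cut -d' ' -f1)"; }

KEY1=$(key_for "$LOCK")
KEY2=$(key_for "$LOCK")   # unchanged lockfile -> identical key (cache hit)

printf '{"lockfileVersion": 4}\n' > "$LOCK"
KEY3=$(key_for "$LOCK")   # changed lockfile -> different key (cache miss)

[ "$KEY1" = "$KEY2" ] && echo "cache hit: same key"
[ "$KEY1" != "$KEY3" ] && echo "cache miss: key changed"
```

This is also why the same `key:` expression in every job resolves to the same cache entry without any job-to-job coordination.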
### Optimization Summary
| Technique | Before | After |
|-----------|--------|-------|
| npm dependency cache | 4 min install | ~30 sec |
| Parallel test jobs + E2E sharding (4x) | ~14 min sequential | ~4 min |
| Docker BuildKit layer cache | 8 min build | ~2 min |
| **Wall-clock total** | **25 min** | **~8 min** |

Per-stage savings overlap once jobs run in parallel, so the wall-clock total reflects the longest path through the pipeline rather than the sum of individual savings.
### Dockerfile Optimization
```dockerfile
# Multi-stage build with better layer caching: dependency layers are
# rebuilt only when package files change
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-alpine AS runtime
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
CMD ["node", "dist/index.js"]
```

**When to use:** When CI/CD pipelines are too slow, blocking developer productivity, or costing too much in compute time.
- E2E test sharding is often the biggest win; 10 min of tests across 4 runners is roughly 2.5 min
- Docker BuildKit's GitHub Actions cache can significantly speed up image builds
3. Deployment Strategy
Design a deployment strategy for this application:

**Application:** [DESCRIPTION]
**Infrastructure:** [WHERE IT RUNS]
**Current state:** [HOW DEPLOYS WORK NOW]

**Requirements:**
- Downtime tolerance: [ZERO / BRIEF / MAINTENANCE WINDOW OK]
- Rollback speed: [HOW FAST YOU NEED TO ROLLBACK]
- Traffic patterns: [STEADY / SPIKY / REGIONAL]

**Risk factors:**
- [RISK 1 - e.g. database migrations]
- [RISK 2 - e.g. external dependencies]

**Provide:**
1. Deployment strategy recommendation
2. Implementation steps
3. Rollback procedure
4. Monitoring and validation
Application: E-commerce API handling payments
Infrastructure: Kubernetes on AWS EKS
Current: Replace all pods at once (causes brief errors)
Downtime tolerance: Zero (payments must not fail)
Rollback speed: Under 2 minutes
Risk factors: Database schema changes, Stripe webhook compatibility
## Deployment Strategy: Blue-Green with Canary
### Why This Strategy
- **Zero downtime**: New version runs alongside old
- **Fast rollback**: Instant switch back to old version
- **Payment safety**: Test with real traffic before full cutover
### Architecture
During the canary phase, the new version (green) takes a small slice of live traffic while blue remains the stable majority; green's weight is then raised step by step to 100%:
```
        ┌─────────────────────┐
        │     Ingress/ALB     │
        └──────────┬──────────┘
                   │
         ┌─────────┴─────────┐
         │                   │
         ▼                   ▼
   ┌──────────┐        ┌──────────┐
   │   Blue   │        │  Green   │
   │ (stable) │        │ (canary) │
   │   90%    │        │   10%    │
   └──────────┘        └──────────┘
```
### Kubernetes Implementation
```yaml
# deployment-blue.yaml (current stable)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-blue
  labels:
    app: api
    version: blue
spec:
  replicas: 5
  selector:
    matchLabels:
      app: api
      version: blue
  template:
    metadata:
      labels:
        app: api
        version: blue
    spec:
      containers:
        - name: api
          image: myregistry/api:v1.2.3  # Current version
          ports:
            - containerPort: 3000
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
          resources:
            requests:
              memory: "256Mi"
              cpu: "200m"
---
# deployment-green.yaml (new version)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-green
  labels:
    app: api
    version: green
spec:
  replicas: 5
  selector:
    matchLabels:
      app: api
      version: green
  template:
    metadata:
      labels:
        app: api
        version: green
    spec:
      containers:
        - name: api
          image: myregistry/api:v1.2.4  # New version
          # ... same config as blue
---
# Service - no version in the selector, so both deployments receive traffic
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 3000
---
# Istio DestinationRule defining the blue/green subsets referenced below
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: api
spec:
  host: api
  subsets:
    - name: blue
      labels:
        version: blue
    - name: green
      labels:
        version: green
---
# Istio VirtualService for traffic control
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: api
spec:
  hosts:
    - api
  http:
    - route:
        - destination:
            host: api
            subset: blue
          weight: 90
        - destination:
            host: api
            subset: green
          weight: 10  # Canary traffic
```
### Deployment Procedure
**Phase 1: Deploy Green (0% traffic)**
```bash
# Deploy new version, not receiving traffic yet
kubectl apply -f deployment-green.yaml
# Wait for all pods ready
kubectl rollout status deployment/api-green
# Verify health
kubectl exec deploy/api-green -- curl -s localhost:3000/health
```
**Phase 2: Canary (10% traffic)**
```bash
# Route 10% of traffic to green
kubectl apply -f virtualservice-canary.yaml
# Monitor for 15 minutes
# - Error rate: Should stay < 0.1%
# - Latency p99: Should stay < 200ms
# - Payment success rate: Should stay > 99.5%
```
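The monitoring thresholds above can be turned into an automated promotion gate. A toy sketch with hypothetical request counts; in a real pipeline these numbers would come from your metrics backend (e.g. Prometheus):

```shell
# Decide whether to promote the canary based on its observed error rate.
set -eu
REQUESTS=20000        # hypothetical canary request count
ERRORS=12             # hypothetical canary error count

# Work in basis points (1 bp = 0.01%) to avoid floating point in sh
RATE_BP=$(( ERRORS * 10000 / REQUESTS ))   # 12 / 20000 = 6 bp = 0.06%
THRESHOLD_BP=10                            # 0.1% error budget = 10 bp

if [ "$RATE_BP" -lt "$THRESHOLD_BP" ]; then
  DECISION=promote
else
  DECISION=rollback
fi
echo "canary error rate: ${RATE_BP} bp -> ${DECISION}"
```

Encoding the gate as a script keeps promote/rollback decisions consistent instead of relying on a human eyeballing dashboards for 15 minutes.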
**Phase 3: Gradual Rollout**
```bash
# If canary healthy: 10% → 50% → 100%
kubectl patch virtualservice api --type=merge -p '
{"spec":{"http":[{"route":[
{"destination":{"host":"api","subset":"blue"},"weight":50},
{"destination":{"host":"api","subset":"green"},"weight":50}
]}]}}'
# Monitor, then...
kubectl patch virtualservice api --type=merge -p '
{"spec":{"http":[{"route":[
{"destination":{"host":"api","subset":"green"},"weight":100}
]}]}}'
```
**Phase 4: Cleanup**
```bash
# After 24 hours of stability, remove blue
kubectl delete deployment api-blue
# Pod selectors are immutable, so don't relabel green to blue; instead,
# alternate roles: ship the next release as api-blue and shift traffic back.
```
### Rollback Procedure
**Instant rollback (< 30 seconds):**
```bash
# Switch all traffic back to blue
kubectl patch virtualservice api --type=merge -p '
{"spec":{"http":[{"route":[
{"destination":{"host":"api","subset":"blue"},"weight":100}
]}]}}'
```
### Database Migration Safety
```
1. Migrations must be backward compatible
2. Deploy order:
   a. Run the migration (additive only: new columns, new tables)
   b. Deploy new code that uses the new schema
   c. After it is stable, clean up old columns (separate deploy)

Never: drop columns, rename columns, or change types in the same deploy as the code that depends on the change
```

**When to use:** When planning deployment strategies for production systems, especially those with zero-downtime requirements.
- Canary deployments catch issues that don't appear in staging; real traffic reveals real problems
- Database migrations and code deploys should be separate, backward-compatible steps
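As an illustration of the additive expand/contract pattern, here is a hypothetical rename of `users.name` to `users.full_name` split across three deploys (table and column names are invented for the example):

```sql
-- Deploy 1 (expand): additive change only; old code keeps working
ALTER TABLE users ADD COLUMN full_name TEXT;
UPDATE users SET full_name = name WHERE full_name IS NULL;

-- Deploy 2: ship application code that reads and writes full_name
-- (no schema change in this deploy)

-- Deploy 3 (contract): only after the new code has been stable
ALTER TABLE users DROP COLUMN name;
```

Each step is individually reversible, so a rollback of the code never meets a schema it cannot read.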
4. Pipeline Troubleshooting
Help me troubleshoot this CI/CD pipeline failure:

**Pipeline config:**
```yaml
[RELEVANT PIPELINE CONFIG]
```

**Error output:**
```
[PASTE ERROR LOGS]
```

**Context:**
- When it happens: [ALWAYS / INTERMITTENT / SPECIFIC CONDITIONS]
- Recent changes: [WHAT CHANGED BEFORE IT STARTED FAILING]
- Works locally: [YES / NO]

**Diagnose:**
1. Root cause of the failure
2. Fix for the immediate issue
3. How to prevent this in the future
Platform: GitHub Actions
Error: npm install fails with EACCES permission denied
When: Started failing after adding a postinstall script
Works locally: Yes
Recent changes: Added husky for git hooks in postinstall
## Pipeline Failure Analysis
### Root Cause
The `EACCES` error comes from Husky's postinstall hook, which runs `husky install` and tries to create the `.git/hooks` directory. It fails because:
1. In CI, the checked-out workspace's `.git` directory can be owned by a different user than the one running `npm ci` (container jobs often run as root)
2. Husky's postinstall tries to write to `.git/hooks`
3. The `mkdir` is denied, so `npm ci` exits with `EACCES`
### Error Breakdown
```
npm ERR! code EACCES
npm ERR! syscall mkdir
npm ERR! path /github/workspace/.git/hooks
npm ERR! errno -13
```
Husky's postinstall runs `husky install` which tries to create `.git/hooks` directory with restricted permissions.
### Immediate Fix
**Option 1: Skip Husky in CI (Recommended)**
```yaml
# .github/workflows/ci.yml
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-node@v4
    with:
      node-version: '20'
  - name: Install dependencies
    run: npm ci
    env:
      HUSKY: 0  # Disables husky postinstall
```
**Option 2: Update package.json**
```json
{
  "scripts": {
    "postinstall": "is-ci || husky install"
  },
  "devDependencies": {
    "is-ci": "^3.0.1"
  }
}
```
**Option 3: Update Husky config (Husky v9+)**
```json
// package.json (comment for illustration; not valid JSON)
{
  "scripts": {
    "prepare": "husky || true"
  }
}
```
The `|| true` ensures the script doesn't fail if husky can't initialize.
### Prevention
1. **Test CI locally** before pushing:
```bash
act push # Run GitHub Actions locally with 'act'
```
2. **Guard postinstall scripts against CI** (most CI providers set the `CI` environment variable):
```json
{
  "scripts": {
    "postinstall": "[ -n \"$CI\" ] || husky install"
  }
}
```
3. **Document CI-specific behavior**:
```markdown
## CI Notes
- Husky is disabled in CI (HUSKY=0)
- Git hooks run locally only; lint-staged checks are enforced by CI jobs
```
### Why This Works Locally But Not in CI
| Environment | Permissions | Git hooks needed? |
|-------------|-------------|-------------------|
| Local | Your user owns `.git` | Yes (pre-commit checks) |
| CI | Workspace ownership may be restricted | No (separate lint job) |
Git hooks are for the local development workflow. CI should run the same checks via dedicated pipeline jobs, not git hooks.

**When to use:** When pipelines fail unexpectedly, especially after working previously or when errors are unclear.
- Most CI failures after "nothing changed" are dependency-related; check for unpinned versions
- Use `act` to run GitHub Actions locally for faster debugging
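Since git hooks are skipped in CI, the same pre-commit checks should run as a dedicated job so no coverage is lost. A minimal sketch (the `lint` and `format:check` script names are assumed conventions, not part of the original example):

```yaml
# CI job that enforces what the local pre-commit hook checks,
# so disabling husky in CI loses nothing.
lint:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version: '20'
        cache: 'npm'
    - run: npm ci
      env:
        HUSKY: 0  # skip git-hook setup in CI
    - run: npm run lint
    - run: npm run format:check
```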
Common Mistakes to Avoid
- Running all stages sequentially when many can run in parallel
- Not caching dependencies; rebuilding node_modules on every run wastes minutes
- Having no rollback plan; you need to undo a bad deploy faster than you can fix forward
Related Templates
Code Review Prompt Templates
AI prompt templates for thorough code reviews. Get comprehensive feedback on code quality, security, and best practices.
Debugging Prompt Templates
AI prompt templates for debugging code. Identify issues, understand errors, and find solutions faster.
Code Documentation Prompt Templates
AI prompt templates for writing code documentation. Create clear comments, READMEs, and API docs.