Docker & Kubernetes Prompt Templates
AI prompt templates for Docker and Kubernetes. Containerize applications and manage orchestration.
Overview
Containers package applications with their dependencies for consistent deployment anywhere. These prompts help you create efficient Docker images, configure Kubernetes deployments, and troubleshoot container issues. The goal is containers that are secure, efficient, and production-ready.
Best Practices
Specify your application's runtime requirements (language, dependencies, ports) for accurate Dockerfiles
Mention your deployment environment (local, cloud, Kubernetes) for appropriate configurations
Include security requirements; containers need to be hardened for production
Describe your scaling needs so resource configurations are appropriate
Share existing configurations when troubleshooting for accurate diagnosis
Prompt Templates
1. Dockerfile Generator
Create an improved Dockerfile for this application:

**Application:**
- Language/Runtime: [LANGUAGE AND VERSION]
- Framework: [FRAMEWORK IF ANY]
- Build command: [HOW TO BUILD]
- Start command: [HOW TO RUN]

**Dependencies:**
- [BUILD-TIME DEPENDENCIES]
- [RUNTIME DEPENDENCIES]

**Requirements:**
- Base image preference: [ALPINE / DEBIAN / DISTROLESS / etc.]
- Multi-stage build: [YES / NO]
- Final image size target: [IF ANY]
- Security: [ROOT / NON-ROOT USER]

**Files to include:**
- [SOURCE CODE PATHS]
- [CONFIG FILES]

**Provide:**
1. Improved Dockerfile with comments
2. .dockerignore file
3. Build and run commands
4. Image size estimate
Application: Node.js/TypeScript API
Runtime: Node.js 20
Framework: Express
Build: npm run build (compiles TypeScript)
Start: node dist/index.js
Dependencies: npm packages (some have native modules)
Requirements: Alpine base, multi-stage, non-root user, <200MB final image
## Improved Dockerfile
```dockerfile
# ==================== BUILD STAGE ====================
FROM node:20-alpine AS builder
# Install build dependencies for native modules
RUN apk add --no-cache python3 make g++
WORKDIR /app
# Copy package files first (better layer caching)
COPY package*.json ./
# Install ALL dependencies (including devDependencies for build)
RUN npm ci
# Copy source code
COPY tsconfig.json ./
COPY src ./src
# Build TypeScript
RUN npm run build
# Prune devDependencies
RUN npm prune --production
# ==================== PRODUCTION STAGE ====================
FROM node:20-alpine AS production
# Security: Run as non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001 -G nodejs
WORKDIR /app
# Copy only production artifacts
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nodejs:nodejs /app/dist ./dist
COPY --from=builder --chown=nodejs:nodejs /app/package.json ./
# Security: Remove unnecessary files
RUN rm -rf /app/node_modules/*/.git \
    && rm -rf /app/node_modules/*/test \
    && rm -rf /app/node_modules/*/docs
# Switch to non-root user
USER nodejs
# Expose port (documentation)
EXPOSE 3000
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD node -e "require('http').get('http://localhost:3000/health', (r) => process.exit(r.statusCode === 200 ? 0 : 1))"
# Start application
CMD ["node", "dist/index.js"]
```
## .dockerignore
```
# Dependencies
node_modules
# Build outputs
dist
coverage
# Development files
.git
.gitignore
.env*
!.env.example
*.md
Dockerfile*
docker-compose*
.dockerignore
# IDE
.vscode
.idea
*.swp
# Tests
__tests__
*.test.ts
*.spec.ts
jest.config.*
# Logs
*.log
logs
```
## Build & Run Commands
```bash
# Build image
docker build -t my-api:latest .
# Build with specific tag
docker build -t my-api:v1.2.3 .
# Run container
docker run -d \
  --name my-api \
  -p 3000:3000 \
  -e NODE_ENV=production \
  -e DATABASE_URL=postgres://... \
  my-api:latest
# Run with resource limits (production)
docker run -d \
  --name my-api \
  -p 3000:3000 \
  --memory=512m \
  --cpus=0.5 \
  my-api:latest
```
## Image Size Breakdown
| Stage | Size | Contents |
|-------|------|----------|
| Builder | ~400MB | Full node_modules, TypeScript, build tools |
| Production | ~150MB | Alpine + Node + production deps + dist |
**Final image: ~150MB** (meets <200MB target)

When containerizing applications, improving existing Dockerfiles, or setting up new projects for container deployment
- Multi-stage builds can reduce image size by 70%+ by excluding build tools from the final image
- Copy package.json before source code so that code changes don't invalidate the npm install cache
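The layer-caching tip above can be made concrete. A minimal sketch contrasting COPY ordering (paths are illustrative):

```dockerfile
# Anti-pattern: copying everything first means ANY file change
# invalidates the cache for the dependency-install layer below.
#   COPY . .
#   RUN npm ci

# Pattern (as in the Dockerfile above): the npm ci layer is rebuilt
# only when package*.json changes; source edits reuse the cache.
COPY package*.json ./
RUN npm ci
COPY src ./src
```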
2. Kubernetes Deployment
Create Kubernetes manifests for this application:

**Application:**
- Name: [APP NAME]
- Image: [DOCKER IMAGE]
- Port: [CONTAINER PORT]

**Requirements:**
- Replicas: [NUMBER]
- Resources: [CPU/MEMORY REQUESTS AND LIMITS]
- Environment: [ENV VARS OR CONFIG MAPS]
- Secrets: [WHAT SECRETS ARE NEEDED]

**Exposure:**
- Internal only: [YES / NO]
- External access: [INGRESS / LOAD BALANCER / NODE PORT]
- Domain: [IF EXTERNAL]

**Health checks:**
- Readiness: [ENDPOINT / PORT]
- Liveness: [ENDPOINT / PORT]

**Additional:**
- [PERSISTENT STORAGE NEEDS]
- [AUTOSCALING REQUIREMENTS]
- [ANY OTHER REQUIREMENTS]
Application: Web API
Image: myregistry/api:v1.0.0
Port: 3000
Replicas: 3 (autoscale 3-10)
Resources: 256Mi memory, 200m CPU
Env: NODE_ENV=production, LOG_LEVEL=info
Secrets: DATABASE_URL, API_KEY
Exposure: External via Ingress, api.example.com
Health: /health for both
Additional: HPA based on CPU
## Kubernetes Manifests

### Namespace
```yaml
# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: api
  labels:
    app.kubernetes.io/name: api
```

### Secret
```yaml
# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: api-secrets
  namespace: api
type: Opaque
stringData:
  DATABASE_URL: "postgresql://user:pass@host:5432/db"  # Replace or use sealed-secrets
  API_KEY: "your-api-key"  # Replace or use external secrets
```

### ConfigMap
```yaml
# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-config
  namespace: api
data:
  NODE_ENV: "production"
  LOG_LEVEL: "info"
```

### Deployment
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: api
  labels:
    app.kubernetes.io/name: api
    app.kubernetes.io/version: "1.0.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: api
  template:
    metadata:
      labels:
        app.kubernetes.io/name: api
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "3000"
    spec:
      # Security context
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
        fsGroup: 1001
      containers:
        - name: api
          image: myregistry/api:v1.0.0
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
          # Environment from ConfigMap and Secret
          envFrom:
            - configMapRef:
                name: api-config
            - secretRef:
                name: api-secrets
          # Resource limits
          resources:
            requests:
              memory: "256Mi"
              cpu: "200m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          # Probes
          readinessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 3
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /health
              port: http
            initialDelaySeconds: 15
            periodSeconds: 20
            timeoutSeconds: 3
            failureThreshold: 3
          # Security context for container
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL
      # Graceful shutdown
      terminationGracePeriodSeconds: 30
```

### Service
```yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: api
  labels:
    app.kubernetes.io/name: api
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: api
```

### Ingress
```yaml
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  namespace: api
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80
```

### HorizontalPodAutoscaler
```yaml
# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
  namespace: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300  # Wait 5 min before scaling down
      policies:
        - type: Percent
          value: 50
          periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0  # Scale up immediately
      policies:
        - type: Percent
          value: 100
          periodSeconds: 15
```

### Apply Order
```bash
kubectl apply -f namespace.yaml
kubectl apply -f secret.yaml
kubectl apply -f configmap.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl apply -f ingress.yaml
kubectl apply -f hpa.yaml
```
When deploying applications to Kubernetes, setting up new services, or creating deployment templates
- Always set resource requests AND limits; without them, a pod can consume all node resources
- HPA stabilizationWindowSeconds prevents thrashing during traffic spikes
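One side effect of the requests/limits tip worth knowing: when every container in a pod has requests equal to limits for both CPU and memory, Kubernetes assigns the pod the Guaranteed QoS class, making it the last candidate for eviction under node memory pressure. A sketch with illustrative values:

```yaml
resources:
  requests:
    memory: "512Mi"
    cpu: "500m"
  limits:
    memory: "512Mi"   # equal to requests => Guaranteed QoS class
    cpu: "500m"
```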
3. Docker Compose Setup
Create a docker-compose configuration for local development:

**Services needed:**
- [SERVICE 1 - e.g. API application]
- [SERVICE 2 - e.g. PostgreSQL database]
- [SERVICE 3 - e.g. Redis cache]

**Requirements:**
- [HOT RELOAD FOR DEVELOPMENT]
- [PERSISTENT DATA]
- [NETWORK BETWEEN SERVICES]

**Environment:**
- [ENVIRONMENT VARIABLES NEEDED]

**Provide:**
1. docker-compose.yml
2. Service-specific Dockerfiles if needed
3. .env.example file
4. Usage instructions
Services: Node.js API, PostgreSQL, Redis, MinIO (S3-compatible storage)
Requirements: Hot reload for API, persistent DB data, all services on same network
Environment: Database credentials, Redis URL, S3 credentials
## Docker Compose Development Setup

### docker-compose.yml
```yaml
version: '3.8'

services:
  # ==================== API ====================
  api:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
      - "9229:9229"  # Node.js debugger
    volumes:
      - .:/app
      - /app/node_modules  # Prevent overwriting node_modules
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://postgres:postgres@db:5432/myapp
      - REDIS_URL=redis://redis:6379
      - S3_ENDPOINT=http://minio:9000
      - S3_ACCESS_KEY=minioadmin
      - S3_SECRET_KEY=minioadmin
      - S3_BUCKET=uploads
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
      minio:
        condition: service_started
    command: npm run dev
    networks:
      - app-network

  # ==================== DATABASE ====================
  db:
    image: postgres:15-alpine
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=myapp
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./scripts/init-db.sql:/docker-entrypoint-initdb.d/init.sql:ro
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      - app-network

  # ==================== REDIS ====================
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    command: redis-server --appendonly yes
    networks:
      - app-network

  # ==================== MINIO (S3) ====================
  minio:
    image: minio/minio:latest
    ports:
      - "9000:9000"  # API
      - "9001:9001"  # Console
    environment:
      - MINIO_ROOT_USER=minioadmin
      - MINIO_ROOT_PASSWORD=minioadmin
    volumes:
      - minio_data:/data
    command: server /data --console-address ":9001"
    networks:
      - app-network

  # ==================== MINIO SETUP ====================
  minio-setup:
    image: minio/mc:latest
    depends_on:
      - minio
    entrypoint: >
      /bin/sh -c "
      sleep 5;
      mc alias set local http://minio:9000 minioadmin minioadmin;
      mc mb local/uploads --ignore-existing;
      mc anonymous set public local/uploads;
      exit 0;
      "
    networks:
      - app-network

volumes:
  postgres_data:
  redis_data:
  minio_data:

networks:
  app-network:
    driver: bridge
```

### Dockerfile.dev (for hot reload)
```dockerfile
FROM node:20-alpine
WORKDIR /app
# Install dependencies for native modules
RUN apk add --no-cache python3 make g++
# Install dependencies first (better caching)
COPY package*.json ./
RUN npm install
# Source code mounted as volume, not copied
# This enables hot reload
EXPOSE 3000 9229
# Use nodemon for hot reload
CMD ["npm", "run", "dev"]
```

### .env.example
```bash
# Database
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/myapp
# Redis
REDIS_URL=redis://localhost:6379
# MinIO (S3-compatible)
S3_ENDPOINT=http://localhost:9000
S3_ACCESS_KEY=minioadmin
S3_SECRET_KEY=minioadmin
S3_BUCKET=uploads
# Application
NODE_ENV=development
PORT=3000
```

### Usage
```bash
# Start all services
docker-compose up -d
# View logs
docker-compose logs -f api
# Stop all services
docker-compose down
# Stop and remove volumes (clean slate)
docker-compose down -v
# Rebuild API after package.json changes
docker-compose up -d --build api
# Run database migrations
docker-compose exec api npm run db:migrate
# Access database CLI
docker-compose exec db psql -U postgres -d myapp
# Access Redis CLI
docker-compose exec redis redis-cli
```

### Access URLs
| Service | URL |
|---------|-----|
| API | http://localhost:3000 |
| PostgreSQL | localhost:5432 |
| Redis | localhost:6379 |
| MinIO Console | http://localhost:9001 |
| MinIO API | http://localhost:9000 |
When setting up local development environments, onboarding new developers, or creating reproducible dev setups
- Use healthchecks and depends_on conditions to ensure services start in order
- Mount source code as a volume but exclude node_modules to avoid platform issues
4. Container Troubleshooting
Help me troubleshoot this container issue:

**Problem:** [DESCRIBE THE ISSUE]

**Container/Pod info:**
```
[PASTE docker ps / kubectl get pods OUTPUT]
```

**Logs:**
```
[PASTE docker logs / kubectl logs OUTPUT]
```

**Configuration:**
```yaml
[PASTE RELEVANT CONFIG]
```

**What I've tried:**
- [ATTEMPT 1]
- [ATTEMPT 2]

**Diagnose:**
1. Root cause
2. Fix
3. Prevention
Problem: Pod keeps crashing with CrashLoopBackOff after 2-3 minutes
Pod info: kubectl get pods shows 4 restarts
Logs: Last line is 'Killed' with no error message
Config: Deployment with 256Mi memory limit
Tried: Checked app logs, no errors before crash
## Container Troubleshooting: CrashLoopBackOff
### Root Cause Analysis
The symptom "Killed" with no error message + crash after 2-3 minutes of running strongly indicates **OOMKilled** (Out of Memory).
**Verification:**
```bash
# Check if OOMKilled
kubectl describe pod <pod-name> | grep -A5 "Last State"
# Expected output:
# Last State:  Terminated
#   Reason:    OOMKilled
#   Exit Code: 137
```
```bash
# Check current memory usage (if pod is running)
kubectl top pod <pod-name>
# Check events
kubectl get events --field-selector involvedObject.name=<pod-name>
```
### The Problem
Your 256Mi memory limit is being exceeded. The container starts, runs for a bit as memory increases, hits the limit, and Linux OOM killer terminates it.
**Exit code 137** = 128 + 9 (SIGKILL) = OOMKilled
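The 128 + signal arithmetic generalizes to any exit code above 128. A small shell sketch for decoding (the helper name is illustrative):

```shell
#!/bin/sh
# Decode a container exit code: values above 128 mean the process was
# killed by signal (code - 128), e.g. 137 -> 9 (SIGKILL), 143 -> 15 (SIGTERM).
decode_exit() {
  if [ "$1" -gt 128 ]; then
    echo "killed by signal $(( $1 - 128 ))"
  else
    echo "exited with status $1"
  fi
}

decode_exit 137   # killed by signal 9
decode_exit 143   # killed by signal 15
decode_exit 1     # exited with status 1
```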
### Immediate Fix
**Option 1: Increase memory limit**
```yaml
resources:
  requests:
    memory: "256Mi"  # What the scheduler uses
    cpu: "100m"
  limits:
    memory: "512Mi"  # Increase this
    cpu: "500m"
```
**Option 2: If you can't increase limits, reduce app memory**
```dockerfile
# For Node.js: limit the V8 heap size (in MB)
CMD ["node", "--max-old-space-size=200", "dist/index.js"]
```
```dockerfile
# For Java: limit the heap
ENV JAVA_OPTS="-Xmx200m -Xms200m"
# Use shell form so $JAVA_OPTS is expanded (exec form does not expand variables)
CMD java $JAVA_OPTS -jar app.jar
```
### Finding the Right Memory Limit
```bash
# Run locally and monitor
docker run -d --name test myimage
docker stats test
# Watch memory over time
watch -n 1 'docker stats test --no-stream'
```
**Set limits to:**
- Request: Average memory usage
- Limit: Peak memory usage + 20% buffer
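That rule of thumb is simple arithmetic; a sketch with hypothetical numbers observed from `docker stats`:

```shell
#!/bin/sh
# Hypothetical observations from a load test (MiB)
AVG_MB=180    # average usage -> request
PEAK_MB=300   # peak usage    -> limit = peak + 20% buffer
REQUEST_MB=$AVG_MB
LIMIT_MB=$(( PEAK_MB * 120 / 100 ))
echo "requests.memory: ${REQUEST_MB}Mi"
echo "limits.memory: ${LIMIT_MB}Mi"
```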
### Prevention
1. **Add memory monitoring** to your application:
```javascript
// Log memory usage periodically
setInterval(() => {
  const used = process.memoryUsage();
  console.log('Memory:', Math.round(used.heapUsed / 1024 / 1024), 'MB');
}, 60000);
```
2. **Set up alerts** in your monitoring:
```yaml
# Prometheus alert
- alert: ContainerMemoryHigh
  expr: container_memory_usage_bytes / container_spec_memory_limit_bytes > 0.8
  for: 5m
  labels:
    severity: warning
```
3. **Load test** before deploying:
```bash
# Find memory ceiling
k6 run --vus 50 --duration 5m load-test.js
```
### Other Common CrashLoopBackOff Causes
| Symptom | Likely Cause | Check |
|---------|-------------|-------|
| Exit 137 | OOMKilled | `describe pod`, increase memory |
| Exit 1 | App error | `logs`, fix app bug |
| Exit 0 repeatedly | App exits immediately | `logs`, check CMD |
| No logs | Container can't start | `describe pod`, check image |
| Probe failure | App too slow to start | Increase `initialDelaySeconds` |

When containers crash, won't start, or behave unexpectedly in Docker or Kubernetes
- Exit codes 137 (OOMKilled) and 143 (SIGTERM) are the most common; know what they mean
- `kubectl describe pod` often has more info than `kubectl logs` for startup failures
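The table above can be encoded as a quick triage helper; a heuristic sketch (function name illustrative, not exhaustive):

```shell
#!/bin/sh
# Map a container's last exit code to the likely cause from the table above.
diagnose() {
  case "$1" in
    137) echo "OOMKilled: raise memory limits (check kubectl describe pod)" ;;
    143) echo "SIGTERM: pod was shut down (rollout, eviction, or scale-down)" ;;
    0)   echo "clean exit: container CMD may be finishing immediately" ;;
    *)   echo "app error ($1): check kubectl logs" ;;
  esac
}

diagnose 137
```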
Common Mistakes to Avoid
Running containers as root; always use a non-root user in production
Not setting resource limits; a runaway container can crash the entire node
Copying all files into the Docker image instead of using .dockerignore; this bloats images and can leak secrets
Related Templates
Code Review Prompt Templates
AI prompt templates for thorough code reviews. Get comprehensive feedback on code quality, security, and best practices.
Debugging Prompt Templates
AI prompt templates for debugging code. Identify issues, understand errors, and find solutions faster.
Code Documentation Prompt Templates
AI prompt templates for writing code documentation. Create clear comments, READMEs, and API docs.