Docker Compose vs Kubernetes: What Small Teams Actually Need in 2026
The honest answer: most small teams don't need Kubernetes. Most large teams don't need the complexity they've added to Kubernetes. And Docker Compose breaks down in ways that are predictable and well-documented.
Here's the real decision framework, followed by production-grade configs for both.
When Docker Compose Is the Right Answer
Docker Compose is the right answer when:
- Single VPS or small cluster. You're deploying to 1-3 servers and aren't expecting to scale horizontally
- Team of 1-5 engineers. Kubernetes operational overhead (cluster management, YAML verbosity, networking concepts) is expensive for small teams
- Startup / MVP phase. Ship fast, optimize later. You can migrate to k8s when you have the traffic that justifies it
- Budget matters. Managed Kubernetes (EKS, GKE, AKS) adds $70-150/month minimum before your first pod. A $20 VPS runs everything with Compose
When you'll outgrow Compose:
- You need zero-downtime rolling deploys without a load balancer you manage yourself
- You need horizontal auto-scaling based on CPU/memory metrics
- You run multiple services across multiple nodes that need service discovery
- You have multiple teams who need isolated namespaces and RBAC
Production Docker Compose Setup
Here's a real production Compose configuration for a Node.js API + Next.js + PostgreSQL + Redis stack behind nginx:
```yaml
# docker-compose.yml
# Compose v2 ignores the legacy top-level "version" key, so it is omitted.
services:
  # Node.js API
  api:
    image: ghcr.io/${GITHUB_REPOSITORY}/api:${IMAGE_TAG:-latest}
    restart: unless-stopped
    env_file: .env.production
    environment:
      NODE_ENV: production
      PORT: 3000
      DATABASE_URL: postgresql://app:${POSTGRES_PASSWORD}@postgres:5432/appdb
      REDIS_URL: redis://redis:6379
    ports:
      - "127.0.0.1:3000:3000"  # bind to localhost only; nginx handles public traffic
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    healthcheck:
      # Assumes curl is installed in the image; swap in wget or a small
      # Node script if your base image doesn't ship it.
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  # Next.js frontend
  frontend:
    image: ghcr.io/${GITHUB_REPOSITORY}/frontend:${IMAGE_TAG:-latest}
    restart: unless-stopped
    environment:
      NODE_ENV: production
    ports:
      - "127.0.0.1:3001:3000"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/"]
      interval: 30s
      timeout: 10s
      retries: 3

  # PostgreSQL. No ports exposed: only accessible within the Docker network.
  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: appdb
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./postgres/init:/docker-entrypoint-initdb.d  # init SQL scripts
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d appdb"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Redis
  redis:
    image: redis:7-alpine
    restart: unless-stopped
    command: redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru --appendonly yes
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  # Nginx reverse proxy
  nginx:
    image: nginx:alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - /etc/letsencrypt:/etc/letsencrypt:ro
      - nginx_logs:/var/log/nginx
    depends_on:
      - api
      - frontend

volumes:
  postgres_data:
  redis_data:
  nginx_logs:

networks:
  default:
    name: app_network
```
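Two different files are in play here, and conflating them is a common trip-up: Compose interpolates `${POSTGRES_PASSWORD}`, `${GITHUB_REPOSITORY}`, and `${IMAGE_TAG}` from the shell environment or a `.env` file in the project directory, while `env_file: .env.production` is injected into the api container at runtime. A minimal sketch of the interpolation file, with placeholder values:

```bash
# .env (read by docker compose for ${...} interpolation; chmod 600, never commit)
POSTGRES_PASSWORD=change-me            # placeholder
GITHUB_REPOSITORY=your-org/your-repo   # placeholder; image path on GHCR
IMAGE_TAG=latest                       # overridden per-deploy by deploy.sh below
```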
Nginx config for the API (the frontend server block follows the same pattern and is omitted):
```nginx
# nginx/conf.d/api.conf

# limit_req_zone must be declared at http level; files in conf.d are
# included there, so the top of this file is the right place for it.
limit_req_zone $binary_remote_addr zone=api:10m rate=100r/m;

upstream api {
    server api:3000;
    keepalive 32;
}

upstream frontend {
    server frontend:3000;
    keepalive 16;
}

server {
    listen 443 ssl;
    http2 on;  # nginx 1.25+; use "listen 443 ssl http2;" on older versions
    server_name api.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/api.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.yourdomain.com/privkey.pem;

    # Security headers
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Frame-Options DENY always;
    add_header X-Content-Type-Options nosniff always;

    # Rate limiting (zone declared above)
    limit_req zone=api burst=20 nodelay;

    location / {
        proxy_pass http://api;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }
}

server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}
```
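The config above mounts certificates from `/etc/letsencrypt` on the host. One way to provision them, assuming certbot is installed on the host (standalone mode needs port 80 free, so nginx is stopped briefly):

```bash
# Issue the certificate; --standalone spins up a temporary ACME server on :80
docker compose stop nginx
sudo certbot certonly --standalone -d api.yourdomain.com
docker compose start nginx
# For renewals, give certbot the same port-80 window:
# certbot renew --pre-hook "docker compose stop nginx" --post-hook "docker compose start nginx"
```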
Near-zero-downtime deploys with Compose, via a shell script:

```bash
#!/bin/bash
# deploy.sh: near-zero-downtime deploy with Docker Compose.
# Compose can't do true rolling deploys; "up -d --no-deps" recreates the
# container in place, so expect a brief gap rather than a seamless handover.
set -e

IMAGE_TAG=$1
if [ -z "$IMAGE_TAG" ]; then
  echo "Usage: ./deploy.sh <image-tag>"
  exit 1
fi

echo "Deploying image tag: $IMAGE_TAG"

# Record the currently running image tag so we can roll back to it
PREVIOUS_IMAGE=$(docker inspect --format '{{.Config.Image}}' "$(docker compose ps -q api)" 2>/dev/null || true)
PREVIOUS_TAG=${PREVIOUS_IMAGE##*:}

# Pull new images
IMAGE_TAG=$IMAGE_TAG docker compose pull api frontend

# Recreate the API container without touching postgres/redis
IMAGE_TAG=$IMAGE_TAG docker compose up -d --no-deps api

# Wait for the health check to pass
echo "Waiting for API health check..."
for i in {1..30}; do
  # Check the container's health status directly; grepping "ps" output for
  # "healthy" would also match "unhealthy"
  status=$(docker inspect --format '{{.State.Health.Status}}' "$(docker compose ps -q api)")
  if [ "$status" = "healthy" ]; then
    echo "API is healthy"
    break
  fi
  if [ "$i" -eq 30 ]; then
    echo "API failed health check; rolling back to ${PREVIOUS_TAG:-latest}"
    IMAGE_TAG=${PREVIOUS_TAG:-latest} docker compose up -d --no-deps api
    exit 1
  fi
  sleep 2
done

# Update frontend
IMAGE_TAG=$IMAGE_TAG docker compose up -d --no-deps frontend

echo "Deploy complete: $IMAGE_TAG"
```
When to Move to Kubernetes
Once you've hit the limits above, the cheapest entry point is k3s, a lightweight Kubernetes distribution that runs comfortably on ordinary VPS hardware. Install it first:
```bash
# Install k3s (single-node)
curl -sfL https://get.k3s.io | sh -

# Or for a multi-node cluster:
# On the server (control plane) node:
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -
# On each worker node:
curl -sfL https://get.k3s.io | K3S_URL=https://master-ip:6443 K3S_TOKEN=SECRET sh -
```
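Before applying any manifests, verify the cluster and create the `production` namespace the manifests below assume:

```bash
# k3s bundles kubectl (and metrics-server); verify nodes are Ready
sudo k3s kubectl get nodes
# Or point plain kubectl at the generated kubeconfig:
# export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# Namespace used by everything below
sudo k3s kubectl create namespace production
```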
Kubernetes deployment with rolling updates and autoscaling:
```yaml
# k8s/api-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: production
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # create 1 extra pod during deploy
      maxUnavailable: 0  # never kill a pod before the new one is healthy
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: ghcr.io/your-org/api:latest
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: production
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: api-secrets
                  key: database-url
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: production
spec:
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 3000
---
# Horizontal Pod Autoscaler
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
```
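The Deployment references a secret named `api-secrets` that must exist before you apply it (and if the GHCR image is private, the pod also needs an imagePullSecret, not shown here). The HPA relies on resource metrics, which k3s's bundled metrics-server provides out of the box. A sketch of the prerequisite steps, with a placeholder connection string:

```bash
# Create the secret the Deployment's secretKeyRef points at
kubectl create secret generic api-secrets -n production \
  --from-literal=database-url='postgresql://app:PASSWORD@postgres:5432/appdb'

kubectl apply -f k8s/api-deployment.yaml

# Sanity-check that the HPA has metrics to work with; this should show
# CPU/memory per pod within a minute or two
kubectl top pods -n production
```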
Ingress with automatic SSL (cert-manager + Let's Encrypt). Note that k3s ships Traefik as its default ingress controller; the manifest below assumes you've installed ingress-nginx instead:
```yaml
# k8s/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: production
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
    # ingress-nginx rate limiting: requests per minute per client IP
    nginx.ingress.kubernetes.io/limit-rpm: "100"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.yourdomain.com
      secretName: api-tls
  rules:
    - host: api.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80
```
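The `cert-manager.io/cluster-issuer` annotation assumes a ClusterIssuer named `letsencrypt-prod` exists. A minimal sketch, assuming cert-manager is already installed (for example via its Helm chart); the email is a placeholder:

```yaml
# k8s/cluster-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@yourdomain.com  # placeholder; Let's Encrypt expiry notices go here
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
```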
CI/CD Pipeline for Both
GitHub Actions → Docker Compose Deploy
```yaml
# .github/workflows/deploy.yml
name: Deploy

on:
  push:
    branches: [main]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    outputs:
      image-tag: ${{ steps.meta.outputs.version }}
    steps:
      - uses: actions/checkout@v4

      - name: Docker meta
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ghcr.io/${{ github.repository }}/api
          tags: type=sha,prefix=sha-

      - name: Login to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      # The frontend image build follows the same pattern (omitted for brevity)
      - name: Build and push API
        uses: docker/build-push-action@v5
        with:
          context: ./backend
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy:
    needs: build-and-push
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to VPS
        uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.VPS_HOST }}
          username: deploy
          key: ${{ secrets.VPS_SSH_KEY }}
          script: |
            cd /opt/app
            ./deploy.sh ${{ needs.build-and-push.outputs.image-tag }}
```
GitHub Actions → Kubernetes Deploy
```yaml
  # Appended under "jobs:" in the same workflow, alongside build-and-push
  deploy-k8s:
    needs: build-and-push
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up kubectl
        uses: azure/setup-kubectl@v3

      - name: Configure kubeconfig
        run: |
          mkdir -p ~/.kube
          echo "${{ secrets.KUBECONFIG }}" > ~/.kube/config
          chmod 600 ~/.kube/config

      - name: Update image tag
        run: |
          kubectl set image deployment/api \
            api=ghcr.io/${{ github.repository }}/api:${{ needs.build-and-push.outputs.image-tag }} \
            -n production

      - name: Wait for rollout
        run: |
          kubectl rollout status deployment/api -n production --timeout=300s
```
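If the rollout stalls or the new pods crash-loop, Kubernetes keeps the previous ReplicaSet around, so rolling back is a single command:

```bash
# Revert to the previous revision
kubectl rollout undo deployment/api -n production
# Inspect revision history if you need to go further back
kubectl rollout history deployment/api -n production
```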
Database Backups: Critical for Both
This is where teams get caught out, regardless of orchestrator:
```bash
#!/bin/bash
# backup-postgres.sh: runs daily via cron on the VPS
set -e

# cron runs with a minimal environment; compose commands need the project dir
cd /opt/app

TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="backup_${TIMESTAMP}.sql.gz"
S3_BUCKET="s3://your-backup-bucket/postgres"

# Create backup
# For Docker Compose:
docker compose exec -T postgres pg_dump -U app appdb | gzip > "/tmp/${BACKUP_FILE}"
# For Kubernetes:
# kubectl exec -n production deploy/postgres -- pg_dump -U app appdb | gzip > "/tmp/${BACKUP_FILE}"

# Upload to S3 (also works: Backblaze B2, Cloudflare R2)
aws s3 cp "/tmp/${BACKUP_FILE}" "${S3_BUCKET}/${BACKUP_FILE}"

# Clean up local file
rm "/tmp/${BACKUP_FILE}"

# Delete backups older than 30 days. "aws s3 ls" prints date, time, size,
# key; an S3 lifecycle rule is the more robust way to do this.
aws s3 ls "${S3_BUCKET}/" | while read -r created _ _ key; do
  [ -n "$key" ] || continue  # skip "PRE ..." prefix lines
  age_days=$(( ($(date +%s) - $(date -d "$created" +%s)) / 86400 ))
  if [ "$age_days" -gt 30 ]; then
    aws s3 rm "${S3_BUCKET}/${key}"
  fi
done

echo "Backup complete: ${BACKUP_FILE}"
```
Add to crontab:
```bash
# crontab -e
0 3 * * * /opt/app/scripts/backup-postgres.sh >> /var/log/backups.log 2>&1
```
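A backup you've never restored is a hope, not a backup. Test the restore path periodically; a sketch against a scratch database in the Compose stack above (the backup filename is a placeholder):

```bash
# Pull a backup and restore it into a throwaway database
aws s3 cp "s3://your-backup-bucket/postgres/backup_YYYYMMDD_HHMMSS.sql.gz" /tmp/restore.sql.gz
docker compose exec -T postgres createdb -U app appdb_restore_test
gunzip -c /tmp/restore.sql.gz | docker compose exec -T postgres psql -U app appdb_restore_test
# Spot-check the data, then clean up
docker compose exec -T postgres dropdb -U app appdb_restore_test
```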
Cost Comparison (2026)
| Setup | Monthly Cost | Complexity | Suitable For |
|---|---|---|---|
| Single VPS + Compose | $20-40 | Low | <10k users/day |
| 3x VPS + Compose + LB | $80-150 | Medium | Up to 50k users/day |
| k3s on 3 VPS | $60-120 | Medium-High | Teams that need rolling deploys/autoscaling |
| Managed k8s (GKE/EKS) | $150-400+ | High | >100k users/day |
For most applications serving CIS markets, a $20-40 VPS running Docker Compose handles the load comfortably.
Aunimeda sets up production infrastructure for web and mobile applications — from single-server Compose deployments to multi-region Kubernetes clusters.
Contact us to discuss your infrastructure needs. See also: DevOps Services, Custom Software Development, Web Development