Docker Compose: Multi-Container Orchestration
Master Docker Compose for defining and running multi-container applications from a single declarative file.
Docker Compose Orchestration: Introduction and Setup
Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services, networks, and volumes, then create and start all services with a single command.
What is Docker Compose?
Docker Compose solves the complexity of managing multiple containers by providing:
- Declarative Configuration: Define your entire application stack in a single YAML file
- Service Orchestration: Manage dependencies between containers
- Environment Management: Easy switching between development, testing, and production
- Scaling: Scale services up or down with simple commands
- Networking: Automatic network creation and service discovery
Installation and Setup
Installing Docker Compose
Linux:
# Download the latest version
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# Make it executable
sudo chmod +x /usr/local/bin/docker-compose
# Verify installation
docker-compose --version
macOS (with Homebrew):
brew install docker-compose
Windows: Docker Compose is included with Docker Desktop for Windows. Note that recent Docker releases also ship Compose V2 as a CLI plugin, invoked as docker compose (no hyphen); the docker-compose commands in this guide work the same way with either form.
Compose File Structure
A basic docker-compose.yml file structure:
version: '3.8'

services:
  web:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - .:/app
    environment:
      - DEBUG=1
    depends_on:
      - db

  db:
    image: postgres:13
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:

networks:
  default:
    driver: bridge
Your First Compose Application
Let’s create a simple web application with a database:
Project Structure
my-app/
├── docker-compose.yml
├── Dockerfile
├── app.py
├── requirements.txt
└── templates/
└── index.html
Flask Application
app.py:
from flask import Flask, render_template
import psycopg2
import os

app = Flask(__name__)

def get_db_connection():
    conn = psycopg2.connect(
        host=os.environ.get('DB_HOST', 'db'),
        database=os.environ.get('DB_NAME', 'myapp'),
        user=os.environ.get('DB_USER', 'user'),
        password=os.environ.get('DB_PASSWORD', 'password')
    )
    return conn

@app.route('/')
def index():
    try:
        conn = get_db_connection()
        cur = conn.cursor()
        cur.execute('SELECT version();')
        db_version = cur.fetchone()
        cur.close()
        conn.close()
        return render_template('index.html', db_version=db_version[0])
    except Exception as e:
        return f"Database connection failed: {str(e)}"

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000, debug=True)
requirements.txt:
Flask==2.3.3
psycopg2-binary==2.9.7
templates/index.html:
<!DOCTYPE html>
<html>
<head>
<title>Docker Compose App</title>
</head>
<body>
<h1>Hello from Docker Compose!</h1>
<p>Database Version: {{ db_version }}</p>
</body>
</html>
Dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
Docker Compose Configuration
docker-compose.yml:
version: '3.8'

services:
  web:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - .:/app
    environment:
      - DB_HOST=db
      - DB_NAME=myapp
      - DB_USER=user
      - DB_PASSWORD=password
    depends_on:
      - db
    restart: unless-stopped

  db:
    image: postgres:13
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - "5432:5432"
    restart: unless-stopped

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    restart: unless-stopped

volumes:
  postgres_data:
  redis_data:

networks:
  default:
    driver: bridge
Essential Compose Commands
Basic Operations
# Start all services
docker-compose up
# Start in detached mode
docker-compose up -d
# Build and start
docker-compose up --build
# Stop all services
docker-compose down
# Stop and remove volumes
docker-compose down -v
# View running services
docker-compose ps
# View logs
docker-compose logs
# Follow logs for specific service
docker-compose logs -f web
Service Management
# Start specific service
docker-compose start web
# Stop specific service
docker-compose stop web
# Restart service
docker-compose restart web
# Scale service (note: this fails if the service publishes a fixed host port,
# since every replica would try to bind the same port)
docker-compose up --scale web=3
# Execute command in running container
docker-compose exec web bash
# Run one-off command
docker-compose run web python manage.py migrate
Environment Configuration
Environment Files
Create a .env file for environment variables:
# .env
POSTGRES_DB=myapp
POSTGRES_USER=user
POSTGRES_PASSWORD=secretpassword
DEBUG=1
SECRET_KEY=your-secret-key
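Compose treats this file as plain KEY=VALUE pairs, with `#` starting a comment. As a rough sketch — not Compose's actual parser, which also handles quoting and variable interpolation — reading such a file might look like:

```python
def parse_env_file(text):
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:  # ignore malformed lines with no '='
            env[key.strip()] = value.strip()
    return env

sample = """
# .env
POSTGRES_DB=myapp
POSTGRES_USER=user
DEBUG=1
"""
print(parse_env_file(sample))
# {'POSTGRES_DB': 'myapp', 'POSTGRES_USER': 'user', 'DEBUG': '1'}
```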
Update docker-compose.yml:
version: '3.8'

services:
  web:
    build: .
    ports:
      - "8000:8000"
    env_file:
      - .env
    environment:
      - DB_HOST=db
    depends_on:
      - db

  db:
    image: postgres:13
    env_file:
      - .env
    volumes:
      - postgres_data:/var/lib/postgresql/data
Multiple Environment Files
# docker-compose.override.yml (development)
version: '3.8'

services:
  web:
    volumes:
      - .:/app
    environment:
      - DEBUG=1
    command: python app.py

  db:
    ports:
      - "5432:5432"

# docker-compose.prod.yml (production)
version: '3.8'

services:
  web:
    environment:
      - DEBUG=0
    restart: always
    command: gunicorn --bind 0.0.0.0:8000 app:app

  db:
    restart: always
Run with specific configuration:
# Development (uses docker-compose.override.yml automatically)
docker-compose up
# Production
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
Networking in Compose
Default Network
Compose automatically creates a default network for your application:
version: '3.8'
services:
web:
image: nginx
# Can communicate with 'api' service using hostname 'api'
api:
image: node:16
# Can communicate with 'web' service using hostname 'web'
Custom Networks
version: '3.8'
services:
web:
image: nginx
networks:
- frontend
- backend
api:
image: node:16
networks:
- backend
db:
image: postgres:13
networks:
- backend
networks:
frontend:
driver: bridge
backend:
driver: bridge
internal: true # No external access
External Networks
version: '3.8'
services:
web:
image: nginx
networks:
- existing-network
networks:
existing-network:
external: true
Volume Management
Named Volumes
version: '3.8'
services:
db:
image: postgres:13
volumes:
- postgres_data:/var/lib/postgresql/data
- postgres_config:/etc/postgresql
volumes:
postgres_data:
driver: local
postgres_config:
driver: local
driver_opts:
type: none
o: bind
device: /host/path/to/config
Bind Mounts
version: '3.8'
services:
web:
image: nginx
volumes:
- ./html:/usr/share/nginx/html:ro # Read-only
- ./logs:/var/log/nginx:rw # Read-write
- /etc/localtime:/etc/localtime:ro # Host timezone
External Volumes
version: '3.8'
services:
db:
image: postgres:13
volumes:
- existing_volume:/var/lib/postgresql/data
volumes:
existing_volume:
external: true
Health Checks and Dependencies
Health Checks
version: '3.8'

services:
  web:
    build: .
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  db:
    image: postgres:13
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
      interval: 10s
      timeout: 5s
      retries: 5
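The healthcheck knobs interact: probes run every `interval`, a probe that fails or exceeds `timeout` counts as a failure, `retries` consecutive failures mark the container unhealthy, and failures during `start_period` don't count toward that limit. A simplified Python model of that state machine (it stops at the first success, whereas Docker keeps probing for the container's lifetime):

```python
def health_status(check_results, retries=3, start_period=0, interval=30):
    """Simplified model of Docker's healthcheck state machine.

    check_results: booleans, one probe result per `interval` seconds.
    """
    failures = 0
    for i, ok in enumerate(check_results):
        elapsed = i * interval
        if ok:
            return "healthy"
        if elapsed >= start_period:
            failures += 1
            if failures >= retries:
                return "unhealthy"
    return "starting"

print(health_status([False, False, True]))  # healthy: one success resets all
```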
Service Dependencies
version: '3.8'

services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started

  db:
    image: postgres:13
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
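`depends_on` effectively asks Compose for a topological ordering of the service graph. With Python's standard `graphlib`, the startup order for the example above can be sketched as:

```python
from graphlib import TopologicalSorter

# Map each service to the services it depends on (predecessors)
deps = {
    "web": {"db", "redis"},
    "db": set(),
    "redis": set(),
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # db and redis (in either order) come before web
```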
Practical Example: Development Environment
Let’s create a complete development environment:
docker-compose.dev.yml:
version: '3.8'
services:
# Frontend Development Server
frontend:
build:
context: ./frontend
dockerfile: Dockerfile.dev
ports:
- "3000:3000"
volumes:
- ./frontend:/app
- /app/node_modules
environment:
- REACT_APP_API_URL=http://localhost:8000
command: npm start
# Backend API
backend:
build:
context: ./backend
dockerfile: Dockerfile.dev
ports:
- "8000:8000"
volumes:
- ./backend:/app
environment:
- DATABASE_URL=postgresql://user:password@db:5432/devdb
- REDIS_URL=redis://redis:6379
- DEBUG=1
depends_on:
- db
- redis
command: python manage.py runserver 0.0.0.0:8000
# Database
db:
image: postgres:13
environment:
POSTGRES_DB: devdb
POSTGRES_USER: user
POSTGRES_PASSWORD: password
volumes:
- postgres_dev_data:/var/lib/postgresql/data
- ./database/init.sql:/docker-entrypoint-initdb.d/init.sql
ports:
- "5432:5432"
# Redis Cache
redis:
image: redis:7-alpine
ports:
- "6379:6379"
volumes:
- redis_dev_data:/data
# Database Admin Interface
pgadmin:
image: dpage/pgadmin4
environment:
PGADMIN_DEFAULT_EMAIL: [email protected]
PGADMIN_DEFAULT_PASSWORD: admin
ports:
- "5050:80"
volumes:
- pgadmin_data:/var/lib/pgadmin
volumes:
postgres_dev_data:
redis_dev_data:
pgadmin_data:
Start the development environment:
docker-compose -f docker-compose.dev.yml up --build
Summary
In this introduction, you’ve learned:
Core Concepts
- Docker Compose Purpose: Orchestrating multi-container applications
- YAML Configuration: Declarative service definitions
- Service Dependencies: Managing container startup order
- Environment Management: Different configurations for different stages
Essential Skills
- Installation and Setup: Getting Compose ready for development
- Basic Commands: Starting, stopping, and managing services
- Networking: Service discovery and custom networks
- Volume Management: Persistent data and bind mounts
- Health Checks: Ensuring service reliability
Practical Applications
- Development Environment: Complete local development stack
- Environment Configuration: Using .env files and overrides
- Service Communication: Inter-container networking
- Data Persistence: Volume management strategies
Next Steps: In Part 2, we’ll dive deeper into core concepts including advanced service configuration, networking patterns, and volume strategies that form the foundation of production-ready Compose applications.
Docker Compose Core Concepts and Fundamentals
This section explores the fundamental concepts that make Docker Compose a powerful orchestration tool, covering advanced service configuration, networking patterns, and volume management strategies.
Service Configuration Deep Dive
Build Context and Arguments
version: '3.8'
services:
web:
build:
context: ./web
dockerfile: Dockerfile.prod
args:
- NODE_ENV=production
- API_VERSION=v2
target: production
cache_from:
- node:16-alpine
- myapp:latest
image: myapp:${TAG:-latest}
Multi-stage Dockerfile with build args:
# Dockerfile.prod
ARG NODE_ENV=development
ARG API_VERSION=v1
FROM node:16-alpine AS base
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
FROM base AS development
RUN npm ci
COPY . .
CMD ["npm", "run", "dev"]
FROM base AS production
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]
Resource Constraints
Note: the deploy: section originates from Swarm mode (docker stack deploy). Recent versions of Docker Compose also honor deploy.resources and deploy.replicas, but keys such as update_config and rollback_config only take effect under Swarm.
version: '3.8'
services:
web:
image: nginx
deploy:
resources:
limits:
cpus: '0.50'
memory: 512M
reservations:
cpus: '0.25'
memory: 256M
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
worker:
image: myapp:worker
deploy:
replicas: 3
update_config:
parallelism: 1
delay: 10s
failure_action: rollback
rollback_config:
parallelism: 1
delay: 5s
Environment Variable Patterns
version: '3.8'

services:
  app:
    image: myapp
    environment:
      # Direct assignment
      - NODE_ENV=production
      - PORT=3000
      # From host environment
      - SECRET_KEY=${SECRET_KEY}
      - DATABASE_URL=${DATABASE_URL:-postgresql://localhost:5432/myapp}
      # Computed values
      - APP_URL=https://${DOMAIN:-localhost}:${PORT:-3000}
    env_file:
      - .env
      - .env.local
      - .env.${NODE_ENV:-development}
.env file structure:
# .env
NODE_ENV=development
DEBUG=1
LOG_LEVEL=info
# Database
DATABASE_HOST=db
DATABASE_PORT=5432
DATABASE_NAME=myapp
DATABASE_USER=user
DATABASE_PASSWORD=password
# Redis
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_DB=0
# External Services
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USER=[email protected]
SMTP_PASSWORD=app-password
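The `${VAR}` and `${VAR:-default}` forms above are resolved by Compose's variable interpolation before the file is parsed. A minimal Python sketch of that substitution rule (real Compose supports additional forms such as `${VAR:?error}` and `${VAR-default}`):

```python
import re

def interpolate(value, env):
    """Sketch of Compose-style ${VAR} / ${VAR:-default} substitution."""
    pattern = re.compile(r"\$\{(\w+)(?::-([^}]*))?\}")

    def repl(match):
        name, default = match.group(1), match.group(2)
        # Fall back to the default if given, else to an empty string
        return env.get(name, default if default is not None else "")

    return pattern.sub(repl, value)

env = {"DOMAIN": "example.com"}
print(interpolate("https://${DOMAIN:-localhost}:${PORT:-3000}", env))
# https://example.com:3000
```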
Advanced Networking
Custom Network Configuration
version: '3.8'

services:
  web:
    image: nginx
    networks:
      frontend:
        aliases:
          - web-server
          - nginx-proxy
      backend:
        ipv4_address: 172.20.0.10

  api:
    image: node:16
    networks:
      - backend
      - database

  db:
    image: postgres:13
    networks:
      database:
        aliases:
          - postgres-server

networks:
  frontend:
    driver: bridge
    ipam:
      config:
        - subnet: 172.19.0.0/16
          gateway: 172.19.0.1
  backend:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
  database:
    driver: bridge
    internal: true  # No external access
    ipam:
      config:
        - subnet: 172.21.0.0/16
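When assigning static addresses like `172.20.0.10`, it's worth checking that they fall inside the IPAM subnets and that the tiers don't overlap; Python's standard `ipaddress` module makes this easy to verify:

```python
import ipaddress

backend = ipaddress.ip_network("172.20.0.0/16")
database = ipaddress.ip_network("172.21.0.0/16")

# The fixed address given to 'web' on the backend network lies in its subnet
print(ipaddress.ip_address("172.20.0.10") in backend)  # True
# The backend and database tiers use disjoint ranges
print(backend.overlaps(database))  # False
```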
Network Isolation Patterns
version: '3.8'
services:
# Public-facing services
nginx:
image: nginx
ports:
- "80:80"
- "443:443"
networks:
- frontend
# Application services
web:
build: ./web
networks:
- frontend
- backend
depends_on:
- api
api:
build: ./api
networks:
- backend
- database
depends_on:
- db
- redis
# Data services (isolated)
db:
image: postgres:13
networks:
- database
volumes:
- postgres_data:/var/lib/postgresql/data
redis:
image: redis:7-alpine
networks:
- database
volumes:
- redis_data:/data
networks:
frontend:
driver: bridge
backend:
driver: bridge
database:
driver: bridge
internal: true
volumes:
postgres_data:
redis_data:
Service Discovery and Load Balancing
version: '3.8'
services:
nginx:
image: nginx
ports:
- "80:80"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
depends_on:
- web
networks:
- frontend
web:
build: ./web
deploy:
replicas: 3
networks:
- frontend
- backend
environment:
- API_URL=http://api:8000
api:
build: ./api
deploy:
replicas: 2
networks:
- backend
environment:
- DATABASE_URL=postgresql://user:pass@db:5432/myapp
networks:
frontend:
backend:
nginx.conf for load balancing:
events {
    worker_connections 1024;
}

http {
    upstream web_servers {
        server web:3000;
        # Docker's embedded DNS resolves 'web' to the replica IPs in rotating
        # order; note that nginx caches the lookup when the config is loaded
        # unless a resolver directive is configured
    }

    server {
        listen 80;

        location / {
            proxy_pass http://web_servers;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
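Docker's embedded DNS hands out the replica IPs for a service name in rotating order, which is what gives the upstream its balancing behavior. The round-robin idea itself is simple; a toy sketch:

```python
import itertools

class RoundRobin:
    """Toy round-robin balancer over a fixed list of backend addresses."""

    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def next(self):
        return next(self._cycle)

rr = RoundRobin(["172.20.0.2", "172.20.0.3"])
print([rr.next() for _ in range(4)])
# ['172.20.0.2', '172.20.0.3', '172.20.0.2', '172.20.0.3']
```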
Volume Management Strategies
Volume Types and Use Cases
version: '3.8'
services:
web:
image: nginx
volumes:
# Named volume for persistent data
- web_data:/var/www/html
# Bind mount for development
- ./src:/var/www/html:ro
# Tmpfs for temporary files
- type: tmpfs
target: /tmp
tmpfs:
size: 100M
# Volume with specific options
- type: volume
source: web_logs
target: /var/log/nginx
volume:
nocopy: true
db:
image: postgres:13
volumes:
# Database data persistence
- postgres_data:/var/lib/postgresql/data
# Configuration files
- ./postgres/postgresql.conf:/etc/postgresql/postgresql.conf:ro
# Initialization scripts
- ./postgres/init:/docker-entrypoint-initdb.d:ro
# Backup location
- backup_data:/backup
volumes:
web_data:
driver: local
web_logs:
driver: local
driver_opts:
type: none
o: bind
device: /host/logs/web
postgres_data:
driver: local
driver_opts:
type: none
o: bind
device: /data/postgres
backup_data:
external: true
Volume Backup and Restore
version: '3.8'

services:
  app:
    image: myapp
    volumes:
      - app_data:/data

  backup:
    image: alpine
    volumes:
      - app_data:/source:ro
      - backup_storage:/backup
    command: |
      sh -c "
      tar czf /backup/app_data_$$(date +%Y%m%d_%H%M%S).tar.gz -C /source .
      find /backup -name 'app_data_*.tar.gz' -mtime +7 -delete
      "
    profiles:
      - backup

  restore:
    image: alpine
    volumes:
      - app_data:/target
      - backup_storage:/backup
    command: |
      sh -c "
      if [ -f /backup/restore.tar.gz ]; then
        cd /target && tar xzf /backup/restore.tar.gz
      else
        echo 'No restore file found'
      fi
      "
    profiles:
      - restore

volumes:
  app_data:
  backup_storage:
Run backup/restore:
# Create backup
docker-compose --profile backup run --rm backup
# Restore from backup: first copy the archive into the backup_storage volume
# (named volumes are not host directories, so use a helper container; Compose
# prefixes the volume name with the project name, e.g. myproject_backup_storage)
docker run --rm -v backup_storage:/backup -v "$(pwd)":/src alpine \
  cp /src/backup_file.tar.gz /backup/restore.tar.gz
docker-compose --profile restore run --rm restore
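The backup service's `tar` + `find -mtime +7 -delete` pipeline can be modeled in Python to make the retention logic explicit; `backup` here is an illustrative helper, not part of any Docker tooling:

```python
import tarfile
import tempfile
import time
from pathlib import Path

def backup(source_dir, backup_dir, keep_days=7, now=None):
    """Archive source_dir into backup_dir, then prune archives older
    than keep_days (the tar-and-prune pattern from the backup service)."""
    now = time.time() if now is None else now
    stamp = time.strftime("%Y%m%d_%H%M%S", time.localtime(now))
    backup_dir = Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    archive = backup_dir / f"app_data_{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source_dir, arcname=".")
    # Equivalent of: find /backup -name 'app_data_*.tar.gz' -mtime +7 -delete
    cutoff = now - keep_days * 86400
    for old in backup_dir.glob("app_data_*.tar.gz"):
        if old != archive and old.stat().st_mtime < cutoff:
            old.unlink()
    return archive

src, dst = tempfile.mkdtemp(), tempfile.mkdtemp()
(Path(src) / "data.txt").write_text("hello")
print(backup(src, dst).name.startswith("app_data_"))  # True
```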
Configuration Management
Secrets Management
version: '3.8'

services:
  web:
    image: myapp
    secrets:
      - db_password
      - api_key
    environment:
      - DB_PASSWORD_FILE=/run/secrets/db_password
      - API_KEY_FILE=/run/secrets/api_key

  db:
    image: postgres:13
    secrets:
      - db_password
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt
  api_key:
    external: true
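The `*_FILE` convention shown above — pointing an environment variable at a file under `/run/secrets/` — is how many official images consume secrets. An application can support both the file-based and plain-variable forms with a small helper; `get_secret` is an illustrative sketch, not a Docker API:

```python
import os
import tempfile

def get_secret(name, env=None):
    """Read NAME_FILE if set (Docker secret mounted as a file),
    otherwise fall back to a plain NAME environment variable."""
    env = os.environ if env is None else env
    path = env.get(f"{name}_FILE")
    if path:
        with open(path) as f:
            return f.read().strip()
    return env.get(name)

# Demo: simulate a secret file as Docker would mount it
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("s3cret\n")
    secret_path = f.name

demo_env = {"DB_PASSWORD_FILE": secret_path}
print(get_secret("DB_PASSWORD", demo_env))  # s3cret
```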
Configuration Files
version: '3.8'
services:
nginx:
image: nginx
configs:
- source: nginx_config
target: /etc/nginx/nginx.conf
mode: 0644
- source: ssl_cert
target: /etc/ssl/certs/server.crt
mode: 0644
app:
image: myapp
configs:
- source: app_config
target: /app/config.json
configs:
nginx_config:
file: ./config/nginx.conf
ssl_cert:
file: ./certs/server.crt
app_config:
external: true
Template-based Configuration
docker-compose.template.yml:
version: '3.8'
services:
web:
image: ${WEB_IMAGE}:${WEB_TAG}
ports:
- "${WEB_PORT}:3000"
environment:
- NODE_ENV=${NODE_ENV}
- API_URL=${API_URL}
db:
image: postgres:${POSTGRES_VERSION}
environment:
- POSTGRES_DB=${DB_NAME}
- POSTGRES_USER=${DB_USER}
- POSTGRES_PASSWORD=${DB_PASSWORD}
Generate configuration script:
#!/bin/bash
# generate-compose.sh
export WEB_IMAGE="myapp"
export WEB_TAG="${1:-latest}"
export WEB_PORT="${2:-3000}"
export NODE_ENV="${3:-production}"
export API_URL="https://api.${DOMAIN}"
export POSTGRES_VERSION="13"
export DB_NAME="myapp"
export DB_USER="user"
export DB_PASSWORD="$(openssl rand -base64 32)"
envsubst < docker-compose.template.yml > docker-compose.yml
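`envsubst` has a close analogue in Python's `string.Template`, which handles the plain `${VAR}` placeholders used in the template above (though, unlike Compose interpolation, it has no `:-` default syntax). A quick sketch with hypothetical values:

```python
import string

# A trimmed excerpt of docker-compose.template.yml
template = 'image: ${WEB_IMAGE}:${WEB_TAG}\nports:\n  - "${WEB_PORT}:3000"'
values = {"WEB_IMAGE": "myapp", "WEB_TAG": "latest", "WEB_PORT": "3000"}

rendered = string.Template(template).substitute(values)
print(rendered)
```

`substitute` raises `KeyError` on a missing variable, which is often preferable to `envsubst` silently emitting an empty string.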
Service Dependencies and Health Checks
Advanced Dependency Management
version: '3.8'
services:
web:
build: ./web
depends_on:
db:
condition: service_healthy
redis:
condition: service_healthy
migration:
condition: service_completed_successfully
migration:
build: ./web
command: python manage.py migrate
depends_on:
db:
condition: service_healthy
restart: "no"
db:
image: postgres:13
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
redis:
image: redis:7-alpine
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 10s
timeout: 3s
retries: 3
Custom Health Check Scripts
health-check.sh:
#!/bin/bash
# Custom health check for web application
# Check if application is responding
if curl -f http://localhost:3000/health > /dev/null 2>&1; then
echo "Application is healthy"
exit 0
else
echo "Application health check failed"
exit 1
fi
version: '3.8'
services:
web:
build: ./web
healthcheck:
test: ["CMD", "/app/health-check.sh"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
Wait Strategies
wait-for-it.sh integration:
version: '3.8'

services:
  web:
    build: ./web
    command: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]
    depends_on:
      - db

  api:
    build: ./api
    command: |
      sh -c "
      ./wait-for-it.sh db:5432 --timeout=60 --strict -- \
      ./wait-for-it.sh redis:6379 --timeout=30 --strict -- \
      python manage.py migrate && \
      python manage.py runserver 0.0.0.0:8000
      "
    depends_on:
      - db
      - redis
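What `wait-for-it.sh` does — poll a TCP port until it accepts connections or a timeout expires — can be sketched in a few lines of Python (`wait_for_port` is an illustrative helper, not part of any Docker tooling):

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0, poll=0.5):
    """Block until a TCP connect to host:port succeeds or timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=poll):
                return True
        except OSError:
            time.sleep(poll)  # not up yet; retry until the deadline
    return False
```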
Scaling and Performance
Horizontal Scaling
version: '3.8'
services:
web:
build: ./web
deploy:
replicas: 3
update_config:
parallelism: 1
delay: 10s
failure_action: rollback
monitor: 60s
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
worker:
build: ./worker
deploy:
replicas: 5
resources:
limits:
cpus: '0.5'
memory: 512M
reservations:
cpus: '0.25'
memory: 256M
Scale services dynamically:
# Scale web service to 5 replicas
docker-compose up --scale web=5
# Scale multiple services
docker-compose up --scale web=3 --scale worker=10
Performance Optimization
version: '3.8'
services:
web:
build: ./web
# Optimize container startup
init: true
# Limit log size
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
# Resource limits
mem_limit: 512m
memswap_limit: 512m
cpu_count: 2
cpu_percent: 50
nginx:
image: nginx:alpine
# Use tmpfs for temporary files
tmpfs:
- /var/cache/nginx:noexec,nosuid,size=100m
- /tmp:noexec,nosuid,size=50m
# Optimize shared memory
shm_size: 128m
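Size strings like `10m`, `512m`, and `128m` follow Docker's byte-suffix convention (`b`, `k`, `m`, `g`). A small parser sketch (illustrative, not Docker's actual implementation):

```python
def parse_size(value):
    """Parse Docker-style size strings like '10m' or '512M' into bytes."""
    units = {"b": 1, "k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    value = value.strip().lower()
    if value and value[-1] in units:
        return int(float(value[:-1]) * units[value[-1]])
    return int(value)  # bare number means bytes

print(parse_size("10m"))  # 10485760
```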
Summary
In this section, you’ve mastered:
Advanced Service Configuration
- Build Contexts: Multi-stage builds with arguments and caching
- Resource Management: CPU and memory limits with deployment strategies
- Environment Patterns: Complex variable management and file structures
Networking Mastery
- Custom Networks: IPAM configuration and network isolation
- Service Discovery: Load balancing and inter-service communication
- Security Patterns: Network segmentation and access control
Volume Strategies
- Volume Types: Named volumes, bind mounts, and tmpfs usage
- Data Management: Backup, restore, and migration strategies
- Performance: Optimized volume configurations
Configuration Management
- Secrets: Secure credential handling
- Templates: Dynamic configuration generation
- Health Checks: Advanced dependency and readiness management
Next Steps: In Part 3, we’ll explore practical applications including real-world multi-service architectures, development workflows, and production deployment patterns that demonstrate these concepts in action.
Practical Applications and Examples
This section demonstrates real-world Docker Compose applications across different scenarios, from development environments to production-ready multi-service architectures.
Full-Stack Web Application
MEAN Stack Application
version: '3.8'
services:
# Angular Frontend
frontend:
build:
context: ./frontend
dockerfile: Dockerfile
ports:
- "4200:4200"
volumes:
- ./frontend:/app
- /app/node_modules
environment:
- API_URL=http://localhost:3000/api
depends_on:
- backend
# Express.js Backend
backend:
build:
context: ./backend
dockerfile: Dockerfile
ports:
- "3000:3000"
volumes:
- ./backend:/app
- /app/node_modules
environment:
- NODE_ENV=development
- MONGODB_URI=mongodb://mongo:27017/meanapp
- JWT_SECRET=your-jwt-secret
- PORT=3000
depends_on:
- mongo
- redis
# MongoDB Database
mongo:
image: mongo:5
ports:
- "27017:27017"
volumes:
- mongo_data:/data/db
- ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
environment:
- MONGO_INITDB_ROOT_USERNAME=admin
- MONGO_INITDB_ROOT_PASSWORD=password
- MONGO_INITDB_DATABASE=meanapp
# Redis Cache
redis:
image: redis:7-alpine
ports:
- "6379:6379"
volumes:
- redis_data:/data
command: redis-server --appendonly yes
# Nginx Reverse Proxy
nginx:
image: nginx:alpine
ports:
- "80:80"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf:ro
depends_on:
- frontend
- backend
volumes:
mongo_data:
redis_data:
Microservices E-Commerce Platform
version: '3.8'
services:
# API Gateway
api-gateway:
build: ./services/api-gateway
ports:
- "8080:8080"
environment:
- USER_SERVICE_URL=http://user-service:3001
- PRODUCT_SERVICE_URL=http://product-service:3002
- ORDER_SERVICE_URL=http://order-service:3003
- PAYMENT_SERVICE_URL=http://payment-service:3004
depends_on:
- user-service
- product-service
- order-service
- payment-service
# User Service
user-service:
build: ./services/user-service
environment:
- DATABASE_URL=postgresql://user:password@user-db:5432/users
- REDIS_URL=redis://redis:6379/0
depends_on:
- user-db
- redis
user-db:
image: postgres:13
environment:
- POSTGRES_DB=users
- POSTGRES_USER=user
- POSTGRES_PASSWORD=password
volumes:
- user_db_data:/var/lib/postgresql/data
# Product Service
product-service:
build: ./services/product-service
environment:
- DATABASE_URL=postgresql://user:password@product-db:5432/products
- ELASTICSEARCH_URL=http://elasticsearch:9200
depends_on:
- product-db
- elasticsearch
product-db:
image: postgres:13
environment:
- POSTGRES_DB=products
- POSTGRES_USER=user
- POSTGRES_PASSWORD=password
volumes:
- product_db_data:/var/lib/postgresql/data
# Order Service
order-service:
build: ./services/order-service
environment:
- DATABASE_URL=postgresql://user:password@order-db:5432/orders
- RABBITMQ_URL=amqp://guest:guest@rabbitmq:5672/
depends_on:
- order-db
- rabbitmq
order-db:
image: postgres:13
environment:
- POSTGRES_DB=orders
- POSTGRES_USER=user
- POSTGRES_PASSWORD=password
volumes:
- order_db_data:/var/lib/postgresql/data
# Payment Service
payment-service:
build: ./services/payment-service
environment:
- DATABASE_URL=postgresql://user:password@payment-db:5432/payments
- STRIPE_SECRET_KEY=${STRIPE_SECRET_KEY}
depends_on:
- payment-db
payment-db:
image: postgres:13
environment:
- POSTGRES_DB=payments
- POSTGRES_USER=user
- POSTGRES_PASSWORD=password
volumes:
- payment_db_data:/var/lib/postgresql/data
# Shared Services
redis:
image: redis:7-alpine
volumes:
- redis_data:/data
rabbitmq:
image: rabbitmq:3-management
ports:
- "15672:15672"
environment:
- RABBITMQ_DEFAULT_USER=guest
- RABBITMQ_DEFAULT_PASS=guest
volumes:
- rabbitmq_data:/var/lib/rabbitmq
elasticsearch:
image: elasticsearch:7.17.0
environment:
- discovery.type=single-node
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
volumes:
- elasticsearch_data:/usr/share/elasticsearch/data
volumes:
user_db_data:
product_db_data:
order_db_data:
payment_db_data:
redis_data:
rabbitmq_data:
elasticsearch_data:
Development Environment with Hot Reload
version: '3.8'
services:
# React Development Server
frontend:
build:
context: ./frontend
dockerfile: Dockerfile.dev
ports:
- "3000:3000"
volumes:
- ./frontend:/app
- /app/node_modules
environment:
- CHOKIDAR_USEPOLLING=true
- REACT_APP_API_URL=http://localhost:8000
stdin_open: true
tty: true
# Django Development Server
backend:
build:
context: ./backend
dockerfile: Dockerfile.dev
ports:
- "8000:8000"
volumes:
- ./backend:/app
environment:
- DEBUG=1
- DATABASE_URL=postgresql://user:password@db:5432/devdb
- REDIS_URL=redis://redis:6379/0
depends_on:
- db
- redis
command: python manage.py runserver 0.0.0.0:8000
# Celery Worker
celery:
build:
context: ./backend
dockerfile: Dockerfile.dev
volumes:
- ./backend:/app
environment:
- DEBUG=1
- DATABASE_URL=postgresql://user:password@db:5432/devdb
- REDIS_URL=redis://redis:6379/0
depends_on:
- db
- redis
command: celery -A myproject worker -l info
# Celery Beat Scheduler
celery-beat:
build:
context: ./backend
dockerfile: Dockerfile.dev
volumes:
- ./backend:/app
environment:
- DEBUG=1
- DATABASE_URL=postgresql://user:password@db:5432/devdb
- REDIS_URL=redis://redis:6379/0
depends_on:
- db
- redis
command: celery -A myproject beat -l info
# Database
db:
image: postgres:13
environment:
- POSTGRES_DB=devdb
- POSTGRES_USER=user
- POSTGRES_PASSWORD=password
volumes:
- postgres_data:/var/lib/postgresql/data
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
ports:
- "5432:5432"
# Redis
redis:
image: redis:7-alpine
ports:
- "6379:6379"
# Mailhog for Email Testing
mailhog:
image: mailhog/mailhog
ports:
- "1025:1025"
- "8025:8025"
volumes:
postgres_data:
CI/CD Pipeline Integration
version: '3.8'
services:
# Application Under Test
app:
build:
context: .
dockerfile: Dockerfile.test
environment:
- NODE_ENV=test
- DATABASE_URL=postgresql://test:test@test-db:5432/testdb
- REDIS_URL=redis://test-redis:6379/0
depends_on:
- test-db
- test-redis
command: npm test
# Test Database
test-db:
image: postgres:13
environment:
- POSTGRES_DB=testdb
- POSTGRES_USER=test
- POSTGRES_PASSWORD=test
tmpfs:
- /var/lib/postgresql/data
# Test Redis
test-redis:
image: redis:7-alpine
tmpfs:
- /data
# Integration Tests
integration-tests:
build:
context: .
dockerfile: Dockerfile.integration
environment:
- API_URL=http://app:3000
depends_on:
- app
command: npm run test:integration
# End-to-End Tests
e2e-tests:
build:
context: ./e2e
dockerfile: Dockerfile
environment:
- BASE_URL=http://app:3000
depends_on:
- app
command: npm run test:e2e
volumes:
- ./e2e/screenshots:/app/screenshots
- ./e2e/videos:/app/videos
Monitoring and Logging Stack
version: '3.8'
services:
# Application
app:
build: .
environment:
- LOG_LEVEL=info
logging:
driver: "fluentd"
options:
fluentd-address: localhost:24224
tag: app.logs
depends_on:
- fluentd
# Prometheus for Metrics
prometheus:
image: prom/prometheus
ports:
- "9090:9090"
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--web.console.libraries=/etc/prometheus/console_libraries'
- '--web.console.templates=/etc/prometheus/consoles'
# Grafana for Visualization
grafana:
image: grafana/grafana
ports:
- "3000:3000"
environment:
- GF_SECURITY_ADMIN_PASSWORD=admin
volumes:
- grafana_data:/var/lib/grafana
- ./grafana/dashboards:/etc/grafana/provisioning/dashboards
- ./grafana/datasources:/etc/grafana/provisioning/datasources
# Fluentd for Log Collection
fluentd:
build: ./fluentd
ports:
- "24224:24224"
volumes:
- ./fluentd/conf:/fluentd/etc
depends_on:
- elasticsearch
# Elasticsearch for Log Storage
elasticsearch:
image: elasticsearch:7.17.0
environment:
- discovery.type=single-node
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
volumes:
- elasticsearch_data:/usr/share/elasticsearch/data
# Kibana for Log Visualization
kibana:
image: kibana:7.17.0
ports:
- "5601:5601"
environment:
- ELASTICSEARCH_HOSTS=http://elasticsearch:9200
depends_on:
- elasticsearch
volumes:
prometheus_data:
grafana_data:
elasticsearch_data:
Multi-Environment Configuration
Base Configuration
# docker-compose.yml
version: '3.8'
services:
web:
build: .
environment:
- NODE_ENV=${NODE_ENV:-development}
depends_on:
- db
db:
image: postgres:13
environment:
- POSTGRES_DB=${DB_NAME:-myapp}
- POSTGRES_USER=${DB_USER:-user}
- POSTGRES_PASSWORD=${DB_PASSWORD:-password}
volumes:
postgres_data:
Development Override
# docker-compose.override.yml
version: '3.8'
services:
web:
ports:
- "3000:3000"
volumes:
- .:/app
- /app/node_modules
environment:
- DEBUG=1
command: npm run dev
db:
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
Production Configuration
# docker-compose.prod.yml
version: '3.8'
services:
web:
image: myapp:${TAG}
restart: always
environment:
- NODE_ENV=production
deploy:
replicas: 3
resources:
limits:
memory: 512M
reservations:
memory: 256M
db:
restart: always
volumes:
- /data/postgres:/var/lib/postgresql/data
deploy:
resources:
limits:
memory: 1G
reservations:
memory: 512M
nginx:
image: nginx:alpine
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx.prod.conf:/etc/nginx/nginx.conf
- ./ssl:/etc/ssl/certs
depends_on:
- web
Summary
This section demonstrated practical Docker Compose applications:
Real-World Architectures
- Full-Stack Applications: MEAN stack with proper service separation
- Microservices: E-commerce platform with multiple databases and message queues
- Development Environments: Hot reload and debugging capabilities
Operational Patterns
- CI/CD Integration: Testing pipelines with isolated environments
- Monitoring Stack: Complete observability with Prometheus, Grafana, and ELK
- Multi-Environment: Development, staging, and production configurations
Best Practices Applied
- Service Isolation: Proper networking and dependency management
- Data Persistence: Volume strategies for different data types
- Configuration Management: Environment-specific overrides and secrets
Next Steps: Part 4 covers advanced techniques including custom networks, service mesh integration, and complex orchestration patterns for enterprise applications.
Advanced Docker Compose Techniques and Patterns
This section explores sophisticated Docker Compose patterns for enterprise applications, including service mesh integration, advanced networking, and complex orchestration scenarios.
Service Mesh Integration with Envoy
version: '3.8'
services:
# Envoy Proxy as Service Mesh
envoy:
image: envoyproxy/envoy:v1.24.0
ports:
- "10000:10000"
- "9901:9901"
volumes:
- ./envoy.yaml:/etc/envoy/envoy.yaml
command: /usr/local/bin/envoy -c /etc/envoy/envoy.yaml
# Service A with Sidecar
service-a:
build: ./service-a
environment:
- SERVICE_NAME=service-a
- ENVOY_ADMIN_PORT=9901
depends_on:
- envoy
service-a-envoy:
image: envoyproxy/envoy:v1.24.0
volumes:
- ./envoy-sidecar-a.yaml:/etc/envoy/envoy.yaml
network_mode: "service:service-a"
depends_on:
- service-a
# Service B with Sidecar
service-b:
build: ./service-b
environment:
- SERVICE_NAME=service-b
depends_on:
- envoy
service-b-envoy:
image: envoyproxy/envoy:v1.24.0
volumes:
- ./envoy-sidecar-b.yaml:/etc/envoy/envoy.yaml
network_mode: "service:service-b"
depends_on:
- service-b
networks:
default:
driver: bridge
Advanced Networking Patterns
Multi-Tier Network Architecture
version: '3.8'
services:
# Load Balancer Tier
haproxy:
image: haproxy:2.6
ports:
- "80:80"
- "443:443"
- "8404:8404" # Stats
volumes:
- ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
networks:
- frontend
depends_on:
- web1
- web2
# Web Tier
web1:
build: ./web
networks:
- frontend
- backend
environment:
- INSTANCE_ID=web1
web2:
build: ./web
networks:
- frontend
- backend
environment:
- INSTANCE_ID=web2
# Application Tier
app1:
build: ./app
networks:
- backend
- database
environment:
- INSTANCE_ID=app1
app2:
build: ./app
networks:
- backend
- database
environment:
- INSTANCE_ID=app2
# Database Tier
db-master:
image: postgres:13 # note: the POSTGRES_REPLICATION_* variables below are not recognized by the official postgres image; they assume a replication-aware build such as bitnami/postgresql
networks:
- database
environment:
- POSTGRES_REPLICATION_MODE=master
- POSTGRES_REPLICATION_USER=replicator
- POSTGRES_REPLICATION_PASSWORD=replicator_password
volumes:
- db_master_data:/var/lib/postgresql/data
db-slave:
image: postgres:13
networks:
- database
environment:
- POSTGRES_REPLICATION_MODE=slave
- POSTGRES_MASTER_HOST=db-master
- POSTGRES_REPLICATION_USER=replicator
- POSTGRES_REPLICATION_PASSWORD=replicator_password
depends_on:
- db-master
networks:
frontend:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/24
backend:
driver: bridge
internal: true
ipam:
config:
- subnet: 172.21.0.0/24
database:
driver: bridge
internal: true
ipam:
config:
- subnet: 172.22.0.0/24
volumes:
db_master_data:
Network Policies and Security
version: '3.8'
services:
# DMZ Services
nginx:
image: nginx:alpine
ports:
- "80:80"
- "443:443"
networks:
dmz:
ipv4_address: 172.30.1.10
cap_drop:
- ALL
cap_add:
- CHOWN
- SETGID
- SETUID
read_only: true
tmpfs:
- /var/cache/nginx:noexec,nosuid,size=100m
# Application Services
api:
build: ./api
networks:
app_tier:
ipv4_address: 172.30.2.10
security_opt:
- no-new-privileges:true
user: "1000:1000"
read_only: true
tmpfs:
- /tmp:noexec,nosuid,size=50m
# Database Services
postgres:
image: postgres:13
networks:
data_tier:
ipv4_address: 172.30.3.10
security_opt:
- no-new-privileges:true
user: postgres
volumes:
- postgres_data:/var/lib/postgresql/data:Z
networks:
dmz:
driver: bridge
ipam:
config:
- subnet: 172.30.1.0/24
app_tier:
driver: bridge
internal: true
ipam:
config:
- subnet: 172.30.2.0/24
data_tier:
driver: bridge
internal: true
ipam:
config:
- subnet: 172.30.3.0/24
volumes:
postgres_data:
Complex Orchestration Patterns
Event-Driven Architecture
version: '3.8'
services:
# Event Bus
kafka:
image: confluentinc/cp-kafka:latest
environment:
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
depends_on:
- zookeeper
zookeeper:
image: confluentinc/cp-zookeeper:latest
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
# Event Producers
order-service:
build: ./services/order
environment:
- KAFKA_BROKERS=kafka:9092
- DATABASE_URL=postgresql://user:pass@order-db:5432/orders
depends_on:
- kafka
- order-db
# Event Consumers
inventory-service:
build: ./services/inventory
environment:
- KAFKA_BROKERS=kafka:9092
- DATABASE_URL=postgresql://user:pass@inventory-db:5432/inventory
depends_on:
- kafka
- inventory-db
notification-service:
build: ./services/notification
environment:
- KAFKA_BROKERS=kafka:9092
- SMTP_HOST=mailhog
- SMTP_PORT=1025
depends_on:
- kafka
- mailhog
# Event Processing
analytics-processor:
build: ./processors/analytics
environment:
- KAFKA_BROKERS=kafka:9092
- ELASTICSEARCH_URL=http://elasticsearch:9200
depends_on:
- kafka
- elasticsearch
# Supporting Services
order-db:
image: postgres:13
environment:
POSTGRES_DB: orders
POSTGRES_USER: user
POSTGRES_PASSWORD: pass
inventory-db:
image: postgres:13
environment:
POSTGRES_DB: inventory
POSTGRES_USER: user
POSTGRES_PASSWORD: pass
elasticsearch:
image: elasticsearch:7.17.0
environment:
- discovery.type=single-node
mailhog:
image: mailhog/mailhog
ports:
- "8025:8025"
CQRS Pattern Implementation
version: '3.8'
services:
# Command Side
command-api:
build: ./command-api
ports:
- "8080:8080"
environment:
- DATABASE_URL=postgresql://user:pass@write-db:5432/commands
- EVENT_STORE_URL=http://eventstore:2113
depends_on:
- write-db
- eventstore
# Query Side
query-api:
build: ./query-api
ports:
- "8081:8081"
environment:
- DATABASE_URL=postgresql://user:pass@read-db:5432/queries
- REDIS_URL=redis://redis:6379
depends_on:
- read-db
- redis
# Event Store
eventstore:
image: eventstore/eventstore:21.10.0-buster-slim
ports:
- "2113:2113"
environment:
- EVENTSTORE_CLUSTER_SIZE=1
- EVENTSTORE_RUN_PROJECTIONS=All
- EVENTSTORE_START_STANDARD_PROJECTIONS=true
volumes:
- eventstore_data:/var/lib/eventstore
# Projection Processors
projection-processor:
build: ./projection-processor
environment:
- EVENT_STORE_URL=http://eventstore:2113
- READ_DATABASE_URL=postgresql://user:pass@read-db:5432/queries
depends_on:
- eventstore
- read-db
# Databases
write-db:
image: postgres:13
environment:
POSTGRES_DB: commands
POSTGRES_USER: user
POSTGRES_PASSWORD: pass
volumes:
- write_db_data:/var/lib/postgresql/data
read-db:
image: postgres:13
environment:
POSTGRES_DB: queries
POSTGRES_USER: user
POSTGRES_PASSWORD: pass
volumes:
- read_db_data:/var/lib/postgresql/data
redis:
image: redis:7-alpine
volumes:
- redis_data:/data
volumes:
eventstore_data:
write_db_data:
read_db_data:
redis_data:
Advanced Volume and Storage Patterns
Distributed Storage with GlusterFS
version: '3.8'
services:
# GlusterFS Nodes
gluster1:
image: gluster/gluster-centos
privileged: true
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
- gluster1_data:/data
hostname: gluster1
networks:
storage:
ipv4_address: 172.25.0.10
gluster2:
image: gluster/gluster-centos
privileged: true
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
- gluster2_data:/data
hostname: gluster2
networks:
storage:
ipv4_address: 172.25.0.11
# Application using distributed storage
app:
build: ./app
volumes:
# Compose's long volume syntax does not accept driver/driver_opts at the
# service level; the named volume is declared external below and created
# out of band (e.g. with docker volume create) before bringing the stack up
- distributed_storage:/app/data
depends_on:
- gluster1
- gluster2
networks:
- storage
- app
networks:
storage:
driver: bridge
ipam:
config:
- subnet: 172.25.0.0/24
app:
driver: bridge
volumes:
gluster1_data:
gluster2_data:
distributed_storage:
external: true
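Because `distributed_storage` is declared external, the Gluster trusted pool, the replicated volume, and the Docker volume must all be created once before `docker-compose up`. The following is a hedged sketch of that one-time bootstrap (container names, brick paths, and the `gluster-volume` name are assumptions matching the compose file above; it also assumes the GlusterFS FUSE client is installed on the Docker host). By default it only prints the commands; set `DRY_RUN=0` to execute them.

```shell
#!/bin/sh
set -eu

# DRY_RUN=1 (the default here) only prints the commands; set DRY_RUN=0 to execute.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

bootstrap_gluster() {
  # Join the two nodes into one trusted pool (issued from gluster1).
  run docker exec gluster1 gluster peer probe 172.25.0.11
  # Create and start a 2-way replicated volume backed by the /data bricks.
  run docker exec gluster1 gluster volume create gluster-volume replica 2 \
    172.25.0.10:/data/brick 172.25.0.11:/data/brick force
  run docker exec gluster1 gluster volume start gluster-volume
  # Pre-create the external Docker volume that the app service mounts.
  run docker volume create --driver local \
    --opt type=glusterfs \
    --opt o=backup-volfile-servers=172.25.0.11 \
    --opt device=172.25.0.10:/gluster-volume \
    distributed_storage
}

bootstrap_gluster
```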
Backup and Disaster Recovery
version: '3.8'
services:
# Primary Application
app:
build: ./app
volumes:
- app_data:/data
environment:
- BACKUP_ENABLED=true
- BACKUP_SCHEDULE=0 2 * * *
# Backup Service
backup:
image: alpine
volumes:
- app_data:/source:ro
- backup_storage:/backup
- ./backup-scripts:/scripts:ro
environment:
- AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
- AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
- S3_BUCKET=${BACKUP_S3_BUCKET}
command: |
sh -c "
apk add --no-cache aws-cli
while true; do
/scripts/backup.sh
sleep 86400
done
"
# Disaster Recovery Testing
dr-test:
build: ./app
volumes:
- dr_data:/data
- backup_storage:/backup:ro
environment:
- RESTORE_MODE=true
profiles:
- disaster-recovery
command: |
sh -c "
echo 'Starting disaster recovery test...'
/scripts/restore.sh
/scripts/verify.sh
"
volumes:
app_data:
backup_storage:
dr_data:
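The backup container invokes `/scripts/backup.sh`, which is not shown above. A minimal sketch is below; the archive naming, S3 prefix, and seven-archive retention policy are assumptions, not part of the original configuration. As with the Gluster bootstrap, it defaults to printing the commands (`DRY_RUN=0` executes them).

```shell
#!/bin/sh
set -eu

# DRY_RUN=1 (the default here) only prints the commands; set DRY_RUN=0 to execute.
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

backup() {
  stamp=$(date +%Y%m%d-%H%M%S)
  archive="/backup/app-data-${stamp}.tar.gz"
  # Archive the read-only /source mount into local backup storage.
  run tar czf "$archive" -C /source .
  # Ship the archive off-host; S3_BUCKET comes from the compose environment.
  run aws s3 cp "$archive" "s3://${S3_BUCKET:-my-backups}/daily/"
  # Keep only the 7 newest local archives (simple, assumed retention policy).
  ls -1t /backup/app-data-*.tar.gz 2>/dev/null | tail -n +8 | while read -r old; do
    run rm -f "$old"
  done
}

backup
```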
Performance Optimization Patterns
Connection Pooling and Caching
version: '3.8'
services:
# Application with Connection Pooling
app:
build: ./app
environment:
- DATABASE_URL=postgresql://user:pass@pgbouncer:5432/myapp
- REDIS_URL=redis://redis-cluster:6379
depends_on:
- pgbouncer
- redis-cluster
# PgBouncer Connection Pooler
pgbouncer:
image: pgbouncer/pgbouncer:latest
environment:
- DATABASES_HOST=postgres
- DATABASES_PORT=5432
- DATABASES_USER=user
- DATABASES_PASSWORD=pass
- DATABASES_DBNAME=myapp
- POOL_MODE=transaction
- MAX_CLIENT_CONN=100
- DEFAULT_POOL_SIZE=25
depends_on:
- postgres
# Redis Cluster for Caching
redis-cluster:
image: redis:7-alpine
command: |
sh -c "
redis-server --cluster-enabled yes \
--cluster-config-file nodes.conf \
--cluster-node-timeout 5000 \
--appendonly yes \
--maxmemory 256mb \
--maxmemory-policy allkeys-lru
"
volumes:
- redis_data:/data
# Database
postgres:
image: postgres:13
environment:
POSTGRES_DB: myapp
POSTGRES_USER: user
POSTGRES_PASSWORD: pass
volumes:
- postgres_data:/var/lib/postgresql/data
- ./postgresql.conf:/etc/postgresql/postgresql.conf
command: postgres -c config_file=/etc/postgresql/postgresql.conf
volumes:
redis_data:
postgres_data:
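The env-driven PgBouncer image ultimately renders a `pgbouncer.ini`; the environment variables above correspond roughly to the following (a sketch — the exact file layout and auth settings depend on the image):

```
[databases]
myapp = host=postgres port=5432 dbname=myapp

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 5432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 100
default_pool_size = 25
```

Note that `pool_mode = transaction` hands connections back to the pool between transactions, so session state such as prepared statements and `SET` variables is not preserved across queries; applications must be written with that in mind.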
Summary
This section covered advanced Docker Compose techniques:
Enterprise Patterns
- Service Mesh: Envoy proxy integration for microservices communication
- Multi-Tier Architecture: Proper network segmentation and security
- Event-Driven Systems: Kafka-based event processing and CQRS patterns
Advanced Networking
- Network Policies: Security-focused network configuration
- Service Discovery: Complex routing and load balancing
- Network Isolation: DMZ and internal network separation
Storage and Performance
- Distributed Storage: GlusterFS integration for scalable storage
- Disaster Recovery: Automated backup and recovery testing
- Performance Optimization: Connection pooling and caching strategies
Next Steps: Part 5 focuses on best practices and optimization techniques for production-ready Docker Compose deployments, including security hardening, monitoring, and operational excellence.
Best Practices and Optimization
Docker Compose Best Practices and Optimization
This section covers production-ready best practices, security hardening, performance optimization, and operational excellence for Docker Compose deployments.
Security Best Practices
Container Security Hardening
version: '3.8'
services:
web:
build: ./web
# Security configurations
user: "1000:1000" # Non-root user
read_only: true # Read-only filesystem
cap_drop:
- ALL
cap_add:
- CHOWN
- SETGID
- SETUID
security_opt:
- no-new-privileges:true
- apparmor:docker-default
# Temporary filesystems for writable areas
tmpfs:
- /tmp:noexec,nosuid,size=100m
- /var/cache:noexec,nosuid,size=50m
# Resource limits
mem_limit: 512m
memswap_limit: 512m
cpu_count: 2
pids_limit: 100
# Health checks
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
database:
image: postgres:13-alpine
user: postgres
read_only: true
cap_drop:
- ALL
cap_add:
- CHOWN
- DAC_OVERRIDE
- FOWNER
- SETGID
- SETUID
security_opt:
- no-new-privileges:true
# Secure volume mounts
volumes:
- postgres_data:/var/lib/postgresql/data:Z
shm_size: 100m # use shm_size rather than bind-mounting the host's /dev/shm; size options are not valid on bind mounts
tmpfs:
- /tmp:noexec,nosuid,size=50m
- /run:noexec,nosuid,size=50m
volumes:
postgres_data:
driver: local
driver_opts:
type: none
o: bind
device: /secure/postgres/data
Secrets Management
version: '3.8'
services:
app:
build: ./app
secrets:
- db_password
- api_key
- ssl_cert
- ssl_key
environment:
- DB_PASSWORD_FILE=/run/secrets/db_password
- API_KEY_FILE=/run/secrets/api_key
- SSL_CERT_FILE=/run/secrets/ssl_cert
- SSL_KEY_FILE=/run/secrets/ssl_key
vault:
image: hashicorp/vault:latest
cap_add:
- IPC_LOCK
environment:
- VAULT_DEV_ROOT_TOKEN_ID=${VAULT_ROOT_TOKEN}
- VAULT_DEV_LISTEN_ADDRESS=0.0.0.0:8200
ports:
- "8200:8200"
volumes:
- vault_data:/vault/data
- ./vault-config:/vault/config
secrets:
db_password:
external: true
api_key:
external: true
ssl_cert:
file: ./secrets/ssl/cert.pem
ssl_key:
file: ./secrets/ssl/key.pem
volumes:
vault_data:
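The app above reads `*_FILE` environment variables pointing at mounted secrets. A common way to resolve them at container start is the `file_env` helper pattern used by many official Docker library entrypoints; a POSIX-sh sketch:

```shell
#!/bin/sh
set -eu

# file_env VAR: if VAR_FILE is set, load VAR's value from that file.
# This keeps the secret itself out of `docker inspect` output and the
# compose file; only the path travels through the environment.
file_env() {
  var="$1"
  file_var="${var}_FILE"
  # Indirect expansion, POSIX style.
  val=$(eval "printf '%s' \"\${$var:-}\"")
  file=$(eval "printf '%s' \"\${$file_var:-}\"")
  if [ -n "$val" ] && [ -n "$file" ]; then
    echo "error: both $var and $file_var are set" >&2
    return 1
  fi
  if [ -n "$file" ]; then
    val=$(cat "$file")
  fi
  export "$var"="$val"
}
```

In an entrypoint you would call, for example, `file_env DB_PASSWORD` and then `exec` the application, which sees a plain `DB_PASSWORD` variable.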
Network Security
version: '3.8'
services:
# WAF/Reverse Proxy
nginx:
image: nginx:alpine
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- ./nginx/ssl:/etc/nginx/ssl:ro
- ./nginx/modsecurity:/etc/nginx/modsecurity:ro
networks:
- frontend
depends_on:
- app
app:
build: ./app
networks:
- frontend
- backend
# No exposed ports - only accessible through nginx
database:
image: postgres:13
networks:
- backend # Isolated from frontend
environment:
- POSTGRES_SSL_MODE=require
networks:
frontend:
driver: bridge
driver_opts:
com.docker.network.bridge.name: frontend
com.docker.network.bridge.enable_icc: "false"
backend:
driver: bridge
internal: true # No external access
driver_opts:
com.docker.network.bridge.name: backend
com.docker.network.bridge.enable_icc: "true"
Performance Optimization
Resource Management
version: '3.8'
services:
web:
build: ./web
deploy:
resources:
limits:
cpus: '1.0'
memory: 1G
reservations:
cpus: '0.5'
memory: 512M
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
# Optimize for performance
init: true # Proper signal handling
stop_grace_period: 30s
# Logging optimization
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
compress: "true"
database:
image: postgres:13
deploy:
resources:
limits:
cpus: '2.0'
memory: 2G
reservations:
cpus: '1.0'
memory: 1G
# Database-specific optimizations
shm_size: 256m
command: |
postgres
-c shared_buffers=512MB
-c effective_cache_size=1GB
-c maintenance_work_mem=128MB
-c checkpoint_completion_target=0.9
-c wal_buffers=16MB
-c default_statistics_target=100
-c random_page_cost=1.1
-c effective_io_concurrency=200
volumes:
- postgres_data:/var/lib/postgresql/data
- postgres_logs:/var/log/postgresql
volumes:
postgres_data:
postgres_logs:
Caching Strategies
version: '3.8'
services:
app:
build: ./app
environment:
- REDIS_URL=redis://redis:6379/0
- MEMCACHED_URL=memcached:11211
depends_on:
- redis
- memcached
# Redis for session storage and caching
redis:
image: redis:7-alpine
command: |
redis-server
--maxmemory 512mb
--maxmemory-policy allkeys-lru
--save 900 1
--save 300 10
--save 60 10000
--appendonly yes
--appendfsync everysec
volumes:
- redis_data:/data
sysctls:
- net.core.somaxconn=65535
# Memcached for application caching
memcached:
image: memcached:alpine
command: memcached -m 256 -c 1024 -I 4m
# Varnish for HTTP caching
varnish:
image: varnish:stable
ports:
- "80:80"
volumes:
- ./varnish/default.vcl:/etc/varnish/default.vcl:ro
environment:
- VARNISH_SIZE=256M
depends_on:
- app
command: |
varnishd -F
-a :80
-T :6082
-f /etc/varnish/default.vcl
-s malloc,256m
volumes:
redis_data:
Database Optimization
version: '3.8'
services:
# Master Database
postgres-master:
image: postgres:13 # note: POSTGRES_REPLICATION_* assumes a replication-aware image such as bitnami/postgresql; the official image ignores these variables
environment:
- POSTGRES_REPLICATION_MODE=master
- POSTGRES_REPLICATION_USER=replicator
- POSTGRES_REPLICATION_PASSWORD=${REPLICATION_PASSWORD}
volumes:
- postgres_master_data:/var/lib/postgresql/data
- ./postgres/master.conf:/etc/postgresql/postgresql.conf
- ./postgres/pg_hba.conf:/etc/postgresql/pg_hba.conf
command: |
postgres
-c config_file=/etc/postgresql/postgresql.conf
-c hba_file=/etc/postgresql/pg_hba.conf
# Read Replica
postgres-replica:
image: postgres:13
environment:
- POSTGRES_REPLICATION_MODE=slave
- POSTGRES_MASTER_HOST=postgres-master
- POSTGRES_REPLICATION_USER=replicator
- POSTGRES_REPLICATION_PASSWORD=${REPLICATION_PASSWORD}
volumes:
- postgres_replica_data:/var/lib/postgresql/data
depends_on:
- postgres-master
# Connection Pooler
pgbouncer:
image: pgbouncer/pgbouncer:latest
environment:
- DATABASES_HOST=postgres-master
- DATABASES_PORT=5432
- DATABASES_USER=${DB_USER}
- DATABASES_PASSWORD=${DB_PASSWORD}
- DATABASES_DBNAME=${DB_NAME}
- POOL_MODE=transaction
- MAX_CLIENT_CONN=200
- DEFAULT_POOL_SIZE=50
- SERVER_RESET_QUERY=DISCARD ALL
depends_on:
- postgres-master
volumes:
postgres_master_data:
postgres_replica_data:
Monitoring and Observability
Comprehensive Monitoring Stack
version: '3.8'
services:
# Application with metrics
app:
build: ./app
environment:
- METRICS_ENABLED=true
- METRICS_PORT=9090
labels:
- "prometheus.io/scrape=true"
- "prometheus.io/port=9090"
- "prometheus.io/path=/metrics"
# Prometheus
prometheus:
image: prom/prometheus:latest
ports:
- "9090:9090"
volumes:
- ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
- ./prometheus/rules:/etc/prometheus/rules
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--web.console.libraries=/etc/prometheus/console_libraries'
- '--web.console.templates=/etc/prometheus/consoles'
- '--storage.tsdb.retention.time=30d'
- '--web.enable-lifecycle'
# Grafana
grafana:
image: grafana/grafana:latest
ports:
- "3000:3000"
environment:
- GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
- GF_USERS_ALLOW_SIGN_UP=false
volumes:
- grafana_data:/var/lib/grafana
- ./grafana/provisioning:/etc/grafana/provisioning
# AlertManager
alertmanager:
image: prom/alertmanager:latest
ports:
- "9093:9093"
volumes:
- ./alertmanager/alertmanager.yml:/etc/alertmanager/alertmanager.yml
- alertmanager_data:/alertmanager
# Node Exporter
node-exporter:
image: prom/node-exporter:latest
ports:
- "9100:9100"
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- '--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($$|/)'
# cAdvisor for container metrics
cadvisor:
image: gcr.io/cadvisor/cadvisor:latest
ports:
- "8080:8080"
volumes:
- /:/rootfs:ro
- /var/run:/var/run:rw
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:ro
volumes:
prometheus_data:
grafana_data:
alertmanager_data:
Logging Best Practices
version: '3.8'
services:
app:
build: ./app
logging:
driver: "fluentd"
options:
fluentd-address: "fluentd:24224"
tag: "app.{{.Name}}"
fluentd-async-connect: "true"
fluentd-retry-wait: "1s"
fluentd-max-retries: "30"
# Fluentd Log Aggregator
fluentd:
build: ./fluentd
ports:
- "24224:24224"
volumes:
- ./fluentd/conf:/fluentd/etc
- fluentd_data:/fluentd/log
environment:
- FLUENTD_CONF=fluent.conf
depends_on:
- elasticsearch
# Elasticsearch
elasticsearch:
image: elasticsearch:7.17.0
environment:
- discovery.type=single-node
- "ES_JAVA_OPTS=-Xms1g -Xmx1g"
- xpack.security.enabled=false
volumes:
- elasticsearch_data:/usr/share/elasticsearch/data
ulimits:
memlock:
soft: -1
hard: -1
# Kibana
kibana:
image: kibana:7.17.0
ports:
- "5601:5601"
environment:
- ELASTICSEARCH_HOSTS=http://elasticsearch:9200
depends_on:
- elasticsearch
volumes:
fluentd_data:
elasticsearch_data:
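The compose file mounts `./fluentd/conf` into the aggregator. A minimal `fluent.conf` that accepts the forwarded traffic and ships it to Elasticsearch might look like the following sketch (it assumes the custom `./fluentd` image bakes in the `fluent-plugin-elasticsearch` gem):

```
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match app.**>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
  <buffer>
    flush_interval 5s
  </buffer>
</match>
```

The `app.**` pattern matches the `app.{{.Name}}` tag set by the logging driver options above, so each container's logs arrive pre-tagged with its name.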
Deployment Best Practices
Blue-Green Deployment
# docker-compose.blue.yml
version: '3.8'
services:
app-blue:
build: ./app
image: myapp:${BLUE_VERSION}
environment:
- ENVIRONMENT=blue
- VERSION=${BLUE_VERSION}
networks:
- app-network
labels:
- "deployment=blue"
nginx-blue:
image: nginx:alpine
ports:
- "8080:80"
volumes:
- ./nginx/blue.conf:/etc/nginx/nginx.conf
depends_on:
- app-blue
networks:
- app-network
networks:
app-network:
external: true
# docker-compose.green.yml
version: '3.8'
services:
app-green:
build: ./app
image: myapp:${GREEN_VERSION}
environment:
- ENVIRONMENT=green
- VERSION=${GREEN_VERSION}
networks:
- app-network
labels:
- "deployment=green"
nginx-green:
image: nginx:alpine
ports:
- "8081:80"
volumes:
- ./nginx/green.conf:/etc/nginx/nginx.conf
depends_on:
- app-green
networks:
- app-network
networks:
app-network:
external: true
Health Checks and Graceful Shutdown
version: '3.8'
services:
app:
build: ./app
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
# Graceful shutdown
stop_signal: SIGTERM
stop_grace_period: 30s
# Proper init system
init: true
environment:
- SHUTDOWN_TIMEOUT=25 # Less than stop_grace_period
database:
image: postgres:13
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
interval: 10s
timeout: 5s
retries: 5
start_period: 30s
stop_grace_period: 60s # Allow time for checkpoint
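Compose evaluates the healthcheck inside the container, but the same probe is useful from deployment and smoke-test scripts outside it. A small retry helper mirroring the interval/retries semantics of the healthchecks above (a sketch; the example URL matches the app service's probe):

```shell
#!/bin/sh
set -eu

# retry ATTEMPTS DELAY CMD... : run CMD until it succeeds, at most ATTEMPTS
# times, sleeping DELAY seconds between tries. Returns non-zero if CMD
# never succeeds, mirroring healthcheck interval/retries semantics.
retry() {
  attempts="$1"; delay="$2"; shift 2
  i=1
  while :; do
    if "$@"; then return 0; fi
    [ "$i" -ge "$attempts" ] && return 1
    i=$((i + 1))
    sleep "$delay"
  done
}

# Example probe, matching the compose healthcheck above:
#   retry 3 10 curl -fsS http://localhost:3000/health
```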
Summary
This section covered production-ready best practices:
Security Excellence
- Container Hardening: Non-root users, read-only filesystems, capability dropping
- Secrets Management: External secrets and secure credential handling
- Network Security: WAF integration, network isolation, and SSL/TLS
Performance Optimization
- Resource Management: CPU and memory limits with proper reservations
- Caching Strategies: Multi-layer caching with Redis, Memcached, and Varnish
- Database Optimization: Master-slave replication and connection pooling
Operational Excellence
- Monitoring: Comprehensive metrics, logging, and alerting
- Deployment Patterns: Blue-green deployments and graceful shutdowns
- Health Checks: Proper readiness and liveness probes
Next Steps: Part 6 demonstrates complete real-world implementations that combine all these best practices into production-ready systems with full CI/CD integration and operational monitoring.
Real-World Projects and Implementation
Real-World Docker Compose Projects and Implementation
This final section demonstrates complete production-ready implementations, combining all concepts learned throughout this guide into real-world systems with full CI/CD integration and operational excellence.
Project 1: Enterprise SaaS Platform
Complete Multi-Tenant Architecture
# docker-compose.prod.yml
version: '3.8'
services:
# Load Balancer with SSL Termination
traefik:
image: traefik:v2.9
ports:
- "80:80"
- "443:443"
- "8080:8080"
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./traefik/traefik.yml:/etc/traefik/traefik.yml:ro
- ./traefik/dynamic:/etc/traefik/dynamic:ro
- traefik_certs:/certs
networks:
- frontend
- monitoring
labels:
- "traefik.enable=true"
- "traefik.http.routers.dashboard.rule=Host(`traefik.${DOMAIN}`)"
- "traefik.http.routers.dashboard.tls.certresolver=letsencrypt"
# Frontend Application
frontend:
image: ${REGISTRY}/frontend:${VERSION}
deploy:
replicas: 3
resources:
limits:
memory: 512M
reservations:
memory: 256M
networks:
- frontend
labels:
- "traefik.enable=true"
- "traefik.http.routers.frontend.rule=Host(`${DOMAIN}`) || Host(`www.${DOMAIN}`)"
- "traefik.http.routers.frontend.tls.certresolver=letsencrypt"
- "traefik.http.services.frontend.loadbalancer.server.port=80"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost/health"]
interval: 30s
timeout: 10s
retries: 3
# API Gateway
api-gateway:
image: ${REGISTRY}/api-gateway:${VERSION}
deploy:
replicas: 2
resources:
limits:
memory: 1G
reservations:
memory: 512M
networks:
- frontend
- backend
environment:
- JWT_SECRET_FILE=/run/secrets/jwt_secret
- DATABASE_URL=postgresql://api_user:${API_DB_PASSWORD}@postgres-master:5432/api_db
- REDIS_URL=redis://redis-cluster:6379/0
secrets:
- jwt_secret
labels:
- "traefik.enable=true"
- "traefik.http.routers.api.rule=Host(`api.${DOMAIN}`)"
- "traefik.http.routers.api.tls.certresolver=letsencrypt"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
interval: 30s
timeout: 10s
retries: 3
# User Service
user-service:
image: ${REGISTRY}/user-service:${VERSION}
deploy:
replicas: 2
networks:
- backend
- database
environment:
- DATABASE_URL=postgresql://user_svc:${USER_DB_PASSWORD}@postgres-master:5432/users_db
- REDIS_URL=redis://redis-cluster:6379/1
- ENCRYPTION_KEY_FILE=/run/secrets/encryption_key
secrets:
- encryption_key
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8001/health"]
interval: 30s
timeout: 10s
retries: 3
# Tenant Service
tenant-service:
image: ${REGISTRY}/tenant-service:${VERSION}
deploy:
replicas: 2
networks:
- backend
- database
environment:
- DATABASE_URL=postgresql://tenant_svc:${TENANT_DB_PASSWORD}@postgres-master:5432/tenants_db
- REDIS_URL=redis://redis-cluster:6379/2
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8002/health"]
interval: 30s
timeout: 10s
retries: 3
# Billing Service
billing-service:
image: ${REGISTRY}/billing-service:${VERSION}
networks:
- backend
- database
environment:
- DATABASE_URL=postgresql://billing_svc:${BILLING_DB_PASSWORD}@postgres-master:5432/billing_db
- STRIPE_SECRET_KEY_FILE=/run/secrets/stripe_secret
- WEBHOOK_SECRET_FILE=/run/secrets/stripe_webhook_secret
secrets:
- stripe_secret
- stripe_webhook_secret
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8003/health"]
interval: 30s
timeout: 10s
retries: 3
# Analytics Service
analytics-service:
image: ${REGISTRY}/analytics-service:${VERSION}
networks:
- backend
- database
environment:
- CLICKHOUSE_URL=http://clickhouse:8123
- KAFKA_BROKERS=kafka:9092
depends_on:
- clickhouse
- kafka
# Database Cluster
postgres-master:
image: postgres:14 # note: POSTGRES_REPLICATION_* assumes a replication-aware image (e.g. bitnami/postgresql); POSTGRES_MULTIPLE_* is consumed by the mounted init script, not the image itself
environment:
- POSTGRES_REPLICATION_MODE=master
- POSTGRES_REPLICATION_USER=replicator
- POSTGRES_REPLICATION_PASSWORD_FILE=/run/secrets/replication_password
- POSTGRES_MULTIPLE_DATABASES=api_db,users_db,tenants_db,billing_db
- POSTGRES_MULTIPLE_USERS=api_user,user_svc,tenant_svc,billing_svc
volumes:
- postgres_master_data:/var/lib/postgresql/data
- ./postgres/init-multiple-databases.sh:/docker-entrypoint-initdb.d/init-multiple-databases.sh
- ./postgres/postgresql.conf:/etc/postgresql/postgresql.conf
networks:
- database
secrets:
- replication_password
command: postgres -c config_file=/etc/postgresql/postgresql.conf
postgres-replica:
image: postgres:14
environment:
- POSTGRES_REPLICATION_MODE=slave
- POSTGRES_MASTER_HOST=postgres-master
- POSTGRES_REPLICATION_USER=replicator
- POSTGRES_REPLICATION_PASSWORD_FILE=/run/secrets/replication_password
volumes:
- postgres_replica_data:/var/lib/postgresql/data
networks:
- database
secrets:
- replication_password
depends_on:
- postgres-master
# Redis Cluster
redis-cluster:
image: redis:7-alpine
command: |
redis-server
--cluster-enabled yes
--cluster-config-file nodes.conf
--cluster-node-timeout 5000
--appendonly yes
--maxmemory 1gb
--maxmemory-policy allkeys-lru
volumes:
- redis_data:/data
networks:
- backend
# ClickHouse for Analytics
clickhouse:
image: clickhouse/clickhouse-server:latest
environment:
- CLICKHOUSE_DB=analytics
- CLICKHOUSE_USER=analytics_user
- CLICKHOUSE_PASSWORD=${CLICKHOUSE_PASSWORD}
volumes:
- clickhouse_data:/var/lib/clickhouse
networks:
- database
# Kafka for Event Streaming
kafka:
image: confluentinc/cp-kafka:latest
environment:
- KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
- KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
- KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
- KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
volumes:
- kafka_data:/var/lib/kafka/data
networks:
- backend
depends_on:
- zookeeper
zookeeper:
image: confluentinc/cp-zookeeper:latest
environment:
- ZOOKEEPER_CLIENT_PORT=2181
- ZOOKEEPER_TICK_TIME=2000
volumes:
- zookeeper_data:/var/lib/zookeeper/data
networks:
- backend
# Background Workers
worker:
image: ${REGISTRY}/worker:${VERSION}
deploy:
replicas: 3
networks:
- backend
- database
environment:
- CELERY_BROKER_URL=redis://redis-cluster:6379/3
- DATABASE_URL=postgresql://worker:${WORKER_DB_PASSWORD}@postgres-master:5432/jobs_db
command: celery -A app worker -l info -c 4
scheduler:
image: ${REGISTRY}/worker:${VERSION}
networks:
- backend
environment:
- CELERY_BROKER_URL=redis://redis-cluster:6379/3
command: celery -A app beat -l info
networks:
frontend:
driver: bridge
backend:
driver: bridge
internal: true
database:
driver: bridge
internal: true
monitoring:
driver: bridge
volumes:
traefik_certs:
postgres_master_data:
postgres_replica_data:
redis_data:
clickhouse_data:
kafka_data:
zookeeper_data:
secrets:
jwt_secret:
external: true
encryption_key:
external: true
stripe_secret:
external: true
stripe_webhook_secret:
external: true
replication_password:
external: true
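The postgres-master service mounts `./postgres/init-multiple-databases.sh`, which the official image runs once on first initialization. A sketch of its core logic is below; it emits the SQL for each comma-separated database/user pair so the pairing logic is easy to inspect. In a real script the output would be piped into `psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER"`, and each user would also be given the password expected by the corresponding service (omitted here).

```shell
#!/bin/sh
set -eu

# Emit CREATE USER / CREATE DATABASE statements for the comma-separated
# lists in POSTGRES_MULTIPLE_DATABASES and POSTGRES_MULTIPLE_USERS,
# paired by position (api_db -> api_user, users_db -> user_svc, ...).
generate_sql() {
  dbs="$1"; users="$2"
  old_ifs="$IFS"; IFS=','
  set -- $users
  for db in $dbs; do
    user="$1"; shift
    printf 'CREATE USER %s;\n' "$user"
    printf 'CREATE DATABASE %s OWNER %s;\n' "$db" "$user"
  done
  IFS="$old_ifs"
}

# Example, matching the compose environment above:
#   generate_sql "$POSTGRES_MULTIPLE_DATABASES" "$POSTGRES_MULTIPLE_USERS"
```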
Monitoring and Observability Stack
# docker-compose.monitoring.yml
version: '3.8'
services:
# Prometheus
prometheus:
image: prom/prometheus:latest
ports:
- "9090:9090"
volumes:
- ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
- ./prometheus/rules:/etc/prometheus/rules
- prometheus_data:/prometheus
networks:
- monitoring
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--storage.tsdb.retention.time=30d'
- '--web.enable-lifecycle'
- '--web.enable-admin-api'
# Grafana
grafana:
image: grafana/grafana:latest
ports:
- "3000:3000"
environment:
- GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
- GF_USERS_ALLOW_SIGN_UP=false
- GF_INSTALL_PLUGINS=grafana-piechart-panel,grafana-worldmap-panel
volumes:
- grafana_data:/var/lib/grafana
- ./grafana/provisioning:/etc/grafana/provisioning
networks:
- monitoring
# AlertManager
alertmanager:
image: prom/alertmanager:latest
ports:
- "9093:9093"
volumes:
- ./alertmanager/alertmanager.yml:/etc/alertmanager/alertmanager.yml
- alertmanager_data:/alertmanager
networks:
- monitoring
# Jaeger for Distributed Tracing
jaeger:
image: jaegertracing/all-in-one:latest
ports:
- "16686:16686"
- "14268:14268"
environment:
- COLLECTOR_ZIPKIN_HTTP_PORT=9411
networks:
- monitoring
# ELK Stack for Logging
elasticsearch:
image: elasticsearch:7.17.0
environment:
- discovery.type=single-node
- "ES_JAVA_OPTS=-Xms2g -Xmx2g"
- xpack.security.enabled=false
volumes:
- elasticsearch_data:/usr/share/elasticsearch/data
networks:
- monitoring
logstash:
image: logstash:7.17.0
volumes:
- ./logstash/pipeline:/usr/share/logstash/pipeline
- ./logstash/config:/usr/share/logstash/config
networks:
- monitoring
depends_on:
- elasticsearch
kibana:
image: kibana:7.17.0
ports:
- "5601:5601"
environment:
- ELASTICSEARCH_HOSTS=http://elasticsearch:9200
networks:
- monitoring
depends_on:
- elasticsearch
networks:
monitoring:
external: true
volumes:
prometheus_data:
grafana_data:
alertmanager_data:
elasticsearch_data:
Project 2: CI/CD Pipeline with GitOps
Complete CI/CD Configuration
# .gitlab-ci.yml
stages:
- build
- test
- security
- package
- deploy-staging
- integration-tests
- deploy-production
variables:
DOCKER_DRIVER: overlay2
DOCKER_TLS_CERTDIR: "/certs"
REGISTRY: $CI_REGISTRY_IMAGE
COMPOSE_PROJECT_NAME: $CI_PROJECT_NAME-$CI_COMMIT_REF_SLUG
services:
- docker:20.10.16-dind
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
# Build Stage
build:
stage: build
script:
- |
# Build all services
docker-compose -f docker-compose.build.yml build
# Tag and push images
for service in frontend api-gateway user-service tenant-service billing-service analytics-service worker; do
docker tag ${COMPOSE_PROJECT_NAME}_${service}:latest $REGISTRY/$service:$CI_COMMIT_SHA
docker push $REGISTRY/$service:$CI_COMMIT_SHA
docker tag $REGISTRY/$service:$CI_COMMIT_SHA $REGISTRY/$service:latest
docker push $REGISTRY/$service:latest
done
# Test Stage
test:
stage: test
script:
- |
# Run unit tests
docker-compose -f docker-compose.test.yml up --build --abort-on-container-exit
# Run integration tests
docker-compose -f docker-compose.integration.yml up --build --abort-on-container-exit
artifacts:
reports:
      junit: test-results.xml
      coverage: coverage.xml

# Security Scanning
security-scan:
  stage: security
  script:
    - |
      # Scan images for vulnerabilities
      for service in frontend api-gateway user-service tenant-service billing-service; do
        docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
          aquasec/trivy:latest image --exit-code 1 --severity HIGH,CRITICAL \
          $REGISTRY/$service:$CI_COMMIT_SHA
      done

      # SAST scanning
      docker run --rm -v $PWD:/code \
        registry.gitlab.com/gitlab-org/security-products/sast:latest /analyzer run

# Package Stage
package:
  stage: package
  script:
    - |
      # Create deployment package
      envsubst < docker-compose.prod.template.yml > docker-compose.prod.yml
      tar czf deployment-$CI_COMMIT_SHA.tar.gz \
        docker-compose.prod.yml \
        docker-compose.monitoring.yml \
        traefik/ \
        prometheus/ \
        grafana/ \
        scripts/
  artifacts:
    paths:
      - deployment-$CI_COMMIT_SHA.tar.gz
    expire_in: 1 week

# Staging Deployment
deploy-staging:
  stage: deploy-staging
  script:
    - |
      # Deploy to the staging environment
      export VERSION=$CI_COMMIT_SHA
      export DOMAIN=staging.example.com
      export REGISTRY=$CI_REGISTRY_IMAGE

      # Update the staging stack
      docker-compose -f docker-compose.staging.yml down
      docker-compose -f docker-compose.staging.yml pull
      docker-compose -f docker-compose.staging.yml up -d

      # Wait for services to become healthy
      ./scripts/wait-for-health.sh staging
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - develop

# Integration Tests
integration-tests:
  stage: integration-tests
  script:
    - |
      # Run end-to-end tests against staging
      docker run --rm \
        -e BASE_URL=https://staging.example.com \
        -v $PWD/e2e:/tests \
        cypress/included:latest
  artifacts:
    when: always
    paths:
      - e2e/screenshots/
      - e2e/videos/
  only:
    - develop

# Production Deployment
deploy-production:
  stage: deploy-production
  script:
    - |
      # Blue-green deployment
      export VERSION=$CI_COMMIT_SHA
      export DOMAIN=example.com
      export REGISTRY=$CI_REGISTRY_IMAGE

      # Determine the current and target environments
      CURRENT=$(curl -s https://api.example.com/deployment/current)
      TARGET=$([ "$CURRENT" = "blue" ] && echo "green" || echo "blue")
      echo "Deploying to $TARGET environment"

      # Deploy to the target environment
      docker-compose -f docker-compose.$TARGET.yml down
      docker-compose -f docker-compose.$TARGET.yml pull
      docker-compose -f docker-compose.$TARGET.yml up -d

      # Health check
      ./scripts/wait-for-health.sh $TARGET

      # Switch traffic
      ./scripts/switch-traffic.sh $TARGET

      # Clean up the old environment after a successful switch
      sleep 300  # wait 5 minutes
      OTHER=$([ "$TARGET" = "blue" ] && echo "green" || echo "blue")
      docker-compose -f docker-compose.$OTHER.yml down
  environment:
    name: production
    url: https://example.com
  when: manual
  only:
    - main
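The blue/green toggle appears twice in the deploy job, and an inverted branch there would tear down the live environment. Isolating it in a small helper makes it testable on its own; a minimal sketch (the `other_color` function name is our own, not part of the pipeline):

```shell
#!/bin/sh
# Given the color of the live environment, return the idle one.
# Any unrecognized input falls through to "blue", matching the
# toggle expression used in the deploy-production job above.
other_color() {
    [ "$1" = "blue" ] && echo "green" || echo "blue"
}
```

The deploy job could then compute both values as `TARGET=$(other_color "$CURRENT")` and `OTHER=$(other_color "$TARGET")`, keeping the toggle logic in exactly one place.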
Deployment Scripts
scripts/wait-for-health.sh:
#!/bin/bash
ENVIRONMENT=$1
MAX_ATTEMPTS=30
ATTEMPT=0

echo "Waiting for $ENVIRONMENT environment to be healthy..."

while [ $ATTEMPT -lt $MAX_ATTEMPTS ]; do
  HEALTHY=true

  # Check each service's health endpoint
  for service in frontend api-gateway user-service tenant-service billing-service; do
    if ! curl -f -s "http://$service-$ENVIRONMENT:8000/health" > /dev/null; then
      HEALTHY=false
      break
    fi
  done

  if [ "$HEALTHY" = true ]; then
    echo "All services are healthy!"
    exit 0
  fi

  echo "Attempt $((ATTEMPT + 1))/$MAX_ATTEMPTS - Services not ready yet..."
  sleep 10
  ATTEMPT=$((ATTEMPT + 1))
done

echo "Health check failed after $MAX_ATTEMPTS attempts"
exit 1
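The health-check script follows a generic retry pattern (poll, bounded attempts, delay between tries) that can be factored out and reused for any readiness check. A sketch; the `retry` helper is our own illustration, not a script from the repository:

```shell
#!/bin/sh
# retry MAX DELAY CMD...: run CMD up to MAX times, sleeping DELAY
# seconds between failed attempts; succeed as soon as CMD does.
retry() {
    max=$1
    delay=$2
    shift 2
    attempt=1
    while [ "$attempt" -le "$max" ]; do
        if "$@"; then
            return 0
        fi
        echo "Attempt $attempt/$max failed" >&2
        attempt=$((attempt + 1))
        sleep "$delay"
    done
    return 1
}
```

With such a helper, a single-service check reduces to one line, e.g. `retry 30 10 curl -f -s http://frontend-staging:8000/health`.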
scripts/switch-traffic.sh:
#!/bin/bash
TARGET=$1

echo "Switching traffic to $TARGET environment..."

# Update the load balancer configuration
cat > /etc/nginx/conf.d/upstream.conf << EOF
upstream backend {
    server app-$TARGET:8000;
}
EOF

# Reload nginx
nginx -s reload

# Update the deployment marker
echo "$TARGET" > /var/www/html/deployment/current
echo "Traffic switched to $TARGET environment"
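Because switch-traffic.sh rewrites live nginx configuration, it can help to separate the rendering step from the reload so the generated file can be inspected (or validated with `nginx -t`) first. A sketch under that assumption; the function and argument names are ours:

```shell
#!/bin/sh
# Render the upstream stanza for a given color into a file, so it
# can be checked before nginx is asked to reload it.
write_upstream() {
    target=$1
    out=$2
    cat > "$out" <<EOF
upstream backend {
    server app-$target:8000;
}
EOF
}
```

switch-traffic.sh could then render to a temporary file, run `nginx -t` against it, and only move it into `/etc/nginx/conf.d/` and reload if validation passes.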
Project 3: Disaster Recovery and Backup System
# docker-compose.backup.yml
version: '3.8'

services:
  # Backup Orchestrator
  backup-orchestrator:
    build: ./backup
    environment:
      - BACKUP_SCHEDULE=0 2 * * *
      - S3_BUCKET=${BACKUP_S3_BUCKET}
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - ENCRYPTION_KEY_FILE=/run/secrets/backup_encryption_key
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - backup_temp:/tmp/backup
    secrets:
      - backup_encryption_key
    networks:
      - backup

  # Database Backup
  postgres-backup:
    image: postgres:14
    environment:
      - PGPASSWORD_FILE=/run/secrets/postgres_password
    volumes:
      - backup_temp:/backup
      - ./scripts/postgres-backup.sh:/backup.sh
    secrets:
      - postgres_password
    networks:
      - backup
      - database
    profiles:
      - backup
    command: /backup.sh

  # Volume Backup
  volume-backup:
    image: alpine
    volumes:
      - redis_data:/source/redis:ro
      - clickhouse_data:/source/clickhouse:ro
      - backup_temp:/backup
    profiles:
      - backup
    # Note the $$ -- Compose interpolation requires a doubled dollar sign
    # for a literal $, otherwise $(date ...) is rejected as an invalid
    # interpolation expression.
    command: |
      sh -c "
        tar czf /backup/volumes_$$(date +%Y%m%d_%H%M%S).tar.gz -C /source .
        echo 'Volume backup completed'
      "

  # Disaster Recovery Test
  dr-test:
    build: ./dr-test
    environment:
      - TEST_ENVIRONMENT=dr-test
      - RESTORE_FROM_BACKUP=latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - backup_temp:/backup:ro
    profiles:
      - disaster-recovery
    command: |
      sh -c "
        echo 'Starting disaster recovery test...'
        ./restore-and-test.sh
      "

networks:
  backup:
    driver: bridge
  database:
    external: true

volumes:
  backup_temp:
  redis_data:
    external: true
  clickhouse_data:
    external: true

secrets:
  backup_encryption_key:
    external: true
  postgres_password:
    external: true
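The compose file mounts `./scripts/postgres-backup.sh` into the postgres-backup service, but the script itself is not shown. A minimal sketch, assuming the database is reachable as host `postgres` with database `myapp` and user `user` (all three names are assumptions, chosen to match the earlier examples):

```shell
#!/bin/sh
# Sketch of scripts/postgres-backup.sh -- host, database, and user
# names are assumptions; adjust them to match your stack.
set -e

backup_path() {
    # Timestamped target file on the shared backup volume.
    echo "/backup/postgres_$(date +%Y%m%d_%H%M%S).sql.gz"
}

OUT=$(backup_path)

# pg_dump does not read PGPASSWORD_FILE itself (that convention comes
# from the postgres image's entrypoint), so export the password from
# the mounted secret before dumping.
if [ -f "${PGPASSWORD_FILE:-}" ]; then
    PGPASSWORD=$(cat "$PGPASSWORD_FILE")
    export PGPASSWORD
fi

# Only attempt the dump when credentials and the client are available.
if [ -n "${PGPASSWORD:-}" ] && command -v pg_dump >/dev/null 2>&1; then
    pg_dump -h postgres -U user myapp | gzip > "$OUT"
    echo "Database backup written to $OUT"
fi
```

The backup-orchestrator would then run this service on its cron schedule with `docker-compose --profile backup run postgres-backup`, and encrypt and ship the resulting file to S3.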
Summary
This final section demonstrated:
Enterprise-Grade Implementation
- Multi-Tenant SaaS Platform: Complete production architecture with load balancing, service mesh, and multi-database setup
- Monitoring Stack: Full observability with Prometheus, Grafana, Jaeger, and ELK stack
- Security Integration: Secrets management, network isolation, and vulnerability scanning
DevOps Excellence
- CI/CD Pipeline: Complete GitLab CI pipeline with security scanning, testing, and blue-green deployment
- GitOps Workflow: Infrastructure as code with automated deployment and rollback capabilities
- Quality Gates: Comprehensive testing including unit, integration, and end-to-end tests
Operational Resilience
- Disaster Recovery: Automated backup systems with encryption and cloud storage
- High Availability: Database replication, service redundancy, and health monitoring
- Performance Optimization: Caching layers, connection pooling, and resource management
Key Achievements
Throughout this Docker Compose guide, you’ve mastered:
- Fundamental Concepts: Service orchestration, networking, and volume management
- Advanced Patterns: Service mesh integration, event-driven architecture, and CQRS
- Production Readiness: Security hardening, performance optimization, and monitoring
- Real-World Implementation: Complete enterprise systems with CI/CD and disaster recovery
Congratulations! You now have the expertise to design, implement, and operate production-grade Docker Compose applications. You can confidently tackle complex multi-service architectures, implement robust CI/CD pipelines, and ensure operational excellence in containerized environments.