Learn the essential concepts and practical skills needed to containerize applications using Docker, from basic commands to production-ready deployments.

Introduction and Setup

Introduction to Docker and Containerization

Docker revolutionizes application deployment by packaging applications and their dependencies into lightweight, portable containers that run consistently across different environments.

What is Docker?

Docker is a containerization platform that allows you to package applications with all their dependencies into portable containers. Think of containers as lightweight, standalone packages that include everything needed to run an application: code, runtime, system tools, libraries, and settings.

Traditional Deployment vs. Containerization

Traditional Deployment:
┌─────────────────────────────────────┐
│           Physical Server           │
├─────────────────────────────────────┤
│         Operating System            │
├─────────────────────────────────────┤
│    App A    │    App B    │  App C  │
│  (Python)   │   (Node.js) │ (Java)  │
│   Deps A    │    Deps B   │ Deps C  │
└─────────────────────────────────────┘
Problems: Dependency conflicts, "works on my machine"

Docker Containerization:
┌─────────────────────────────────────┐
│           Physical Server           │
├─────────────────────────────────────┤
│         Operating System            │
├─────────────────────────────────────┤
│            Docker Engine            │
├─────────────────────────────────────┤
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ │Container│ │Container│ │Container│ │
│ │   A     │ │   B     │ │   C     │ │
│ │(Python) │ │(Node.js)│ │ (Java)  │ │
│ │ Deps A  │ │ Deps B  │ │ Deps C  │ │
│ └─────────┘ └─────────┘ └─────────┘ │
└─────────────────────────────────────┘
Benefits: Isolation, consistency, portability

Key Docker Concepts

Images

  • Docker Image: A read-only template used to create containers
  • Contains application code, dependencies, and configuration
  • Built in layers for efficiency
  • Stored in registries (Docker Hub, private registries)

Containers

  • Docker Container: A running instance of an image
  • Lightweight and isolated from other containers
  • Can be started, stopped, moved, and deleted
  • Share the host OS kernel
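
A quick way to confirm the shared kernel: a container reports the host's kernel version, because it has none of its own.

# On a Linux host, this prints the same version as running uname -r directly
docker run --rm alpine uname -r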

Dockerfile

  • Dockerfile: A text file with instructions to build an image
  • Defines the environment and steps to create your application image
  • Version-controlled and reproducible
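
As a minimal sketch (app.py stands in for your application's entry point):

# Minimal Dockerfile: base image, copy the code, set the default command
FROM python:3.9-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]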

Registry

  • Docker Registry: A service for storing and distributing images
  • Docker Hub is the default public registry
  • Can host private registries for internal use
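
Publishing an image is a tag-then-push flow (registry.example.com is a placeholder for your registry's address):

# Tag a local image for a registry, then push it
docker tag my-app:latest registry.example.com/team/my-app:latest
docker push registry.example.com/team/my-app:latest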

Why Use Docker?

1. Consistency

# Same container runs identically everywhere
docker run myapp:latest
# Works on development, staging, and production

2. Portability

# Move containers between environments
docker save myapp:latest > myapp.tar
docker load < myapp.tar

3. Efficiency

# Lightweight compared to VMs
# Fast startup times
docker run -d nginx  # Starts in seconds

4. Scalability

# Easy horizontal scaling
docker run -d --name web1 myapp:latest
docker run -d --name web2 myapp:latest
docker run -d --name web3 myapp:latest

Installing Docker

Windows

Docker Desktop for Windows

  1. Download: Visit docker.com and download Docker Desktop

  2. System Requirements:

    • Windows 10 64-bit: Pro, Enterprise, or Education
    • Hyper-V and Containers Windows features enabled
    • BIOS-level hardware virtualization support
  3. Installation Steps:

    # Download and run Docker Desktop Installer.exe
    # Follow the installation wizard
    # Restart your computer when prompted
    
    # Verify installation
    docker --version
    docker run hello-world
    
WSL 2 Backend (Recommended)

# Enable WSL 2
wsl --install

# Set WSL 2 as default
wsl --set-default-version 2

# Install Ubuntu from Microsoft Store
# Configure Docker Desktop to use WSL 2 backend

macOS

Docker Desktop for Mac

  1. Download: Get Docker Desktop from docker.com

  2. System Requirements:

    • macOS 10.14 or newer
    • 4GB RAM minimum
    • VirtualBox prior to version 4.3.30 must be uninstalled
  3. Installation:

    # Download Docker.dmg
    # Drag Docker to Applications folder
    # Launch Docker from Applications
    
    # Verify installation
    docker --version
    docker run hello-world
    

Using Homebrew

# Install Docker Desktop via Homebrew
brew install --cask docker

# Start Docker Desktop
open /Applications/Docker.app

# Verify installation
docker --version

Linux

Ubuntu/Debian

# Update package index
sudo apt update

# Install prerequisites
sudo apt install apt-transport-https ca-certificates curl gnupg lsb-release

# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# Add Docker repository
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Update package index again
sudo apt update

# Install Docker Engine
sudo apt install docker-ce docker-ce-cli containerd.io

# Start and enable Docker
sudo systemctl start docker
sudo systemctl enable docker

# Add user to docker group (optional, for non-root usage)
sudo usermod -aG docker $USER
newgrp docker

# Verify installation
docker --version
docker run hello-world

CentOS/RHEL/Fedora

# Install required packages
sudo yum install -y yum-utils

# Add Docker repository
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# Install Docker Engine
sudo yum install docker-ce docker-ce-cli containerd.io

# Start and enable Docker
sudo systemctl start docker
sudo systemctl enable docker

# Add user to docker group
sudo usermod -aG docker $USER

# Verify installation
docker --version
docker run hello-world

Docker Architecture

Docker Engine Components

┌─────────────────────────────────────┐
│            Docker Client            │
│         (docker command)            │
└─────────────┬───────────────────────┘
              │ REST API
┌─────────────▼───────────────────────┐
│          Docker Daemon              │
│         (dockerd process)           │
├─────────────────────────────────────┤
│  ┌─────────┐ ┌─────────┐ ┌────────┐ │
│  │Container│ │Container│ │ Image  │ │
│  │    1    │ │    2    │ │ Store  │ │
│  └─────────┘ └─────────┘ └────────┘ │
└─────────────────────────────────────┘

Key Components

  1. Docker Client: Command-line interface (CLI) that users interact with
  2. Docker Daemon: Background service that manages containers, images, networks
  3. Docker Registry: Stores and distributes Docker images
  4. Docker Objects: Images, containers, networks, volumes
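
The client and daemon really do communicate over a REST API; you can query it directly through the Unix socket (assuming curl 7.40+ with --unix-socket support):

# Ask the daemon for its version via the REST API, bypassing the CLI
curl --unix-socket /var/run/docker.sock http://localhost/version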

Your First Docker Commands

Verify Installation

# Check Docker version
docker --version
# Output: Docker version 20.10.x, build xxxxx

# Check system information
docker info

# Test with hello-world
docker run hello-world

Basic Image Operations

# Search for images on Docker Hub
docker search nginx

# Pull an image from registry
docker pull nginx:latest

# List local images
docker images
# or
docker image ls

# Remove an image
docker rmi nginx:latest
# or
docker image rm nginx:latest

Basic Container Operations

# Run a container (interactive)
docker run -it ubuntu:latest /bin/bash

# Run a container (detached/background)
docker run -d --name my-nginx nginx:latest

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# Stop a container
docker stop my-nginx

# Start a stopped container
docker start my-nginx

# Remove a container
docker rm my-nginx

# Remove a running container (force)
docker rm -f my-nginx

Practical Examples

Example 1: Running a Web Server

# Run Nginx web server
docker run -d --name webserver -p 8080:80 nginx:latest

# Access the web server
# Open browser to http://localhost:8080

# View container logs
docker logs webserver

# Execute commands in running container
docker exec -it webserver /bin/bash

# Stop and remove
docker stop webserver
docker rm webserver

Example 2: Running a Database

# Run MySQL database
docker run -d \
  --name mysql-db \
  -e MYSQL_ROOT_PASSWORD=mypassword \
  -e MYSQL_DATABASE=testdb \
  -p 3306:3306 \
  mysql:8.0

# Connect to the database
docker exec -it mysql-db mysql -u root -p

# View database logs
docker logs mysql-db

# Stop and remove (with volume cleanup)
docker stop mysql-db
docker rm mysql-db

Example 3: Development Environment

# Run Python development environment
docker run -it \
  --name python-dev \
  -v $(pwd):/workspace \
  -w /workspace \
  python:3.9 \
  /bin/bash

# Inside the container, you can:
# pip install packages
# run Python scripts
# develop and test code

Docker Hub and Image Registry

Exploring Docker Hub

# Search for official images
docker search --filter is-official=true python

# Pull specific versions
docker pull python:3.9
docker pull python:3.9-slim
docker pull python:3.9-alpine

# View image details
docker inspect python:3.9

# View image history (layers)
docker history python:3.9

Understanding Image Tags

# Different ways to specify images
docker pull nginx                    # Latest tag (default)
docker pull nginx:latest            # Explicit latest
docker pull nginx:1.21              # Specific version
docker pull nginx:1.21-alpine       # Version with variant
docker pull nginx:stable            # Stable release
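
Tags are mutable pointers to image IDs, so a single image can carry several names:

# Give an existing image an extra tag; both names share one image ID
docker tag nginx:1.21 nginx:my-pinned-version
docker images --format "{{.ID}}  {{.Repository}}:{{.Tag}}"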

Container Lifecycle Management

Container States

  (image)
     │  docker create
     │  (docker run = create + start)
     ▼
┌─────────┐   docker start    ┌─────────┐
│ Created │ ──────────────────▶│ Running │
└─────────┘                   └─────────┘
                                │     ▲
                  docker stop   │     │ docker start /
                                ▼     │ docker restart
                              ┌─────────┐
                              │ Stopped │
                              └─────────┘
                                   │
                                   │ docker rm
                                   ▼
                              ┌─────────┐
                              │ Removed │
                              └─────────┘

Lifecycle Commands

# Create container without starting
docker create --name my-app nginx:latest

# Start created container
docker start my-app

# Restart running container
docker restart my-app

# Pause/unpause container
docker pause my-app
docker unpause my-app

# Kill container (force stop)
docker kill my-app

# Remove stopped container
docker rm my-app

# Remove running container (force)
docker rm -f my-app

Troubleshooting Common Issues

Permission Issues (Linux)

# If you get permission denied errors
sudo usermod -aG docker $USER
newgrp docker

# Or run with sudo (not recommended for regular use)
sudo docker run hello-world

Port Already in Use

# Check what's using the port
netstat -tulpn | grep :8080
# or
lsof -i :8080

# Use different port
docker run -p 8081:80 nginx

Container Won’t Start

# Check container logs
docker logs container-name

# Inspect container configuration
docker inspect container-name

# Run container interactively for debugging
docker run -it image-name /bin/bash

Disk Space Issues

# Clean up unused containers, images, networks
docker system prune

# Remove all stopped containers
docker container prune

# Remove unused images
docker image prune

# Remove unused volumes
docker volume prune

# See disk usage
docker system df

Best Practices for Beginners

1. Use Official Images

# Prefer official images
docker pull python:3.9        # ✓ Official
docker pull nginx:latest      # ✓ Official
# Avoid random user images for production

2. Specify Image Tags

# Avoid using 'latest' in production
docker pull python:3.9        # ✓ Specific version
docker pull python:latest     # ✗ Unpredictable

3. Clean Up Regularly

# Regular cleanup
docker system prune -f

# Remove stopped containers
docker container prune -f

# Remove unused images
docker image prune -f

4. Use Meaningful Names

# Good container names
docker run --name web-server nginx
docker run --name mysql-db mysql:8.0

# Avoid auto-generated names
docker run nginx  # Gets random name like "silly_einstein"

Summary

In this introduction, you learned:

Core Concepts

  • What Docker is and why it’s useful
  • Key differences between containers and virtual machines
  • Docker architecture and components
  • Container lifecycle and states

Practical Skills

  • Installing Docker on different platforms
  • Basic Docker commands for images and containers
  • Running web servers and databases in containers
  • Troubleshooting common issues

Best Practices

  • Using official images
  • Specifying image tags
  • Regular cleanup
  • Meaningful naming conventions

Key Takeaways:

  • Containers provide consistency across environments
  • Docker images are templates, containers are running instances
  • Always specify image tags for predictable deployments
  • Regular cleanup prevents disk space issues

Next, we’ll dive deeper into Docker images, learn how to create custom images with Dockerfiles, and explore image optimization techniques.

Core Concepts and Fundamentals

Docker Images and Containers Deep Dive

Understanding Docker images and containers is fundamental to mastering containerization. This section covers everything from basic operations to advanced image management techniques.

Understanding Docker Images

Image Layers and Architecture

Docker images are built using a layered filesystem. Each instruction in a Dockerfile creates a new layer:

FROM ubuntu:20.04          # Layer 1: Base OS
RUN apt-get update         # Layer 2: Package updates
RUN apt-get install -y python3  # Layer 3: Python installation
COPY app.py /app/          # Layer 4: Application code
CMD ["python3", "/app/app.py"]  # Layer 5: Default command
# View image layers
docker history python:3.9

# Output shows each layer:
# IMAGE          CREATED BY                                      SIZE
# abc123...      /bin/sh -c #(nop) CMD ["python3"]             0B
# def456...      /bin/sh -c apt-get install -y python3         45MB
# ghi789...      /bin/sh -c apt-get update                      25MB
# jkl012...      /bin/sh -c #(nop) FROM ubuntu:20.04           72MB

Image Management Commands

# Pull images with specific tags
docker pull nginx:1.21-alpine
docker pull postgres:13.4
docker pull node:16-slim

# List all images
docker images
docker image ls

# List images with specific filters
docker images --filter "dangling=true"  # Untagged images
docker images --filter "before=nginx:latest"
docker images --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"

# Remove images
docker rmi nginx:1.21-alpine
docker image rm postgres:13.4

# Remove multiple images
docker rmi $(docker images -q)  # Remove all images
docker image prune              # Remove unused images
docker image prune -a           # Remove all unused images

Image Inspection and Analysis

# Detailed image information
docker inspect nginx:latest

# View image configuration
docker inspect nginx:latest --format='{{.Config.Cmd}}'
docker inspect nginx:latest --format='{{.Config.ExposedPorts}}'

# Check image size and layers
docker images nginx:latest
docker history nginx:latest --no-trunc

# Analyze image vulnerabilities (if Docker Scout is available)
docker scout cves nginx:latest

Working with Containers

Container Creation and Management

# Create container without starting
docker create --name my-web nginx:latest

# Run container with various options
docker run -d \
  --name production-web \
  --restart unless-stopped \
  -p 80:80 \
  -p 443:443 \
  -v /host/data:/var/www/html \
  -e NGINX_HOST=example.com \
  nginx:latest

# Interactive container with custom command
docker run -it --rm ubuntu:20.04 /bin/bash

# Run container with resource limits
docker run -d \
  --name limited-app \
  --memory=512m \
  --cpus=1.0 \
  --memory-swap=1g \
  nginx:latest

Container Lifecycle Operations

# Start/stop containers
docker start my-web
docker stop my-web
docker restart my-web

# Pause/unpause containers
docker pause my-web
docker unpause my-web

# Kill container (force stop)
docker kill my-web

# Remove containers
docker rm my-web
docker rm -f running-container  # Force remove running container

# Container cleanup
docker container prune          # Remove stopped containers
docker rm $(docker ps -aq)     # Remove all containers

Container Monitoring and Debugging

# View container logs
docker logs my-web
docker logs -f my-web           # Follow logs
docker logs --tail 50 my-web   # Last 50 lines
docker logs --since 2h my-web  # Logs from last 2 hours

# Monitor container resources
docker stats                    # All containers
docker stats my-web            # Specific container
docker stats --no-stream      # One-time snapshot

# Execute commands in running containers
docker exec -it my-web /bin/bash
docker exec my-web ls -la /var/log
docker exec -u root my-web apt-get update

# Copy files between host and container
docker cp file.txt my-web:/tmp/
docker cp my-web:/var/log/nginx/access.log ./

Creating Custom Docker Images

Writing Dockerfiles

# Multi-stage Python application
FROM python:3.9-slim AS builder

# Set working directory
WORKDIR /app

# Install build dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements and install Python packages
COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt

# Production stage
FROM python:3.9-slim

# Create non-root user
RUN groupadd -r appuser && useradd -r -g appuser appuser

# Set working directory
WORKDIR /app

# Copy Python packages from builder stage
COPY --from=builder /root/.local /home/appuser/.local

# Copy application code
COPY --chown=appuser:appuser . .

# Switch to non-root user
USER appuser

# Set environment variables
ENV PATH=/home/appuser/.local/bin:$PATH
ENV PYTHONPATH=/app
ENV PYTHONUNBUFFERED=1

# Expose port
EXPOSE 8000

# Health check (python:3.9-slim ships without curl, so use Python's standard library)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')" || exit 1

# Default command
CMD ["python", "app.py"]

Dockerfile Best Practices

# Node.js application with best practices
FROM node:16-alpine

# Install security updates
RUN apk update && apk upgrade && apk add --no-cache dumb-init

# Create app directory
WORKDIR /usr/src/app

# Create non-root user
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001

# Copy package files first (for better caching)
COPY package*.json ./

# Install dependencies
RUN npm ci --only=production && npm cache clean --force

# Copy application code
COPY --chown=nextjs:nodejs . .

# Switch to non-root user
USER nextjs

# Expose port
EXPOSE 3000

# Use dumb-init for proper signal handling
ENTRYPOINT ["dumb-init", "--"]

# Start application
CMD ["node", "server.js"]

Building and Tagging Images

# Basic build
docker build -t my-app:latest .

# Build with specific Dockerfile
docker build -f Dockerfile.prod -t my-app:prod .

# Build with build arguments
docker build \
  --build-arg NODE_ENV=production \
  --build-arg API_URL=https://api.example.com \
  -t my-app:v1.0.0 .

# Build with multiple tags
docker build -t my-app:latest -t my-app:v1.0.0 -t my-app:stable .

# Build with no cache
docker build --no-cache -t my-app:latest .

# Build with target stage (multi-stage builds)
docker build --target builder -t my-app:builder .

Advanced Container Operations

Container Networking Basics

# Run container with custom network settings
docker run -d \
  --name web-app \
  --hostname webapp \
  --add-host api.local:192.168.1.100 \
  -p 8080:80 \
  nginx:latest

# Run container with specific network
docker network create my-network
docker run -d --name app1 --network my-network nginx:latest
docker run -d --name app2 --network my-network nginx:latest

# Connect running container to network
docker network connect my-network existing-container

Volume Management

# Named volumes
docker volume create my-data
docker run -d --name db -v my-data:/var/lib/mysql mysql:8.0

# Bind mounts
docker run -d \
  --name web \
  -v /host/path:/container/path \
  -v /host/config:/etc/nginx/conf.d:ro \
  nginx:latest

# Temporary volumes
docker run -d --name app --tmpfs /tmp nginx:latest

# Volume inspection
docker volume ls
docker volume inspect my-data
docker volume prune  # Remove unused volumes

Environment Variables and Configuration

# Set environment variables
docker run -d \
  --name app \
  -e NODE_ENV=production \
  -e DATABASE_URL=postgresql://user:pass@db:5432/myapp \
  -e DEBUG=false \
  my-app:latest

# Load environment from file
echo "NODE_ENV=production" > .env
echo "PORT=3000" >> .env
docker run -d --name app --env-file .env my-app:latest

# Override default command
docker run -it my-app:latest /bin/bash
docker run my-app:latest npm test

Practical Examples

Example 1: Web Application with Database

# Create network for multi-container app
docker network create webapp-network

# Run PostgreSQL database
docker run -d \
  --name postgres-db \
  --network webapp-network \
  -e POSTGRES_DB=myapp \
  -e POSTGRES_USER=appuser \
  -e POSTGRES_PASSWORD=secretpass \
  -v postgres-data:/var/lib/postgresql/data \
  postgres:13

# Run web application
docker run -d \
  --name web-app \
  --network webapp-network \
  -p 8080:3000 \
  -e DATABASE_URL=postgresql://appuser:secretpass@postgres-db:5432/myapp \
  -e NODE_ENV=production \
  my-webapp:latest

# Run Redis cache
docker run -d \
  --name redis-cache \
  --network webapp-network \
  -v redis-data:/data \
  redis:6-alpine

# Check application logs
docker logs -f web-app

Example 2: Development Environment

# Dockerfile.dev
FROM node:16-alpine

WORKDIR /app

# Install nodemon for development
RUN npm install -g nodemon

# Copy package files
COPY package*.json ./
RUN npm install

# Copy source code
COPY . .

# Expose port
EXPOSE 3000

# Development command with hot reload
CMD ["nodemon", "server.js"]
# Build development image
docker build -f Dockerfile.dev -t my-app:dev .

# Run development container with volume mounting
docker run -d \
  --name dev-app \
  -p 3000:3000 \
  -v $(pwd):/app \
  -v /app/node_modules \
  my-app:dev

# View development logs
docker logs -f dev-app

Example 3: Multi-Container Application

# Frontend container
docker run -d \
  --name frontend \
  -p 80:80 \
  -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx:alpine

# Backend API container
docker run -d \
  --name backend \
  -p 3000:3000 \
  -e DATABASE_URL=postgresql://user:pass@database:5432/api \
  my-api:latest

# Database container
docker run -d \
  --name database \
  -e POSTGRES_DB=api \
  -e POSTGRES_USER=user \
  -e POSTGRES_PASSWORD=pass \
  -v db-data:/var/lib/postgresql/data \
  postgres:13

# Link containers (legacy approach, use networks instead)
docker run -d --name api --link database:db my-api:latest

Image Optimization Techniques

Multi-Stage Builds

# Build stage
FROM node:16 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Minimizing Image Size

# Use Alpine Linux for smaller base images
FROM python:3.9-alpine

# Combine RUN commands to reduce layers
RUN apk update && \
    apk add --no-cache gcc musl-dev && \
    pip install --no-cache-dir -r requirements.txt && \
    apk del gcc musl-dev

# Use .dockerignore to exclude unnecessary files
# .dockerignore content:
# node_modules
# .git
# *.md
# .env
# tests/

Caching Strategies

# Good: Copy package files first for better caching
COPY package*.json ./
RUN npm install
COPY . .

# Bad: Copy everything first (breaks cache on any file change)
COPY . .
RUN npm install
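
You can verify the cache is working by changing only source code and rebuilding; the dependency layer should be reused:

# After touching a source file (not package.json), the npm install
# step should be reported as CACHED / "Using cache" on rebuild
touch server.js
docker build -t my-app:latest .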

Container Security Basics

Running as Non-Root User

# Create and use non-root user
RUN groupadd -r appuser && useradd -r -g appuser appuser
USER appuser

# Or use numeric UID/GID
USER 1001:1001

Security Scanning

# Scan image for vulnerabilities
docker scout cves my-app:latest

# Use security-focused base images
FROM gcr.io/distroless/java:11  # Distroless images
FROM alpine:latest              # Minimal Alpine Linux

Troubleshooting Common Issues

Container Won’t Start

# Check container logs
docker logs container-name

# Run container interactively
docker run -it --entrypoint /bin/bash my-app:latest

# Check container configuration
docker inspect container-name

Build Failures

# Build with verbose output
docker build --progress=plain -t my-app .

# Build specific stage for debugging
docker build --target builder -t debug-build .
docker run -it debug-build /bin/bash

Performance Issues

# Monitor resource usage
docker stats

# Check container processes
docker exec container-name ps aux

# Analyze image layers
docker history my-app:latest

Summary

In this section, you learned:

Image Management

  • Understanding Docker image layers and architecture
  • Pulling, listing, and removing images
  • Image inspection and analysis techniques
  • Building custom images with Dockerfiles

Container Operations

  • Creating and managing container lifecycle
  • Monitoring and debugging containers
  • Resource management and limits
  • Networking and volume basics

Best Practices

  • Multi-stage builds for optimization
  • Security considerations (non-root users)
  • Caching strategies for faster builds
  • Image size optimization techniques

Practical Skills

  • Building real-world applications
  • Multi-container setups
  • Development environment configuration
  • Troubleshooting common issues

Key Takeaways:

  • Images are templates, containers are running instances
  • Layer caching improves build performance
  • Always run containers as non-root users when possible
  • Use multi-stage builds to minimize production image size
  • Monitor container resources and logs for troubleshooting

Next, we’ll explore Docker networking, volumes, and how to connect multiple containers to build complex applications.

Practical Applications and Examples

Docker Networking and Storage

Understanding Docker networking and storage is crucial for building multi-container applications and managing persistent data. This section covers networking concepts, volume management, and practical multi-container scenarios.

Docker Networking Fundamentals

Network Types

Docker provides several network drivers for different use cases:

# List available networks
docker network ls

# Default networks:
# bridge    - Default network for containers
# host      - Use host's network stack
# none      - Disable networking

Bridge Networks (Default)

# Create custom bridge network
docker network create my-app-network

# Inspect network details
docker network inspect my-app-network

# Run containers on custom network
docker run -d --name web --network my-app-network nginx:latest
docker run -d --name api --network my-app-network node:16-alpine

# Containers can communicate using container names as hostnames
docker exec web getent hosts api  # resolves the "api" name via Docker's embedded DNS

Host Networking

# Use host network (container shares host's network)
docker run -d --name web --network host nginx:latest

# Container binds directly to host ports
# No port mapping needed, but less isolation
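
On a Linux host you can verify the direct binding (Docker Desktop on macOS/Windows runs containers in a VM, so host networking behaves differently there):

# nginx binds the host's port 80 directly, with no -p mapping involved
docker run -d --name host-web --network host nginx:latest
curl http://localhost:80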

Container Communication

# Create network for multi-container app
docker network create webapp-net

# Database container
docker run -d \
  --name postgres \
  --network webapp-net \
  -e POSTGRES_DB=myapp \
  -e POSTGRES_USER=user \
  -e POSTGRES_PASSWORD=pass \
  postgres:13

# Application container
docker run -d \
  --name app \
  --network webapp-net \
  -p 8080:3000 \
  -e DATABASE_URL=postgresql://user:pass@postgres:5432/myapp \
  my-app:latest

# Containers communicate using service names
# app can reach postgres using hostname "postgres"

Advanced Networking

Port Mapping and Exposure

# Map multiple ports
docker run -d \
  --name web-server \
  -p 80:80 \
  -p 443:443 \
  -p 8080:8080 \
  nginx:latest

# Map to specific host interface
docker run -d -p 127.0.0.1:8080:80 nginx:latest

# Map random host port
docker run -d -P nginx:latest  # Maps all exposed ports to random host ports

# Check port mappings
docker port web-server

Network Aliases and DNS

# Create a network with a custom subnet and IP range
docker network create \
  --driver bridge \
  --subnet=172.20.0.0/16 \
  --ip-range=172.20.240.0/20 \
  custom-net

# Run container with network alias
docker run -d \
  --name database \
  --network custom-net \
  --network-alias db \
  --network-alias postgres-db \
  postgres:13

# Other containers can reach it using any alias
docker run --rm --network custom-net alpine ping db
docker run --rm --network custom-net alpine ping postgres-db

Multi-Network Containers

# Create multiple networks
docker network create frontend-net
docker network create backend-net

# Connect container to multiple networks
docker run -d --name api --network backend-net my-api:latest
docker network connect frontend-net api

# Now api container is on both networks
docker inspect api --format='{{.NetworkSettings.Networks}}'

Docker Volumes and Storage

Volume Types

Docker provides three main storage options:

  1. Volumes - Managed by Docker, stored in Docker's storage area (/var/lib/docker/volumes on Linux)
  2. Bind Mounts - Mount host directory into container
  3. tmpfs Mounts - Temporary filesystem in memory
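
All three can be expressed with the unified --mount syntax; a quick sketch (my-vol and /host/path are placeholders):

# 1. Named volume
docker run -d --mount type=volume,source=my-vol,target=/data nginx:latest
# 2. Bind mount (the host path must already exist)
docker run -d --mount type=bind,source=/host/path,target=/data nginx:latest
# 3. tmpfs mount (in-memory, never written to disk)
docker run -d --mount type=tmpfs,target=/data nginx:latest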

Named Volumes

# Create named volume
docker volume create my-data

# List volumes
docker volume ls

# Inspect volume
docker volume inspect my-data

# Use volume in container
docker run -d \
  --name database \
  -v my-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  mysql:8.0

# Volume persists even if container is removed
docker rm -f database
docker run -d --name new-db -v my-data:/var/lib/mysql mysql:8.0
# Data is still there!

Bind Mounts

# Mount host directory into container
docker run -d \
  --name web \
  -v /host/path/to/html:/usr/share/nginx/html:ro \
  -v /host/path/to/config:/etc/nginx/conf.d:ro \
  -p 80:80 \
  nginx:latest

# Development with live code reloading
docker run -d \
  --name dev-app \
  -v $(pwd):/app \
  -v /app/node_modules \
  -p 3000:3000 \
  node:16-alpine \
  npm run dev

Volume Management

# Backup volume data
docker run --rm \
  -v my-data:/data \
  -v $(pwd):/backup \
  alpine \
  tar czf /backup/backup.tar.gz -C /data .

# Restore volume data
docker run --rm \
  -v my-data:/data \
  -v $(pwd):/backup \
  alpine \
  tar xzf /backup/backup.tar.gz -C /data

# Copy data between volumes
docker run --rm \
  -v source-vol:/source:ro \
  -v dest-vol:/dest \
  alpine \
  cp -r /source/. /dest/

# Remove unused volumes
docker volume prune

Practical Multi-Container Applications

Example 1: WordPress with MySQL

# Create network
docker network create wordpress-net

# Create volumes
docker volume create mysql-data
docker volume create wordpress-data

# MySQL database
docker run -d \
  --name mysql \
  --network wordpress-net \
  -v mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=rootpass \
  -e MYSQL_DATABASE=wordpress \
  -e MYSQL_USER=wpuser \
  -e MYSQL_PASSWORD=wppass \
  mysql:8.0

# WordPress application
docker run -d \
  --name wordpress \
  --network wordpress-net \
  -v wordpress-data:/var/www/html \
  -p 8080:80 \
  -e WORDPRESS_DB_HOST=mysql:3306 \
  -e WORDPRESS_DB_NAME=wordpress \
  -e WORDPRESS_DB_USER=wpuser \
  -e WORDPRESS_DB_PASSWORD=wppass \
  wordpress:latest

# Access WordPress at http://localhost:8080

Example 2: MEAN Stack Application

# Create network
docker network create mean-stack

# MongoDB
docker run -d \
  --name mongodb \
  --network mean-stack \
  -v mongo-data:/data/db \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=secret \
  mongo:5.0

# Node.js API
docker run -d \
  --name api \
  --network mean-stack \
  -e MONGODB_URI=mongodb://admin:secret@mongodb:27017/myapp?authSource=admin \
  -e NODE_ENV=production \
  my-api:latest

# Angular Frontend
docker run -d \
  --name frontend \
  --network mean-stack \
  -p 80:80 \
  -e API_URL=http://localhost:3000 \
  my-frontend:latest

# Nginx Reverse Proxy
cat > nginx.conf << EOF
upstream api {
    server api:3000;
}

server {
    listen 80;
    
    location /api/ {
        proxy_pass http://api/;
        proxy_set_header Host \$host;
        proxy_set_header X-Real-IP \$remote_addr;
    }
    
    location / {
        proxy_pass http://frontend:80/;
        proxy_set_header Host \$host;
    }
}
EOF

docker run -d \
  --name proxy \
  --network mean-stack \
  -p 8080:80 \
  -v $(pwd)/nginx.conf:/etc/nginx/conf.d/default.conf:ro \
  nginx:alpine

Example 3: Development Environment

# Create development network
docker network create dev-env

# PostgreSQL for development
docker run -d \
  --name dev-postgres \
  --network dev-env \
  -v postgres-dev-data:/var/lib/postgresql/data \
  -e POSTGRES_DB=devdb \
  -e POSTGRES_USER=dev \
  -e POSTGRES_PASSWORD=devpass \
  -p 5432:5432 \
  postgres:13

# Redis for caching
docker run -d \
  --name dev-redis \
  --network dev-env \
  -v redis-dev-data:/data \
  -p 6379:6379 \
  redis:6-alpine

# Application with hot reload
docker run -d \
  --name dev-app \
  --network dev-env \
  -v $(pwd):/app \
  -v /app/node_modules \
  -p 3000:3000 \
  -e DATABASE_URL=postgresql://dev:devpass@dev-postgres:5432/devdb \
  -e REDIS_URL=redis://dev-redis:6379 \
  -e NODE_ENV=development \
  node:16-alpine \
  npm run dev

# Development tools container
docker run -it --rm \
  --name dev-tools \
  --network dev-env \
  -v $(pwd):/workspace \
  -w /workspace \
  node:16-alpine \
  /bin/sh

Container Orchestration Basics

Docker Compose Preview

While we’ll cover Docker Compose in detail later, here’s a preview of how it simplifies multi-container management:

# docker-compose.yml
version: '3.8'

services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./html:/usr/share/nginx/html:ro
    depends_on:
      - api

  api:
    build: ./api
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/myapp
    depends_on:
      - db

  db:
    image: postgres:13
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:

networks:
  default:
    driver: bridge

# Start entire stack
docker-compose up -d

# Scale services
docker-compose up -d --scale api=3

# View logs
docker-compose logs -f

# Stop stack
docker-compose down

Monitoring and Logging

Container Logs

# View logs from multiple containers
docker logs web-server
docker logs api-server
docker logs database

# Follow logs in real-time
docker logs -f --tail 100 web-server

# Aggregate logs from multiple containers
docker logs web-server 2>&1 | grep ERROR &
docker logs api-server 2>&1 | grep ERROR &
docker logs database 2>&1 | grep ERROR &

Health Checks

# Add health check to Dockerfile
FROM nginx:alpine

# Copy custom nginx config
COPY nginx.conf /etc/nginx/nginx.conf

# Add health check (use busybox wget; curl is not in nginx:alpine by default)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget -q --spider http://localhost/health || exit 1

EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
# Run container with health check
docker run -d --name web-with-health my-nginx:latest

# Check health status
docker ps  # Shows health status
docker inspect web-with-health --format='{{.State.Health.Status}}'

Resource Monitoring

# Monitor resource usage
docker stats

# Monitor specific containers
docker stats web-server api-server database

# One-time snapshot with custom columns
docker stats --no-stream --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}"

Security Best Practices

Network Security

# Create isolated networks
docker network create --internal backend-net
docker network create frontend-net

# Backend services (no external access)
docker run -d --name database --network backend-net postgres:13
docker run -d --name cache --network backend-net redis:alpine

# API service (connected to both networks)
docker run -d --name api --network backend-net my-api:latest
docker network connect frontend-net api

# Frontend (only on frontend network)
docker run -d --name web --network frontend-net -p 80:80 nginx:latest

Volume Security

# Read-only volumes
docker run -d \
  --name web \
  -v $(pwd)/config:/etc/nginx/conf.d:ro \
  -v $(pwd)/html:/usr/share/nginx/html:ro \
  nginx:latest

# Restrict volume permissions
docker run -d \
  --name app \
  -v app-data:/data \
  --user 1000:1000 \
  my-app:latest

Troubleshooting Network and Storage Issues

Network Debugging

# Test container connectivity
docker exec container1 ping container2
docker exec container1 nslookup container2
docker exec container1 telnet container2 80

# Check network configuration
docker network inspect my-network
docker exec container ip addr show
docker exec container netstat -tlnp

# Debug DNS resolution
docker exec container cat /etc/resolv.conf
docker exec container nslookup google.com

Volume Debugging

# Check volume mounts
docker inspect container --format='{{.Mounts}}'

# Verify volume contents
docker exec container ls -la /data
docker exec container df -h

# Check volume permissions
docker exec container ls -la /data
docker exec -u root container chown -R appuser:appuser /data

Common Issues and Solutions

# Port already in use
netstat -tlnp | grep :8080
docker ps --filter "publish=8080"

# Volume permission issues
docker exec -u root container chown -R $(id -u):$(id -g) /data

# Network connectivity issues
docker network ls
docker network inspect bridge
docker exec container ping 8.8.8.8  # Test external connectivity

Performance Optimization

Network Performance

# Use host networking for high-performance applications
docker run -d --network host high-performance-app:latest

# Optimize bridge network settings
docker network create \
  --driver bridge \
  --opt com.docker.network.bridge.enable_icc=true \
  --opt com.docker.network.bridge.enable_ip_masquerade=true \
  optimized-net

Storage Performance

# Use tmpfs for temporary data
docker run -d \
  --name fast-app \
  --tmpfs /tmp:rw,noexec,nosuid,size=100m \
  my-app:latest

# Optimize volume drivers
docker volume create \
  --driver local \
  --opt type=tmpfs \
  --opt device=tmpfs \
  --opt o=size=1g \
  fast-volume

Summary

In this section, you learned:

Networking Concepts

  • Docker network types and drivers
  • Container communication and DNS
  • Port mapping and exposure
  • Multi-network container setups

Storage Management

  • Volume types: named volumes, bind mounts, tmpfs
  • Volume lifecycle and data persistence
  • Backup and restore strategies
  • Security considerations for storage

Practical Applications

  • Multi-container application architectures
  • Development environment setup
  • Monitoring and logging strategies
  • Health checks and resource monitoring

Best Practices

  • Network isolation and security
  • Volume permissions and access control
  • Performance optimization techniques
  • Troubleshooting common issues

Key Takeaways:

  • Use custom networks for multi-container applications
  • Named volumes provide data persistence and portability
  • Always consider security when designing network topology
  • Monitor resource usage and implement health checks
  • Use bind mounts for development, volumes for production

Next, we’ll explore Docker Compose for managing multi-container applications and advanced deployment patterns.

Advanced Techniques and Patterns

Docker Compose and Multi-Container Applications

Docker Compose simplifies the management of multi-container applications by allowing you to define and run complex applications using a single YAML file. This section covers Compose fundamentals, advanced patterns, and real-world application examples.

Introduction to Docker Compose

What is Docker Compose?

Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services, networks, and volumes, then create and start all services with a single command.

Installing Docker Compose

# Docker Compose comes with Docker Desktop
# For Linux, install separately:
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Verify installation
docker-compose --version

Basic Compose File Structure

# docker-compose.yml
version: '3.8'

services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./html:/usr/share/nginx/html:ro
    
  database:
    image: postgres:13
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=secret
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:

networks:
  default:
    driver: bridge

Compose File Deep Dive

Service Configuration

version: '3.8'

services:
  web:
    # Build from Dockerfile
    build:
      context: ./web
      dockerfile: Dockerfile.prod
      args:
        - NODE_ENV=production
        - API_URL=http://api:3000
    
    # Or use pre-built image
    image: my-web-app:latest
    
    # Container name
    container_name: web-server
    
    # Restart policy
    restart: unless-stopped
    
    # Port mapping
    ports:
      - "80:80"
      - "443:443"
    
    # Environment variables
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://user:pass@db:5432/myapp
    
    # Environment file
    env_file:
      - .env
      - .env.production
    
    # Volume mounts
    volumes:
      - ./config:/etc/nginx/conf.d:ro
      - web_data:/var/www/html
      - /var/run/docker.sock:/var/run/docker.sock:ro
    
    # Network configuration
    networks:
      - frontend
      - backend
    
    # Dependencies
    depends_on:
      - database
      - cache
    
    # Resource limits
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
    
    # Health check
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

Advanced Service Options

services:
  app:
    build: .
    
    # Command override
    command: ["npm", "run", "start:prod"]
    
    # Working directory
    working_dir: /app
    
    # User specification
    user: "1000:1000"
    
    # Hostname
    hostname: app-server
    
    # DNS configuration
    dns:
      - 8.8.8.8
      - 8.8.4.4
    
    # Extra hosts
    extra_hosts:
      - "api.local:192.168.1.100"
      - "db.local:192.168.1.101"
    
    # Security options
    security_opt:
      - no-new-privileges:true
    
    # Capabilities
    cap_add:
      - NET_ADMIN
    cap_drop:
      - ALL
    
    # Logging configuration
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

Real-World Application Examples

Example 1: Full-Stack Web Application

# docker-compose.yml
version: '3.8'

services:
  # Frontend (React)
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile.prod
    container_name: react-frontend
    ports:
      - "80:80"
    depends_on:
      - backend
    networks:
      - frontend-net
    restart: unless-stopped

  # Backend API (Node.js)
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    container_name: node-backend
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://user:password@postgres:5432/myapp
      - REDIS_URL=redis://redis:6379
      - JWT_SECRET=${JWT_SECRET}
    depends_on:
      - postgres
      - redis
    networks:
      - frontend-net
      - backend-net
    volumes:
      - ./uploads:/app/uploads
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  # Database (PostgreSQL)
  postgres:
    image: postgres:13-alpine
    container_name: postgres-db
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    networks:
      - backend-net
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
      interval: 30s
      timeout: 10s
      retries: 5

  # Cache (Redis)
  redis:
    image: redis:6-alpine
    container_name: redis-cache
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
    networks:
      - backend-net
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 30s
      timeout: 10s
      retries: 3

  # Reverse Proxy (Nginx)
  nginx:
    image: nginx:alpine
    container_name: nginx-proxy
    ports:
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
    depends_on:
      - frontend
      - backend
    networks:
      - frontend-net
    restart: unless-stopped

volumes:
  postgres_data:
  redis_data:

networks:
  frontend-net:
    driver: bridge
  backend-net:
    driver: bridge

Example 2: Microservices Architecture

# docker-compose.microservices.yml
version: '3.8'

services:
  # API Gateway
  gateway:
    build: ./gateway
    ports:
      - "80:8080"
    environment:
      - USER_SERVICE_URL=http://user-service:3000
      - ORDER_SERVICE_URL=http://order-service:3000
      - PRODUCT_SERVICE_URL=http://product-service:3000
    depends_on:
      - user-service
      - order-service
      - product-service
    networks:
      - microservices-net

  # User Service
  user-service:
    build: ./services/user
    environment:
      - DATABASE_URL=postgresql://user:pass@user-db:5432/users
      - REDIS_URL=redis://redis:6379
    depends_on:
      - user-db
      - redis
    networks:
      - microservices-net
      - user-db-net
    deploy:
      replicas: 2

  user-db:
    image: postgres:13-alpine
    environment:
      - POSTGRES_DB=users
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    volumes:
      - user_db_data:/var/lib/postgresql/data
    networks:
      - user-db-net

  # Order Service
  order-service:
    build: ./services/order
    environment:
      - DATABASE_URL=postgresql://order:pass@order-db:5432/orders
      - USER_SERVICE_URL=http://user-service:3000
    depends_on:
      - order-db
    networks:
      - microservices-net
      - order-db-net

  order-db:
    image: postgres:13-alpine
    environment:
      - POSTGRES_DB=orders
      - POSTGRES_USER=order
      - POSTGRES_PASSWORD=pass
    volumes:
      - order_db_data:/var/lib/postgresql/data
    networks:
      - order-db-net

  # Product Service
  product-service:
    build: ./services/product
    environment:
      - MONGODB_URI=mongodb://product-db:27017/products
    depends_on:
      - product-db
    networks:
      - microservices-net
      - product-db-net

  product-db:
    image: mongo:5.0
    volumes:
      - product_db_data:/data/db
    networks:
      - product-db-net

  # Shared Redis Cache
  redis:
    image: redis:6-alpine
    volumes:
      - redis_data:/data
    networks:
      - microservices-net

  # Message Queue
  rabbitmq:
    image: rabbitmq:3-management
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=secret
    ports:
      - "15672:15672"  # Management UI
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq
    networks:
      - microservices-net

volumes:
  user_db_data:
  order_db_data:
  product_db_data:
  redis_data:
  rabbitmq_data:

networks:
  microservices-net:
    driver: bridge
  user-db-net:
    driver: bridge
    internal: true
  order-db-net:
    driver: bridge
    internal: true
  product-db-net:
    driver: bridge
    internal: true

Example 3: Development Environment

# docker-compose.dev.yml
version: '3.8'

services:
  # Development API with hot reload
  api:
    build:
      context: ./api
      dockerfile: Dockerfile.dev
    volumes:
      - ./api:/app
      - /app/node_modules
    ports:
      - "3000:3000"
      - "9229:9229"  # Debug port
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://dev:dev@postgres:5432/devdb
      - REDIS_URL=redis://redis:6379
    depends_on:
      - postgres
      - redis
    command: npm run dev:debug

  # Frontend with hot reload
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile.dev
    volumes:
      - ./frontend:/app
      - /app/node_modules
    ports:
      - "3001:3000"
    environment:
      - REACT_APP_API_URL=http://localhost:3000
      - CHOKIDAR_USEPOLLING=true
    command: npm start

  # Development database
  postgres:
    image: postgres:13-alpine
    environment:
      - POSTGRES_DB=devdb
      - POSTGRES_USER=dev
      - POSTGRES_PASSWORD=dev
    ports:
      - "5432:5432"  # Expose for external tools
    volumes:
      - postgres_dev_data:/var/lib/postgresql/data
      - ./database/init:/docker-entrypoint-initdb.d

  # Development Redis
  redis:
    image: redis:6-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_dev_data:/data

  # Development tools
  adminer:
    image: adminer:latest
    ports:
      - "8080:8080"
    depends_on:
      - postgres

  redis-commander:
    image: rediscommander/redis-commander:latest
    environment:
      - REDIS_HOSTS=local:redis:6379
    ports:
      - "8081:8081"
    depends_on:
      - redis

volumes:
  postgres_dev_data:
  redis_dev_data:

Docker Compose Commands

Basic Operations

# Start services
docker-compose up
docker-compose up -d  # Detached mode

# Start specific services
docker-compose up web database

# Build and start
docker-compose up --build

# Stop services
docker-compose stop
docker-compose down  # Stop and remove containers

# Stop and remove everything (including volumes)
docker-compose down -v

# Restart services
docker-compose restart
docker-compose restart web  # Restart specific service

Service Management

# Scale services
docker-compose up -d --scale web=3 --scale api=2

# View running services
docker-compose ps

# View logs
docker-compose logs
docker-compose logs -f web  # Follow logs for specific service
docker-compose logs --tail=100 api

# Execute commands in services
docker-compose exec web /bin/bash
docker-compose exec database psql -U user -d myapp

# Run one-off commands
docker-compose run --rm web npm test
docker-compose exec -T database pg_dump -U user myapp > backup.sql  # exec targets the running DB

Configuration Management

# Validate compose file
docker-compose config

# View resolved configuration
docker-compose config --services
docker-compose config --volumes

# Use different compose files
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up

# Override with environment-specific files
docker-compose -f docker-compose.yml -f docker-compose.override.yml up

Environment-Specific Configurations

Development Override

# docker-compose.override.yml (automatically loaded)
version: '3.8'

services:
  web:
    build:
      target: development
    volumes:
      - ./src:/app/src
    environment:
      - NODE_ENV=development
      - DEBUG=true
    ports:
      - "3000:3000"
      - "9229:9229"  # Debug port

  database:
    ports:
      - "5432:5432"  # Expose for development tools
    environment:
      - POSTGRES_DB=devdb

Production Configuration

# docker-compose.prod.yml
version: '3.8'

services:
  web:
    build:
      target: production
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  database:
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 2G
    volumes:
      - /opt/postgres/data:/var/lib/postgresql/data

Using Environment Files

# .env file
POSTGRES_USER=myuser
POSTGRES_PASSWORD=mypassword
POSTGRES_DB=myapp
API_KEY=your-api-key-here
NODE_ENV=production

# docker-compose.yml
version: '3.8'

services:
  web:
    image: my-app:latest
    environment:
      - NODE_ENV=${NODE_ENV}
      - API_KEY=${API_KEY}
    
  database:
    image: postgres:13
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}

Advanced Compose Patterns

Health Checks and Dependencies

services:
  database:
    image: postgres:13
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s

  web:
    image: my-app:latest
    depends_on:
      database:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
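
With condition: service_healthy, Compose waits for the database's health check to pass before starting web; you can watch the transition:

# Start the stack and watch the health states
docker-compose up -d
docker-compose ps  # shows (health: starting), then (healthy)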

Secrets Management

# docker-compose.yml
version: '3.8'

services:
  web:
    image: my-app:latest
    secrets:
      - db_password
      - api_key
    environment:
      - DB_PASSWORD_FILE=/run/secrets/db_password
      - API_KEY_FILE=/run/secrets/api_key

secrets:
  db_password:
    file: ./secrets/db_password.txt
  api_key:
    external: true
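
Inside the container, each secret is mounted as a file under /run/secrets; a sketch of consuming one at startup (shell):

# Read the secret file referenced by DB_PASSWORD_FILE and export it
export DB_PASSWORD="$(cat /run/secrets/db_password)"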

Multi-Stage Builds with Compose

# Dockerfile
FROM node:16-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production

FROM node:16-alpine AS development
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]

FROM node:16-alpine AS production
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY . .
CMD ["npm", "start"]
# docker-compose.yml
services:
  web:
    build:
      context: .
      target: ${BUILD_TARGET:-production}

Monitoring and Logging

Centralized Logging

# docker-compose.logging.yml
version: '3.8'

services:
  web:
    image: my-app:latest
    logging:
      driver: "fluentd"
      options:
        fluentd-address: localhost:24224
        tag: web.app

  elasticsearch:
    image: elasticsearch:7.14.0
    environment:
      - discovery.type=single-node
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data

  fluentd:
    build: ./fluentd
    volumes:
      - ./fluentd/conf:/fluentd/etc
    ports:
      - "24224:24224"
    depends_on:
      - elasticsearch

  kibana:
    image: kibana:7.14.0
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch

volumes:
  elasticsearch_data:

Monitoring Stack

# docker-compose.monitoring.yml
version: '3.8'

services:
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/dashboards:/etc/grafana/provisioning/dashboards
      - ./grafana/datasources:/etc/grafana/provisioning/datasources

  node-exporter:
    image: prom/node-exporter:latest
    ports:
      - "9100:9100"

volumes:
  prometheus_data:
  grafana_data:

Best Practices

Compose File Organization

# Project structure
project/
├── docker-compose.yml          # Base configuration
├── docker-compose.override.yml # Development overrides
├── docker-compose.prod.yml     # Production configuration
├── docker-compose.test.yml     # Testing configuration
├── .env                        # Environment variables
├── .env.example               # Environment template
└── services/
    ├── web/
    │   ├── Dockerfile
    │   └── src/
    ├── api/
    │   ├── Dockerfile
    │   └── src/
    └── database/
        └── init/

Security Best Practices

services:
  web:
    image: my-app:latest
    user: "1000:1000"  # Non-root user
    read_only: true     # Read-only filesystem
    tmpfs:
      - /tmp
      - /var/cache
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE

Summary

In this section, you learned:

Docker Compose Fundamentals

  • Compose file structure and syntax
  • Service configuration options
  • Volume and network management
  • Environment-specific configurations

Real-World Applications

  • Full-stack web application setup
  • Microservices architecture patterns
  • Development environment configuration
  • Production deployment considerations

Advanced Patterns

  • Health checks and service dependencies
  • Secrets management
  • Multi-stage builds with Compose
  • Monitoring and logging integration

Best Practices

  • Project organization and file structure
  • Security configurations
  • Environment management
  • Service scaling and resource limits

Key Takeaways:

  • Compose simplifies multi-container application management
  • Use override files for environment-specific configurations
  • Implement health checks for reliable service dependencies
  • Always consider security when configuring services
  • Monitor and log your applications for production readiness

Next, we’ll explore production deployment strategies, security best practices, and performance optimization techniques for Docker applications.

Best Practices and Optimization

Production Docker: Security, Performance, and Best Practices

This final section covers running Docker in production: security hardening, performance optimization, monitoring, and operational best practices.

Production Security Best Practices

Container Security Fundamentals

# Secure Dockerfile example
FROM node:16-alpine AS builder

# Create app directory
WORKDIR /usr/src/app

# Copy package files
COPY package*.json ./

# Install dependencies
RUN npm ci --only=production && npm cache clean --force

# Production stage
FROM node:16-alpine

# Install security updates
RUN apk update && apk upgrade && apk add --no-cache dumb-init

# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001

# Set working directory
WORKDIR /usr/src/app

# Copy dependencies from builder stage
COPY --from=builder /usr/src/app/node_modules ./node_modules

# Copy application code
COPY --chown=nextjs:nodejs . .

# Switch to non-root user
USER nextjs

# Expose port
EXPOSE 3000

# Use dumb-init for proper signal handling
ENTRYPOINT ["dumb-init", "--"]

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD node healthcheck.js

# Start application
CMD ["node", "server.js"]

Runtime Security Configuration

# Run containers with security options
docker run -d \
  --name secure-app \
  --user 1001:1001 \
  --read-only \
  --tmpfs /tmp \
  --tmpfs /var/cache \
  --security-opt no-new-privileges:true \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  --memory 512m \
  --cpus 1.0 \
  my-app:latest

# Use Docker secrets for sensitive data
echo "my-secret-password" | docker secret create db_password -
docker service create \
  --name web \
  --secret db_password \
  my-app:latest
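
Swarm mounts each secret as a file under /run/secrets/ inside the service's containers, so the application reads it from disk rather than from an environment variable:

# Inspect the secret's metadata (the value itself is never displayed)
docker secret inspect db_password

# Read the mounted file from a running task (container ID is illustrative)
docker exec <container-id> cat /run/secrets/db_password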

Image Security Scanning

# Scan images for vulnerabilities
docker scout cves my-app:latest

# Use Trivy for comprehensive scanning
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  aquasec/trivy:latest image my-app:latest

# Scan during build process
docker build -t my-app:latest .
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  aquasec/trivy:latest image my-app:latest --exit-code 1
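
The exit code can be scoped to the findings that should actually fail the build; --severity is a standard Trivy flag:

# Fail only on HIGH or CRITICAL vulnerabilities
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  aquasec/trivy:latest image --exit-code 1 --severity HIGH,CRITICAL my-app:latest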

Secure Compose Configuration

# docker-compose.prod.yml
version: '3.8'

services:
  web:
    image: my-app:latest
    user: "1001:1001"
    read_only: true
    tmpfs:
      - /tmp:rw,noexec,nosuid,size=100m
      - /var/cache:rw,noexec,nosuid,size=50m
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 512M
        reservations:
          cpus: '0.5'
          memory: 256M
    networks:
      - frontend
    secrets:
      - db_password
      - api_key

  database:
    image: postgres:13-alpine
    user: "999:999"
    read_only: true
    tmpfs:
      - /tmp
      - /var/run/postgresql
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - DAC_OVERRIDE
      - FOWNER
      - SETGID
      - SETUID
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - backend
    secrets:
      - db_password
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password

secrets:
  db_password:
    external: true
  api_key:
    external: true

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true

volumes:
  postgres_data:
    driver: local
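
Because this file relies on deploy resources and external secrets, it is intended for swarm mode. A minimal deployment sketch (the stack name myapp and secret values are placeholders):

# Initialize swarm mode once per host
docker swarm init

# Create the external secrets referenced above
printf 'db-secret-value' | docker secret create db_password -
printf 'api-key-value' | docker secret create api_key -

# Deploy the stack
docker stack deploy -c docker-compose.prod.yml myapp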

Performance Optimization

Image Optimization

# Multi-stage build for minimal production image
FROM node:16-alpine AS dependencies
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production && npm cache clean --force

FROM node:16-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:16-alpine AS runtime
RUN apk add --no-cache dumb-init
WORKDIR /app
RUN addgroup -g 1001 -S nodejs && adduser -S nextjs -u 1001
COPY --from=dependencies /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
COPY --from=build /app/public ./public
USER nextjs
EXPOSE 3000
ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "dist/server.js"]

Resource Management

# Resource-optimized compose file
version: '3.8'

services:
  web:
    image: my-app:latest
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 1G
        reservations:
          cpus: '1.0'
          memory: 512M
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  database:
    image: postgres:13-alpine
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 2G
        reservations:
          cpus: '0.5'
          memory: 1G
    volumes:
      - postgres_data:/var/lib/postgresql/data
    # The official postgres image does not read tuning settings from
    # environment variables; pass them to the server as flags instead
    command: postgres -c shared_buffers=256MB -c effective_cache_size=1GB -c work_mem=4MB

volumes:
  postgres_data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /opt/postgres/data
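
Note that classic docker-compose only honors the deploy.resources section with the --compatibility flag, while recent Compose versions apply it directly. Either way, enforcement can be checked at runtime:

# Live CPU/memory usage against the configured limits
docker stats --no-stream

# Memory limit applied to a container, in bytes (name is illustrative)
docker inspect --format '{{.HostConfig.Memory}}' project_web_1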

Caching Strategies

# Optimize Docker layer caching
FROM node:16-alpine

WORKDIR /app

# Copy package files first (better caching)
COPY package*.json ./
RUN npm ci --only=production

# Copy source code last
COPY . .

# Use .dockerignore to exclude unnecessary files
# .dockerignore
node_modules
npm-debug.log
.git
.gitignore
README.md
.env
.nyc_output
coverage
.coverage
.vscode
.idea
*.swp
*.swo
*~
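
Layer caching is easy to confirm empirically: rebuild after touching only source code, and the dependency-install step is reused (the file path below is illustrative):

# First build populates the layer cache
docker build -t my-app .

# Change only source code, then rebuild: the npm ci layer reports CACHED
echo "// trivial change" >> src/index.js
docker build -t my-app .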

Build Optimization

# Use BuildKit for faster builds
export DOCKER_BUILDKIT=1
docker build -t my-app:latest .

# Multi-platform builds
docker buildx create --use
docker buildx build --platform linux/amd64,linux/arm64 -t my-app:latest --push .

# Reuse a previously built image as a build cache (inline cache metadata)
docker build \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  --cache-from my-app:cache \
  -t my-app:latest .
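
BuildKit can also persist the build cache in a registry so it survives across CI runners; a sketch using buildx cache exports (the cache ref is illustrative):

# Push the build cache alongside the image
docker buildx build \
  --cache-to type=registry,ref=registry.example.com/my-app:cache,mode=max \
  --cache-from type=registry,ref=registry.example.com/my-app:cache \
  -t my-app:latest --push .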

Monitoring and Logging

Application Monitoring

# docker-compose.monitoring.yml
version: '3.8'

services:
  # Application
  app:
    image: my-app:latest
    environment:
      - METRICS_ENABLED=true
      - METRICS_PORT=9090
    ports:
      - "3000:3000"
      - "9090:9090"  # Metrics endpoint

  # Prometheus
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9091:9090"
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--storage.tsdb.retention.time=200h'
      - '--web.enable-lifecycle'

  # Grafana
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3001:3000"
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=admin123
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning

  # Node Exporter
  node-exporter:
    image: prom/node-exporter:latest
    ports:
      - "9100:9100"
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'

  # cAdvisor
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    ports:
      - "8080:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro

volumes:
  prometheus_data:
  grafana_data:
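
Each exporter can be spot-checked before wiring up dashboards:

# Host metrics from node-exporter
curl -s http://localhost:9100/metrics | head

# Per-container metrics from cAdvisor
curl -s http://localhost:8080/metrics | head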

Centralized Logging

# docker-compose.logging.yml
version: '3.8'

services:
  # Application with structured logging
  app:
    image: my-app:latest
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
        labels: "service=app,environment=production"
    environment:
      - LOG_LEVEL=info
      - LOG_FORMAT=json

  # ELK Stack
  elasticsearch:
    image: elasticsearch:7.14.0
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - elasticsearch_data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"

  logstash:
    image: logstash:7.14.0
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro
    ports:
      - "5044:5044"
    depends_on:
      - elasticsearch

  kibana:
    image: kibana:7.14.0
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    depends_on:
      - elasticsearch

  # Filebeat for log shipping
  filebeat:
    image: elastic/filebeat:7.14.0
    user: root
    volumes:
      - ./filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    depends_on:
      - logstash

volumes:
  elasticsearch_data:
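
Once logs are flowing, Elasticsearch should show indices being created; these are standard Elasticsearch APIs:

# List indices and their document counts
curl -s 'http://localhost:9200/_cat/indices?v'

# Check cluster health
curl -s 'http://localhost:9200/_cluster/health?pretty'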

High Availability and Scaling

Load Balancing

# docker-compose.ha.yml
version: '3.8'

services:
  # Load Balancer
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
    depends_on:
      - app
    deploy:
      replicas: 2

  # Application (multiple instances)
  app:
    image: my-app:latest
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: rollback
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
    environment:
      - NODE_ENV=production
    depends_on:
      - database
      - redis

  # Database with replication
  # (the official postgres image does not implement replication via these
  # environment variables; images such as bitnami/postgresql support a
  # similar POSTGRESQL_REPLICATION_* scheme, shown here for illustration)
  database:
    image: postgres:13-alpine
    environment:
      - POSTGRES_REPLICATION_MODE=master
      - POSTGRES_REPLICATION_USER=replicator
      - POSTGRES_REPLICATION_PASSWORD=replicator_password
    volumes:
      - postgres_master_data:/var/lib/postgresql/data

  database-replica:
    image: postgres:13-alpine
    environment:
      - POSTGRES_REPLICATION_MODE=slave
      - POSTGRES_REPLICATION_USER=replicator
      - POSTGRES_REPLICATION_PASSWORD=replicator_password
      - POSTGRES_MASTER_HOST=database
    depends_on:
      - database

  # Redis Cluster (nodes must still be joined with `redis-cli --cluster create`)
  redis:
    image: redis:6-alpine
    command: redis-server --appendonly yes --cluster-enabled yes
    deploy:
      replicas: 3

volumes:
  postgres_master_data:
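
The replicas, update_config, and restart_policy settings take effect under swarm mode; with plain Compose, horizontal scaling is done with --scale instead:

# Swarm: rolling updates and replica counts are honored
docker stack deploy -c docker-compose.ha.yml myapp

# Plain Compose: scale the app service manually
docker-compose -f docker-compose.ha.yml up -d --scale app=3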

Health Checks and Circuit Breakers

# Application with health check
FROM node:16-alpine

WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .

# Health check endpoint
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD node healthcheck.js

EXPOSE 3000
CMD ["node", "server.js"]
// healthcheck.js
const http = require('http');

const options = {
  hostname: 'localhost',
  port: 3000,
  path: '/health',
  method: 'GET',
  timeout: 2000
};

const req = http.request(options, (res) => {
  if (res.statusCode === 200) {
    process.exit(0);
  } else {
    process.exit(1);
  }
});

req.on('error', () => {
  process.exit(1);
});

req.on('timeout', () => {
  req.destroy();
  process.exit(1);
});

req.end();
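
Docker runs this script on the configured interval and exposes the result through docker inspect (container name is illustrative):

# Current health state: starting, healthy, or unhealthy
docker inspect --format '{{.State.Health.Status}}' my-container

# Recent probe output, useful when a check starts failing
docker inspect --format '{{json .State.Health.Log}}' my-container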

Backup and Disaster Recovery

Database Backups

#!/bin/bash
# Automated backup script
BACKUP_DIR="/backups"
DATE=$(date +%Y%m%d_%H%M%S)
CONTAINER_NAME="postgres-db"

# Create backup
docker exec $CONTAINER_NAME pg_dump -U user -d myapp > $BACKUP_DIR/backup_$DATE.sql

# Compress backup
gzip $BACKUP_DIR/backup_$DATE.sql

# Remove old backups (keep last 7 days)
find $BACKUP_DIR -name "backup_*.sql.gz" -mtime +7 -delete

# Upload to S3 (optional)
aws s3 cp $BACKUP_DIR/backup_$DATE.sql.gz s3://my-backups/database/
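
A cron entry makes the script run unattended; the path and schedule below are illustrative:

# Run the backup every night at 02:00 (add with: crontab -e)
0 2 * * * /opt/scripts/backup.sh >> /var/log/db-backup.log 2>&1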

Volume Backups

# Backup named volume
docker run --rm \
  -v postgres_data:/data:ro \
  -v $(pwd):/backup \
  alpine \
  tar czf /backup/postgres_backup_$(date +%Y%m%d).tar.gz -C /data .

# Restore volume
docker run --rm \
  -v postgres_data:/data \
  -v $(pwd):/backup \
  alpine \
  tar xzf /backup/postgres_backup_20231201.tar.gz -C /data
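
Before restoring over live data, confirm the archive contents and stop anything using the volume:

# List archive contents without extracting
tar tzf postgres_backup_20231201.tar.gz | head

# Stop consumers of the volume first
docker-compose stop database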

Disaster Recovery Plan

# docker-compose.dr.yml
version: '3.8'

services:
  # Primary application
  app-primary:
    image: my-app:latest
    environment:
      - DATABASE_URL=postgresql://user:pass@db-primary:5432/myapp
      - REDIS_URL=redis://redis-primary:6379
    depends_on:
      - db-primary
      - redis-primary

  # Standby application
  app-standby:
    image: my-app:latest
    environment:
      - DATABASE_URL=postgresql://user:pass@db-standby:5432/myapp
      - REDIS_URL=redis://redis-standby:6379
    depends_on:
      - db-standby
      - redis-standby
    profiles:
      - disaster-recovery

  # Database replication (as above, these env vars assume an image that
  # supports them, such as bitnami/postgresql; shown for illustration)
  db-primary:
    image: postgres:13-alpine
    environment:
      - POSTGRES_REPLICATION_MODE=master
    volumes:
      - postgres_primary_data:/var/lib/postgresql/data

  db-standby:
    image: postgres:13-alpine
    environment:
      - POSTGRES_REPLICATION_MODE=slave
      - POSTGRES_MASTER_HOST=db-primary
    volumes:
      - postgres_standby_data:/var/lib/postgresql/data
    profiles:
      - disaster-recovery

volumes:
  postgres_primary_data:
  postgres_standby_data:
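
The disaster-recovery profile keeps the standby services defined but inactive, so failover is a single flag:

# Normal operation: only primary services start
docker-compose -f docker-compose.dr.yml up -d

# Failover: bring up the standby profile as well
docker-compose -f docker-compose.dr.yml --profile disaster-recovery up -d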

CI/CD Integration

GitLab CI Pipeline

# .gitlab-ci.yml
stages:
  - test
  - build
  - security
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"

services:
  - docker:20.10.16-dind

before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin $CI_REGISTRY

test:
  stage: test
  script:
    - docker build -t $CI_PROJECT_NAME:test --target test .
    - docker run --rm $CI_PROJECT_NAME:test npm test

build:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA $CI_REGISTRY_IMAGE:latest
    - docker push $CI_REGISTRY_IMAGE:latest

security_scan:
  stage: security
  script:
    - docker run --rm -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy:latest image --exit-code 1 $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

deploy_staging:
  stage: deploy
  script:
    - docker-compose -f docker-compose.staging.yml pull
    - docker-compose -f docker-compose.staging.yml up -d
  environment:
    name: staging
    url: https://staging.example.com
  only:
    - develop

deploy_production:
  stage: deploy
  script:
    - docker-compose -f docker-compose.prod.yml pull
    - docker-compose -f docker-compose.prod.yml up -d
  environment:
    name: production
    url: https://example.com
  only:
    - main
  when: manual

GitHub Actions

# .github/workflows/docker.yml
name: Docker Build and Deploy

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3
    
    - name: Build test image
      run: docker build -t test-image --target test .
    
    - name: Run tests
      run: docker run --rm test-image npm test

  build-and-push:
    runs-on: ubuntu-latest
    needs: test
    permissions:
      contents: read
      packages: write
    
    steps:
    - uses: actions/checkout@v3
    
    - name: Log in to Container Registry
      uses: docker/login-action@v2
      with:
        registry: ${{ env.REGISTRY }}
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}
    
    - name: Extract metadata
      id: meta
      uses: docker/metadata-action@v4
      with:
        images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
    
    - name: Build and push Docker image
      uses: docker/build-push-action@v4
      with:
        context: .
        push: true
        tags: ${{ steps.meta.outputs.tags }}
        labels: ${{ steps.meta.outputs.labels }}

  deploy:
    runs-on: ubuntu-latest
    needs: build-and-push
    if: github.ref == 'refs/heads/main'
    
    steps:
    - uses: actions/checkout@v3
    
    - name: Deploy to production
      run: |
        docker-compose -f docker-compose.prod.yml pull
        docker-compose -f docker-compose.prod.yml up -d

Troubleshooting Production Issues

Common Production Problems

# Container keeps restarting
docker logs container-name
docker inspect container-name --format='{{.State.ExitCode}}'
docker inspect container-name --format='{{.State.Error}}'

# High memory usage
docker stats --no-stream
docker exec container-name ps aux --sort=-%mem | head

# Network connectivity issues
docker exec container-name ping other-container
docker exec container-name nslookup service-name
docker network inspect network-name

# Volume permission issues
docker exec -u root container-name ls -la /data
docker exec -u root container-name chown -R appuser:appuser /data

# Performance issues
docker exec container-name top
docker exec container-name iostat -x 1
docker system df  # Check disk usage
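
For intermittent restarts, the Docker event stream shows exactly when and why a container cycles:

# Watch lifecycle events for one container
docker events --filter container=container-name \
  --filter event=die --filter event=restart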

Debugging Tools

# Debug container with tools
docker run -it --rm \
  --network container:target-container \
  --pid container:target-container \
  nicolaka/netshoot

# System resource monitoring (htop is not in the base alpine image; install it first)
docker run --rm -it \
  --pid host \
  --privileged \
  alpine sh -c "apk add --no-cache htop && htop"

# Container filesystem analysis (dive talks to the daemon via its socket)
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  wagoodman/dive:latest image-name

Summary

In this comprehensive section, you learned:

Production Security

  • Container hardening techniques
  • Runtime security configurations
  • Image vulnerability scanning
  • Secrets management

Performance Optimization

  • Image size optimization strategies
  • Resource management and limits
  • Caching and build optimization
  • Multi-stage build patterns

Monitoring and Operations

  • Application monitoring with Prometheus/Grafana
  • Centralized logging with ELK stack
  • Health checks and circuit breakers
  • High availability patterns

DevOps Integration

  • CI/CD pipeline integration
  • Automated testing and security scanning
  • Backup and disaster recovery
  • Troubleshooting production issues

Key Takeaways:

  • Always run containers as non-root users in production
  • Implement comprehensive monitoring and logging
  • Use multi-stage builds to minimize image size
  • Automate security scanning in your CI/CD pipeline
  • Plan for disaster recovery and backup strategies
  • Monitor resource usage and set appropriate limits

Congratulations! You’ve completed the Docker Fundamentals guide. You now have the knowledge to:

  • Build and deploy containerized applications
  • Implement security best practices
  • Optimize performance for production workloads
  • Monitor and troubleshoot Docker applications
  • Integrate Docker into CI/CD pipelines

Continue your Docker journey by exploring Kubernetes for container orchestration, Docker Swarm for clustering, or specialized topics like serverless containers with AWS Fargate or Google Cloud Run.