Introduction and Setup

Docker Networking and Storage: Introduction and Setup

Docker networking and storage are fundamental concepts for building scalable, production-ready containerized applications. This guide covers everything from basic concepts to advanced patterns for managing container connectivity and data persistence.

Docker Networking Fundamentals

Network Types Overview

Docker provides several network drivers:

┌─────────────────────────────────────────────────────────┐
│                    Docker Host                          │
├─────────────────────────────────────────────────────────┤
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐     │
│  │   Bridge    │  │    Host     │  │    None     │     │
│  │  Network    │  │  Network    │  │  Network    │     │
│  └─────────────┘  └─────────────┘  └─────────────┘     │
├─────────────────────────────────────────────────────────┤
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐     │
│  │   Overlay   │  │   Macvlan   │  │   Custom    │     │
│  │  Network    │  │  Network    │  │  Network    │     │
│  └─────────────┘  └─────────────┘  └─────────────┘     │
└─────────────────────────────────────────────────────────┘

Basic Network Commands

# List networks
docker network ls

# Inspect network
docker network inspect bridge

# Create custom network
docker network create mynetwork

# Create network with specific driver
docker network create --driver bridge mybridge

# Create network with custom subnet
docker network create --subnet=172.20.0.0/16 mysubnet

# Connect container to network
docker network connect mynetwork mycontainer

# Disconnect container from network
docker network disconnect mynetwork mycontainer

# Remove network
docker network rm mynetwork

Bridge Networks

Default bridge network:

# Run container on default bridge
docker run -d --name web nginx

# Run container with port mapping
docker run -d --name web -p 8080:80 nginx

# Inspect default bridge
docker network inspect bridge

Custom bridge networks (user-defined bridges add automatic DNS-based name resolution between containers, which the default bridge lacks):

# Create custom bridge
docker network create --driver bridge \
  --subnet=172.20.0.0/16 \
  --ip-range=172.20.240.0/20 \
  --gateway=172.20.0.1 \
  custom-bridge

# Run containers on custom bridge
docker run -d --name web --network custom-bridge nginx
docker run -d --name app --network custom-bridge alpine sleep 3600

# Test connectivity
docker exec app ping web  # Works with custom bridge
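The subnet flags above are easy to get wrong. Before running docker network create, the values can be sanity-checked with Python's standard ipaddress module; a small sketch using the numbers from the custom-bridge example:

```python
import ipaddress

# Values from the custom-bridge example above
subnet = ipaddress.ip_network("172.20.0.0/16")
ip_range = ipaddress.ip_network("172.20.240.0/20")
gateway = ipaddress.ip_address("172.20.0.1")

# --ip-range must sit inside --subnet, and the gateway inside the subnet
assert ip_range.subnet_of(subnet)
assert gateway in subnet

# Number of addresses the /20 --ip-range can hand out to containers
print(ip_range.num_addresses)  # 4096
```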

Docker Storage Fundamentals

Storage Types Overview

┌─────────────────────────────────────────────────────────┐
│                 Docker Storage                          │
├─────────────────────────────────────────────────────────┤
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐     │
│  │   Volumes   │  │ Bind Mounts │  │   tmpfs     │     │
│  │ (Managed by │  │ (Host Path) │  │ (Memory)    │     │
│  │   Docker)   │  │             │  │             │     │
│  └─────────────┘  └─────────────┘  └─────────────┘     │
└─────────────────────────────────────────────────────────┘

Volume Management

# List volumes
docker volume ls

# Create volume
docker volume create myvolume

# Create volume with driver options
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.1,rw \
  --opt device=:/path/to/dir \
  nfs-volume

# Inspect volume
docker volume inspect myvolume

# Remove volume
docker volume rm myvolume

# Remove unused volumes
docker volume prune

Using Volumes

# Mount named volume
docker run -d --name db \
  -v myvolume:/var/lib/mysql \
  mysql:8.0

# Bind mount a host directory (host path must be absolute)
docker run -d --name web \
  -v /host/path:/container/path \
  nginx

# Mount a named volume read-only
docker run -d --name app \
  -v myvolume:/data:ro \
  alpine sleep 3600

# Mount tmpfs
docker run -d --name temp \
  --tmpfs /tmp:rw,noexec,nosuid,size=100m \
  alpine sleep 3600

Network Configuration Examples

Multi-Container Application

# Create application network
docker network create app-network

# Database container
docker run -d --name database \
  --network app-network \
  -e MYSQL_ROOT_PASSWORD=rootpass \
  -e MYSQL_DATABASE=appdb \
  -e MYSQL_USER=appuser \
  -e MYSQL_PASSWORD=apppass \
  mysql:8.0

# Application container
docker run -d --name backend \
  --network app-network \
  -e DATABASE_URL=mysql://appuser:apppass@database:3306/appdb \
  myapp:backend

# Frontend container with port exposure
docker run -d --name frontend \
  --network app-network \
  -p 3000:3000 \
  -e API_URL=http://backend:8000 \
  myapp:frontend

# Test connectivity
docker exec frontend curl http://backend:8000/health
docker exec backend mysql -h database -u appuser -papppass appdb

Network Isolation

# Create isolated networks
docker network create --internal backend-network
docker network create frontend-network

# Database (backend only)
docker run -d --name db \
  --network backend-network \
  postgres:13

# API server (both networks)
docker run -d --name api \
  --network backend-network \
  myapp:api

docker network connect frontend-network api

# Web server (frontend only)
docker run -d --name web \
  --network frontend-network \
  -p 80:80 \
  nginx

# Database is not accessible from web server
docker exec web ping db  # This will fail

Storage Configuration Examples

Database with Persistent Storage

# Create volume for database
docker volume create postgres-data

# Run PostgreSQL with persistent storage
docker run -d --name postgres \
  -v postgres-data:/var/lib/postgresql/data \
  -e POSTGRES_DB=myapp \
  -e POSTGRES_USER=user \
  -e POSTGRES_PASSWORD=password \
  postgres:13

# Backup database
docker exec postgres pg_dump -U user myapp > backup.sql

# Restore database
docker exec -i postgres psql -U user myapp < backup.sql

Application with Configuration

# Create volumes
docker volume create app-data
docker volume create app-config

# Run application with multiple mounts
docker run -d --name myapp \
  -v app-data:/app/data \
  -v app-config:/app/config \
  -v /host/logs:/app/logs \
  -v /etc/localtime:/etc/localtime:ro \
  myapp:latest

# Initialize configuration
docker exec myapp cp /app/config.template.json /app/config/config.json

Shared Storage Between Containers

# Create shared volume
docker volume create shared-data

# Writer container
docker run -d --name writer \
  -v shared-data:/data \
  alpine sh -c 'while true; do echo "$(date)" >> /data/log.txt; sleep 10; done'

# Reader container (waits for the file, then follows it)
docker run -d --name reader \
  -v shared-data:/data:ro \
  alpine sh -c 'while [ ! -f /data/log.txt ]; do sleep 1; done; tail -f /data/log.txt'

# Monitor shared data
docker logs reader

Docker Compose Integration

Network and Storage in Compose

version: '3.8'

services:
  web:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./html:/usr/share/nginx/html:ro
      - web-logs:/var/log/nginx
    networks:
      - frontend
    depends_on:
      - api

  api:
    build: ./api
    volumes:
      - api-data:/app/data
      - ./config:/app/config:ro
    networks:
      - frontend
      - backend
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/myapp
    depends_on:
      - db

  db:
    image: postgres:13
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    networks:
      - backend
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass

volumes:
  postgres-data:
  api-data:
  web-logs:

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true

Advanced Network Configuration

version: '3.8'

services:
  app:
    image: myapp
    networks:
      app-network:
        ipv4_address: 172.20.0.10
        aliases:
          - api-server
          - backend

  db:
    image: postgres:13
    networks:
      app-network:
        ipv4_address: 172.20.0.20

networks:
  app-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
          gateway: 172.20.0.1
    driver_opts:
      com.docker.network.bridge.name: app-bridge
      com.docker.network.bridge.enable_icc: "true"
      com.docker.network.bridge.enable_ip_masquerade: "true"

Troubleshooting Common Issues

Network Connectivity

# Check container networking
docker exec container ip addr show
docker exec container ip route show
docker exec container netstat -tlnp

# Test connectivity between containers
docker exec container1 ping container2
docker exec container1 nc -zv container2 port
docker exec container1 nslookup container2

# Inspect network configuration
docker network inspect network-name
docker inspect container-name | grep -A 20 NetworkSettings
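When grep-ing inspect output gets unwieldy, the same data can be pulled as JSON (for example with docker inspect --format '{{json .NetworkSettings.Networks}}' container-name) and processed programmatically. A Python sketch against a made-up sample of that structure (field names follow Docker's inspect format; the values are illustrative):

```python
import json

# Trimmed, made-up sample of `docker inspect` output; the field names
# (NetworkSettings.Networks.<name>.IPAddress/Gateway) follow Docker's format
sample = ('[{"NetworkSettings": {"Networks": {"app-network": '
          '{"IPAddress": "172.20.0.10", "Gateway": "172.20.0.1", '
          '"Aliases": ["api-server"]}}}}]')

networks = json.loads(sample)[0]["NetworkSettings"]["Networks"]
for name, net in networks.items():
    print(f"{name}: {net['IPAddress']} (gateway {net['Gateway']})")
# app-network: 172.20.0.10 (gateway 172.20.0.1)
```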

Storage Issues

# Check volume mounts
docker inspect container-name | grep -A 10 Mounts

# Check volume usage
docker system df
docker volume ls
docker volume inspect volume-name

# Check file permissions
docker exec container ls -la /mount/point
docker exec container id  # Check user/group

# Debug storage issues
docker run --rm -v volume-name:/data alpine ls -la /data
docker run --rm -v /host/path:/data alpine ls -la /data

Performance Monitoring

# Monitor network traffic
docker exec container netstat -i
docker exec container ss -tuln

# Monitor storage I/O
docker exec container iostat -x 1
docker exec container df -h

# Container resource usage
docker stats
docker exec container top

Summary

In this introduction, you’ve learned:

Networking Fundamentals

  • Network Types: Bridge, host, overlay, and custom networks
  • Network Commands: Creating, managing, and troubleshooting networks
  • Container Connectivity: Service discovery and inter-container communication

Storage Fundamentals

  • Storage Types: Volumes, bind mounts, and tmpfs
  • Volume Management: Creating, mounting, and managing persistent storage
  • Data Persistence: Database storage and application data management

Practical Applications

  • Multi-Container Apps: Network isolation and service communication
  • Docker Compose: Declarative network and storage configuration
  • Troubleshooting: Common issues and debugging techniques

Key Concepts Mastered

  • Network Isolation: Separating frontend and backend services
  • Data Persistence: Ensuring data survives container restarts
  • Service Discovery: Container-to-container communication
  • Configuration Management: Mounting config files and secrets

Next Steps: Part 2 explores core concepts including advanced networking patterns, storage drivers, and performance optimization techniques that form the foundation of production-ready Docker deployments.

Core Concepts and Fundamentals

Core Networking and Storage Concepts

This section explores advanced networking patterns, storage drivers, and performance optimization techniques essential for production Docker deployments.

Advanced Networking Concepts

Network Namespaces and Isolation

# Inspect container network namespaces
docker exec container ls -la /proc/self/ns/  # namespace IDs as the container sees them
docker inspect -f '{{.State.Pid}}' container  # host PID, for nsenter-style inspection

# Create container with host networking
docker run -d --name host-net --network host nginx

# Create container with no networking
docker run -d --name no-net --network none alpine sleep 3600

# Share network namespace between containers
docker run -d --name primary alpine sleep 3600
docker run -d --name secondary --network container:primary alpine sleep 3600

Custom Network Drivers

# Create macvlan network
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  macvlan-net

# Create ipvlan network
docker network create -d ipvlan \
  --subnet=192.168.2.0/24 \
  --gateway=192.168.2.1 \
  -o parent=eth0 \
  -o ipvlan_mode=l2 \
  ipvlan-net

# Use an external network plugin (requires the plugin, e.g. Weave Net, to be installed first)
docker network create -d weave \
  --subnet=10.32.0.0/12 \
  weave-net
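Macvlan and ipvlan place containers directly on the physical LAN, so the addresses Docker assigns must not collide with the router's DHCP pool; in practice you scope allocation with --ip-range (and exclude in-use hosts with --aux-address). A Python sketch for checking a candidate range, where the DHCP pool shown is an assumed example:

```python
import ipaddress

lan = ipaddress.ip_network("192.168.1.0/24")          # the macvlan --subnet
dhcp_pool = ipaddress.ip_network("192.168.1.128/25")  # assumed router DHCP range
candidate = ipaddress.ip_network("192.168.1.64/26")   # candidate --ip-range

# The container range must live on the LAN but stay clear of DHCP
assert candidate.subnet_of(lan)
assert not candidate.overlaps(dhcp_pool)
print(f"{candidate} is safe to hand to --ip-range")
```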

Network Security and Policies

# Create network with encryption (overlay)
docker network create \
  --driver overlay \
  --opt encrypted \
  --subnet=10.0.0.0/24 \
  secure-overlay

# Network with custom iptables rules
docker network create \
  --driver bridge \
  --opt com.docker.network.bridge.enable_icc=false \
  --opt com.docker.network.bridge.enable_ip_masquerade=true \
  isolated-bridge

# Container with specific security options
docker run -d --name secure-app \
  --network isolated-bridge \
  --security-opt no-new-privileges \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  nginx

Advanced Storage Concepts

Storage Drivers and Backends

# Local driver with specific options
docker volume create \
  --driver local \
  --opt type=ext4 \
  --opt device=/dev/sdb1 \
  local-volume

# NFS volume
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.100,rw \
  --opt device=:/exports/data \
  nfs-volume

# CIFS/SMB volume
docker volume create \
  --driver local \
  --opt type=cifs \
  --opt o=username=user,password=pass,uid=1000,gid=1000 \
  --opt device=//server/share \
  cifs-volume

# Volume on an encrypted device (the local driver itself has no
# encryption option; set up LUKS/dm-crypt on the host first)
docker volume create \
  --driver local \
  --opt type=ext4 \
  --opt device=/dev/mapper/encrypted-disk \
  encrypted-volume

Volume Plugins

# Install volume plugin
docker plugin install rexray/ebs

# Create EBS volume
docker volume create \
  --driver rexray/ebs \
  --opt size=10 \
  --opt volumetype=gp2 \
  ebs-volume

# Use cloud storage
docker volume create \
  --driver rexray/s3fs \
  --opt bucket=my-bucket \
  --opt region=us-east-1 \
  s3-volume

Storage Performance Optimization

# docker-compose.yml with optimized storage
version: '3.8'

services:
  database:
    image: postgres:13
    volumes:
      # Separate data and WAL for performance
      - postgres-data:/var/lib/postgresql/data
      - postgres-wal:/var/lib/postgresql/wal
      # tmpfs for temporary files
      - type: tmpfs
        target: /tmp
        tmpfs:
          size: 1G
      # Shared memory for PostgreSQL
      - type: tmpfs
        target: /dev/shm
        tmpfs:
          size: 2G
    environment:
      - POSTGRES_INITDB_WALDIR=/var/lib/postgresql/wal

  redis:
    image: redis:7-alpine
    volumes:
      - redis-data:/data
      # tmpfs for Redis working directory
      - type: tmpfs
        target: /tmp
        tmpfs:
          size: 512M
    command: |
      redis-server
      --save 900 1
      --save 300 10
      --save 60 10000
      --dir /data

volumes:
  postgres-data:
    driver: local
    driver_opts:
      type: ext4
      o: noatime,nodiratime
  postgres-wal:
    driver: local
    driver_opts:
      type: ext4
      o: noatime,nodiratime,sync
  redis-data:
    driver: local

Network Performance and Monitoring

Network Performance Tuning

# Optimize network performance
docker run -d --name optimized-app \
  --network custom-bridge \
  --sysctl net.core.rmem_max=134217728 \
  --sysctl net.core.wmem_max=134217728 \
  --sysctl net.ipv4.tcp_rmem="4096 65536 134217728" \
  --sysctl net.ipv4.tcp_wmem="4096 65536 134217728" \
  --sysctl net.ipv4.tcp_congestion_control=bbr \
  myapp

# Container with increased network buffers
docker run -d --name high-throughput \
  --ulimit nofile=65536:65536 \
  --sysctl net.core.netdev_max_backlog=5000 \
  --sysctl net.core.netdev_budget=600 \
  nginx

Network Monitoring

# Monitor network statistics
docker exec container cat /proc/net/dev
docker exec container ss -tuln
docker exec container netstat -i

# Network performance testing
docker run --rm --network container:target \
  nicolaka/netshoot iperf3 -c server-ip

# Packet capture
docker run --rm --network container:target \
  nicolaka/netshoot tcpdump -i eth0 -w capture.pcap

# Network troubleshooting toolkit
docker run -it --rm --network container:target \
  nicolaka/netshoot

Storage Performance and Monitoring

Storage Performance Tuning

# High-performance storage mount
docker run -d --name fast-db \
  -v fast-storage:/var/lib/mysql \
  --mount type=tmpfs,destination=/tmp,tmpfs-size=1G \
  --device-read-bps /dev/sda:100mb \
  --device-write-bps /dev/sda:100mb \
  mysql:8.0

# Storage with specific I/O scheduler
docker volume create \
  --driver local \
  --opt type=ext4 \
  --opt o=noatime,data=writeback \
  fast-volume

Storage Monitoring

# Monitor storage usage
docker system df -v
docker exec container df -h
docker exec container iostat -x 1

# Volume inspection
docker volume inspect volume-name
docker exec container lsblk
docker exec container mount | grep volume

# Storage performance testing
docker run --rm -v test-volume:/data \
  alpine dd if=/dev/zero of=/data/testfile bs=1M count=1000

Multi-Host Networking

Docker Swarm Overlay Networks

# Initialize swarm
docker swarm init --advertise-addr 192.168.1.10

# Create overlay network
docker network create \
  --driver overlay \
  --subnet=10.0.0.0/24 \
  --gateway=10.0.0.1 \
  --attachable \
  multi-host-net

# Deploy service on overlay network
docker service create \
  --name web \
  --network multi-host-net \
  --replicas 3 \
  nginx

# Inspect overlay network
docker network inspect multi-host-net

External Network Integration

# Connect to external network
docker network create \
  --driver bridge \
  --subnet=172.20.0.0/16 \
  --gateway=172.20.0.1 \
  --opt com.docker.network.bridge.name=docker-ext \
  external-bridge

# Route traffic to external services
docker run -d --name proxy \
  --network external-bridge \
  -p 80:80 \
  -v ./nginx.conf:/etc/nginx/nginx.conf \
  nginx

Security Best Practices

Network Security

# Secure network configuration
version: '3.8'

services:
  web:
    image: nginx
    networks:
      - frontend
    ports:
      - "443:443"
    volumes:
      - ./ssl:/etc/nginx/ssl:ro

  app:
    image: myapp
    networks:
      - frontend
      - backend
    # No exposed ports - only accessible through nginx

  db:
    image: postgres:13
    networks:
      - backend  # Isolated from frontend
    volumes:
      - db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_SSL_MODE=require

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true  # No external access

volumes:
  db-data:
    driver: local
    driver_opts:
      type: ext4
      o: noatime,nodev,nosuid

Storage Security

# Volume on a host-side encrypted (LUKS/dm-crypt) device; note that
# uid/gid/mode are not ext4 mount options — set ownership on the
# filesystem itself or from inside the container
docker volume create \
  --driver local \
  --opt type=ext4 \
  --opt device=/dev/mapper/secure-disk \
  secure-volume

# Read-only mounts for security
docker run -d --name secure-app \
  -v config-volume:/app/config:ro \
  -v secrets-volume:/app/secrets:ro,Z \
  --read-only \
  --tmpfs /tmp:noexec,nosuid,size=100m \
  myapp

# SELinux labels for enhanced security
docker run -d --name selinux-app \
  -v data-volume:/app/data:Z \
  -v logs-volume:/app/logs:z \
  --security-opt label=type:container_t \
  myapp

Backup and Recovery

Network Configuration Backup

#!/bin/bash
# backup-networks.sh

# Back up custom network configurations
mkdir -p network-backup
for network in $(docker network ls --filter type=custom --format "{{.Name}}"); do
    docker network inspect "$network" > "network-backup/$network.json"
done

# Restore networks (simplified: recreates name and driver only)
for config in network-backup/*.json; do
    network_name=$(basename "$config" .json)
    docker network create --driver "$(jq -r '.[0].Driver' "$config")" "$network_name"
done
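The restore loop above only recreates the driver; the remaining docker network create flags can be rebuilt from the same inspect JSON. A Python sketch covering just driver, subnet, gateway, and the internal flag (field names follow Docker's network inspect output):

```python
import json

# Rebuild a fuller `docker network create` command from a saved inspect
# file; field names (Driver, IPAM.Config, Internal, Name) follow Docker's
# `network inspect` JSON
def create_command(inspect_json: str) -> str:
    net = json.loads(inspect_json)[0]
    parts = ["docker", "network", "create", "--driver", net["Driver"]]
    for cfg in net.get("IPAM", {}).get("Config", []):
        if "Subnet" in cfg:
            parts += ["--subnet", cfg["Subnet"]]
        if "Gateway" in cfg:
            parts += ["--gateway", cfg["Gateway"]]
    if net.get("Internal"):
        parts.append("--internal")
    parts.append(net["Name"])
    return " ".join(parts)

sample = ('[{"Name": "app-network", "Driver": "bridge", "Internal": false, '
          '"IPAM": {"Config": [{"Subnet": "172.20.0.0/16", '
          '"Gateway": "172.20.0.1"}]}}]')
print(create_command(sample))
# docker network create --driver bridge --subnet 172.20.0.0/16 --gateway 172.20.0.1 app-network
```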

Storage Backup Strategies

#!/bin/bash
# backup-volumes.sh

# Back up volume data
backup_volume() {
    local volume_name=$1
    local backup_path=$2

    docker run --rm \
        -v "$volume_name":/source:ro \
        -v "$backup_path":/backup \
        alpine tar czf "/backup/$volume_name-$(date +%Y%m%d).tar.gz" -C /source .
}

# Restore volume data
restore_volume() {
    local volume_name=$1
    local backup_file=$2

    docker run --rm \
        -v "$volume_name":/target \
        -v "$(dirname "$backup_file")":/backup \
        alpine tar xzf "/backup/$(basename "$backup_file")" -C /target
}

# Back up all volumes (docker -v requires an absolute host path)
backup_dir=$(pwd)/backups
mkdir -p "$backup_dir"
for volume in $(docker volume ls -q); do
    backup_volume "$volume" "$backup_dir"
done
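Date-stamped archives like the ones backup-volumes.sh produces eventually need rotation. A Python sketch of a keep-the-newest-N policy; the 7-backup retention count is an assumption, not part of the script above:

```python
from datetime import date
from pathlib import Path

# Mirrors the tar naming in backup-volumes.sh; the keep-7 retention
# policy below is an assumed example, not part of the script above
def backup_name(volume: str, day: date) -> str:
    return f"{volume}-{day.strftime('%Y%m%d')}.tar.gz"

def prune(backup_dir: Path, volume: str, keep: int = 7) -> list:
    files = sorted(backup_dir.glob(f"{volume}-*.tar.gz"))  # names sort by date
    stale = files[:-keep] if len(files) > keep else []
    for f in stale:
        f.unlink()  # delete everything but the newest `keep` archives
    return [f.name for f in stale]

print(backup_name("postgres-data", date(2024, 3, 1)))  # postgres-data-20240301.tar.gz
```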

Summary

This section covered core networking and storage concepts:

Advanced Networking

  • Network Drivers: Macvlan, ipvlan, and custom network plugins
  • Security Policies: Network isolation and encrypted communications
  • Performance Tuning: Kernel parameters and buffer optimization
  • Multi-Host: Overlay networks and external integrations

Advanced Storage

  • Storage Drivers: NFS, CIFS, and cloud storage backends
  • Volume Plugins: Third-party storage solutions and cloud integration
  • Performance: I/O optimization and monitoring techniques
  • Security: Encryption, permissions, and SELinux integration

Operational Excellence

  • Monitoring: Network and storage performance metrics
  • Troubleshooting: Debugging tools and techniques
  • Backup/Recovery: Configuration and data protection strategies
  • Security: Best practices for production deployments

Next Steps: Part 3 explores practical applications including real-world networking architectures, storage patterns, and enterprise deployment scenarios.

Practical Applications and Examples

Practical Networking and Storage Applications

This section demonstrates real-world Docker networking and storage scenarios, from microservices architectures to high-availability deployments and enterprise storage solutions.

Microservices Network Architecture

Complete E-Commerce Platform

# docker-compose.microservices.yml
version: '3.8'

services:
  # API Gateway
  api-gateway:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
    networks:
      - frontend
      - monitoring
    depends_on:
      - user-service
      - product-service
      - order-service

  # User Service
  user-service:
    build: ./services/user
    networks:
      - frontend
      - backend
      - user-db-net
    volumes:
      - user-logs:/app/logs
    environment:
      - DATABASE_URL=postgresql://user:pass@user-db:5432/users
      - REDIS_URL=redis://redis-cluster:6379/0
    depends_on:
      - user-db
      - redis-cluster

  user-db:
    image: postgres:13
    networks:
      - user-db-net
    volumes:
      - user-db-data:/var/lib/postgresql/data
      - user-db-config:/etc/postgresql
    environment:
      - POSTGRES_DB=users
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass

  # Product Service
  product-service:
    build: ./services/product
    networks:
      - frontend
      - backend
      - product-db-net
    volumes:
      - product-logs:/app/logs
      - product-images:/app/images
    environment:
      - DATABASE_URL=postgresql://product:pass@product-db:5432/products
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    depends_on:
      - product-db
      - elasticsearch

  product-db:
    image: postgres:13
    networks:
      - product-db-net
    volumes:
      - product-db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=products
      - POSTGRES_USER=product
      - POSTGRES_PASSWORD=pass

  # Order Service
  order-service:
    build: ./services/order
    networks:
      - frontend
      - backend
      - order-db-net
    volumes:
      - order-logs:/app/logs
    environment:
      - DATABASE_URL=postgresql://order:pass@order-db:5432/orders
      - RABBITMQ_URL=amqp://guest:guest@rabbitmq:5672/
    depends_on:
      - order-db
      - rabbitmq

  order-db:
    image: postgres:13
    networks:
      - order-db-net
    volumes:
      - order-db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=orders
      - POSTGRES_USER=order
      - POSTGRES_PASSWORD=pass

  # Shared Services
  redis-cluster:
    image: redis:7-alpine
    networks:
      - backend
    volumes:
      - redis-data:/data
    command: |
      redis-server
      --cluster-enabled yes
      --cluster-config-file nodes.conf
      --cluster-node-timeout 5000
      --appendonly yes

  elasticsearch:
    image: elasticsearch:7.17.0
    networks:
      - backend
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"

  rabbitmq:
    image: rabbitmq:3-management
    networks:
      - backend
    volumes:
      - rabbitmq-data:/var/lib/rabbitmq
    environment:
      - RABBITMQ_DEFAULT_USER=guest
      - RABBITMQ_DEFAULT_PASS=guest

  # Monitoring
  prometheus:
    image: prom/prometheus
    networks:
      - monitoring
    volumes:
      - prometheus-data:/prometheus
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro

networks:
  frontend:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/24
  backend:
    driver: bridge
    internal: true
    ipam:
      config:
        - subnet: 172.21.0.0/24
  user-db-net:
    driver: bridge
    internal: true
  product-db-net:
    driver: bridge
    internal: true
  order-db-net:
    driver: bridge
    internal: true
  monitoring:
    driver: bridge

volumes:
  user-db-data:
  product-db-data:
  order-db-data:
  user-db-config:
  redis-data:
  elasticsearch-data:
  rabbitmq-data:
  prometheus-data:
  user-logs:
  product-logs:
  order-logs:
  product-images:

High-Availability Storage Setup

Database Cluster with Replication

# docker-compose.ha-database.yml
version: '3.8'

services:
  # PostgreSQL Master
  # NOTE: the POSTGRES_REPLICATION_* variables below are honored by
  # replication-aware images such as bitnami/postgresql; the official
  # postgres image ignores them and needs manual streaming-replication setup
  postgres-master:
    image: postgres:13
    networks:
      - db-cluster
    volumes:
      - postgres-master-data:/var/lib/postgresql/data
      - postgres-master-config:/etc/postgresql
      - ./postgres/master.conf:/etc/postgresql/postgresql.conf:ro
      - ./postgres/pg_hba.conf:/etc/postgresql/pg_hba.conf:ro
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=app
      - POSTGRES_PASSWORD=password
      - POSTGRES_REPLICATION_MODE=master
      - POSTGRES_REPLICATION_USER=replicator
      - POSTGRES_REPLICATION_PASSWORD=repl_password
    command: |
      postgres
      -c config_file=/etc/postgresql/postgresql.conf
      -c hba_file=/etc/postgresql/pg_hba.conf

  # PostgreSQL Slave 1
  postgres-slave1:
    image: postgres:13
    networks:
      - db-cluster
    volumes:
      - postgres-slave1-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_REPLICATION_MODE=slave
      - POSTGRES_MASTER_HOST=postgres-master
      - POSTGRES_REPLICATION_USER=replicator
      - POSTGRES_REPLICATION_PASSWORD=repl_password
    depends_on:
      - postgres-master

  # PostgreSQL Slave 2
  postgres-slave2:
    image: postgres:13
    networks:
      - db-cluster
    volumes:
      - postgres-slave2-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_REPLICATION_MODE=slave
      - POSTGRES_MASTER_HOST=postgres-master
      - POSTGRES_REPLICATION_USER=replicator
      - POSTGRES_REPLICATION_PASSWORD=repl_password
    depends_on:
      - postgres-master

  # PgBouncer Connection Pooler
  pgbouncer:
    image: pgbouncer/pgbouncer
    networks:
      - db-cluster
      - app-network
    volumes:
      - ./pgbouncer/pgbouncer.ini:/etc/pgbouncer/pgbouncer.ini:ro
      - ./pgbouncer/userlist.txt:/etc/pgbouncer/userlist.txt:ro
    environment:
      - DATABASES_HOST=postgres-master
      - DATABASES_PORT=5432
      - DATABASES_USER=app
      - DATABASES_PASSWORD=password
      - DATABASES_DBNAME=myapp
    depends_on:
      - postgres-master

  # HAProxy for Load Balancing
  haproxy:
    image: haproxy:2.6
    networks:
      - db-cluster
      - app-network
    volumes:
      - ./haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    ports:
      - "5432:5432"  # PostgreSQL
      - "8404:8404"  # Stats
    depends_on:
      - postgres-master
      - postgres-slave1
      - postgres-slave2

networks:
  db-cluster:
    driver: bridge
    internal: true
  app-network:
    driver: bridge

volumes:
  postgres-master-data:
    driver: local
    driver_opts:
      type: ext4
      o: noatime
  postgres-slave1-data:
    driver: local
  postgres-slave2-data:
    driver: local
  postgres-master-config:
    driver: local

Distributed Storage Solutions

GlusterFS Distributed Storage

# docker-compose.glusterfs.yml
version: '3.8'

services:
  # GlusterFS Node 1
  gluster1:
    image: gluster/gluster-centos
    privileged: true
    networks:
      storage-network:
        ipv4_address: 172.25.0.10
    volumes:
      - gluster1-data:/data
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    hostname: gluster1

  # GlusterFS Node 2
  gluster2:
    image: gluster/gluster-centos
    privileged: true
    networks:
      storage-network:
        ipv4_address: 172.25.0.11
    volumes:
      - gluster2-data:/data
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    hostname: gluster2

  # GlusterFS Node 3
  gluster3:
    image: gluster/gluster-centos
    privileged: true
    networks:
      storage-network:
        ipv4_address: 172.25.0.12
    volumes:
      - gluster3-data:/data
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    hostname: gluster3

  # Application using GlusterFS
  app:
    image: myapp
    networks:
      - storage-network
      - app-network
    volumes:
      - type: volume
        source: distributed-storage
        target: /app/data
        volume:
          driver: local
          driver_opts:
            type: glusterfs
            o: "addr=172.25.0.10:172.25.0.11:172.25.0.12"
            device: "gv0"
    depends_on:
      - gluster1
      - gluster2
      - gluster3

networks:
  storage-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.25.0.0/24
  app-network:
    driver: bridge

volumes:
  gluster1-data:
  gluster2-data:
  gluster3-data:
  distributed-storage:
    external: true

Ceph Storage Cluster

# docker-compose.ceph.yml
version: '3.8'

services:
  # Ceph Monitor
  ceph-mon:
    image: ceph/daemon:latest  # ceph/daemon implements the mon/osd/mds commands used here
    networks:
      - ceph-network
    volumes:
      - ceph-mon-data:/var/lib/ceph/mon
      - ceph-config:/etc/ceph
    environment:
      - MON_IP=172.26.0.10
      - CEPH_PUBLIC_NETWORK=172.26.0.0/24
    command: mon

  # Ceph OSD 1
  ceph-osd1:
    image: ceph/daemon:latest
    privileged: true
    networks:
      - ceph-network
    volumes:
      - ceph-osd1-data:/var/lib/ceph/osd
      - ceph-config:/etc/ceph
    environment:
      - OSD_DEVICE=/dev/loop0
      - OSD_TYPE=directory
    depends_on:
      - ceph-mon
    command: osd

  # Ceph OSD 2
  ceph-osd2:
    image: ceph/daemon:latest
    privileged: true
    networks:
      - ceph-network
    volumes:
      - ceph-osd2-data:/var/lib/ceph/osd
      - ceph-config:/etc/ceph
    environment:
      - OSD_DEVICE=/dev/loop1
      - OSD_TYPE=directory
    depends_on:
      - ceph-mon
    command: osd

  # Ceph MDS (Metadata Server)
  ceph-mds:
    image: ceph/daemon:latest
    networks:
      - ceph-network
    volumes:
      - ceph-mds-data:/var/lib/ceph/mds
      - ceph-config:/etc/ceph
    depends_on:
      - ceph-mon
    command: mds

networks:
  ceph-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.26.0.0/24

volumes:
  ceph-mon-data:
  ceph-osd1-data:
  ceph-osd2-data:
  ceph-mds-data:
  ceph-config:

Network Security Implementations

Zero-Trust Network Architecture

# docker-compose.zero-trust.yml
version: '3.8'

services:
  # Istio Proxy (Envoy)
  istio-proxy:
    image: istio/proxyv2:latest
    networks:
      - service-mesh
    volumes:
      - ./istio/envoy.yaml:/etc/envoy/envoy.yaml:ro
      - istio-certs:/etc/ssl/certs
    command: |
      /usr/local/bin/envoy
      -c /etc/envoy/envoy.yaml
      --service-cluster proxy

  # Application with mTLS
  secure-app:
    build: ./secure-app
    networks:
      - service-mesh
    volumes:
      - app-certs:/app/certs:ro
    environment:
      - TLS_CERT_FILE=/app/certs/app.crt
      - TLS_KEY_FILE=/app/certs/app.key
      - CA_CERT_FILE=/app/certs/ca.crt

  # Certificate Authority
  cert-manager:
    image: jetstack/cert-manager-controller:latest
    networks:
      - service-mesh
    volumes:
      - ca-data:/var/lib/cert-manager
      - ./cert-manager/config.yaml:/etc/cert-manager/config.yaml:ro

  # Network Policy Controller
  network-policy:
    image: calico/kube-controllers:latest
    networks:
      - service-mesh
    volumes:
      - ./network-policies:/etc/calico/policies:ro
    environment:
      - ENABLED_CONTROLLERS=policy

networks:
  service-mesh:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.enable_icc: "false"
    ipam:
      config:
        - subnet: 172.27.0.0/24

volumes:
  istio-certs:
  app-certs:
  ca-data:
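The secure-app service above only declares where its certificates live. As a minimal sketch (assuming nothing beyond the standard library), this is roughly how an application would turn those TLS_CERT_FILE / TLS_KEY_FILE / CA_CERT_FILE paths into a server-side mTLS configuration that rejects clients not signed by the private CA:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
)

// buildMTLSConfig assembles a server TLS config that requires a client
// certificate signed by the given CA, mirroring the env vars above.
func buildMTLSConfig(certFile, keyFile, caFile string) (*tls.Config, error) {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, fmt.Errorf("load server keypair: %w", err)
	}
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return nil, fmt.Errorf("read CA cert: %w", err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, fmt.Errorf("no valid CA certificates in %s", caFile)
	}
	return &tls.Config{
		Certificates: []tls.Certificate{cert},
		ClientCAs:    pool,
		// Reject any peer that cannot present a CA-signed client cert.
		ClientAuth: tls.RequireAndVerifyClientCert,
		MinVersion: tls.VersionTLS12,
	}, nil
}

func main() {
	cfg, err := buildMTLSConfig(
		os.Getenv("TLS_CERT_FILE"),
		os.Getenv("TLS_KEY_FILE"),
		os.Getenv("CA_CERT_FILE"),
	)
	if err != nil {
		fmt.Println("mTLS config error:", err)
		return
	}
	fmt.Println("client auth mode:", cfg.ClientAuth)
}
```

In a real deployment the certificates would be rotated by the cert-manager service, so the config should be reloaded (e.g. via tls.Config.GetCertificate) rather than built once at startup.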

Performance Optimization Examples

High-Performance Web Stack

# docker-compose.performance.yml
version: '3.8'

services:
  # Nginx with optimized configuration
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    networks:
      - frontend
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - nginx-cache:/var/cache/nginx
      - type: tmpfs
        target: /tmp
        tmpfs:
          size: 512M
    sysctls:
      - net.core.somaxconn=65535
      - net.ipv4.tcp_max_syn_backlog=65535
    ulimits:
      nofile:
        soft: 65535
        hard: 65535

  # Application with performance tuning
  app:
    build: ./app
    networks:
      - frontend
      - backend
    volumes:
      - app-data:/app/data
      - type: tmpfs
        target: /app/tmp
        tmpfs:
          size: 1G
    environment:
      - NODE_ENV=production
      - UV_THREADPOOL_SIZE=128
    sysctls:
      - net.core.rmem_max=134217728
      - net.core.wmem_max=134217728
    deploy:
      resources:
        limits:
          cpus: '2.0'
          memory: 4G
        reservations:
          cpus: '1.0'
          memory: 2G

  # Redis with performance optimization
  redis:
    image: redis:7-alpine
    networks:
      - backend
    volumes:
      - redis-data:/data
    command: |
      redis-server
      --maxmemory 2gb
      --maxmemory-policy allkeys-lru
      --tcp-backlog 511
      --tcp-keepalive 300
      --save 900 1
      --save 300 10
      --save 60 10000
    sysctls:
      - net.core.somaxconn=65535

  # PostgreSQL with performance tuning
  postgres:
    image: postgres:13
    networks:
      - backend
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - postgres-wal:/var/lib/postgresql/wal
      - type: tmpfs
        target: /tmp
        tmpfs:
          size: 2G
    environment:
      - POSTGRES_INITDB_WALDIR=/var/lib/postgresql/wal
    command: |
      postgres
      -c shared_buffers=1GB
      -c effective_cache_size=3GB
      -c maintenance_work_mem=256MB
      -c checkpoint_completion_target=0.9
      -c wal_buffers=16MB
      -c default_statistics_target=100
      -c random_page_cost=1.1
      -c effective_io_concurrency=200
      -c work_mem=4MB
      -c min_wal_size=1GB
      -c max_wal_size=4GB

networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true

volumes:
  nginx-cache:
  app-data:
  redis-data:
  postgres-data:
    driver: local
    driver_opts:
      # NOTE: the local driver also needs a "device" opt (a block device
      # or path) for the type/o mount options to take effect
      type: ext4
      o: noatime,nodiratime
  postgres-wal:
    driver: local
    driver_opts:
      type: ext4
      o: noatime,nodiratime,sync
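The PostgreSQL flags above follow common sizing rules of thumb (not official defaults): shared_buffers at roughly 25% of host RAM and effective_cache_size at roughly 75%, which yields the 1GB/3GB values shown for a 4GB host. A tiny sketch of that calculation:

```go
package main

import "fmt"

// pgMemorySettings applies the common 25%/75% rules of thumb for
// shared_buffers and effective_cache_size given total host RAM in MB.
func pgMemorySettings(totalRAMMB int) (sharedBuffersMB, effectiveCacheMB int) {
	return totalRAMMB / 4, totalRAMMB * 3 / 4
}

func main() {
	sb, ec := pgMemorySettings(4096) // 4GB host
	fmt.Printf("-c shared_buffers=%dMB -c effective_cache_size=%dMB\n", sb, ec)
	// -c shared_buffers=1024MB -c effective_cache_size=3072MB
}
```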

Summary

This section demonstrated practical networking and storage applications:

Microservices Architecture

  • Service Isolation: Network segmentation for different service tiers
  • Database Separation: Isolated networks for each service’s database
  • Shared Services: Common infrastructure like Redis and message queues

High Availability

  • Database Clustering: Master-slave replication with connection pooling
  • Load Balancing: HAProxy for database and application load distribution
  • Distributed Storage: GlusterFS and Ceph for scalable storage solutions

Security Implementation

  • Zero-Trust Networks: mTLS and certificate management
  • Network Policies: Traffic control and access restrictions
  • Encryption: TLS termination and secure communications

Performance Optimization

  • Resource Tuning: Kernel parameters and system limits
  • Storage Optimization: Separate volumes for different data types
  • Caching Strategies: Multiple caching layers for improved performance

Next Steps: Part 4 covers advanced techniques including custom network plugins, storage drivers, and enterprise-grade networking solutions.

Advanced Techniques and Patterns

Advanced Networking and Storage Techniques

This section explores sophisticated Docker networking and storage patterns including custom plugins, software-defined networking, and enterprise-grade storage solutions.

Custom Network Plugins

CNI Plugin Development

// cni-plugin/main.go
package main

import (
    "encoding/json"
    "fmt"
    "net"

    "github.com/containernetworking/cni/pkg/skel"
    "github.com/containernetworking/cni/pkg/types"
    "github.com/containernetworking/cni/pkg/version"
)

type NetConf struct {
    types.NetConf
    Bridge   string `json:"bridge"`
    Subnet   string `json:"subnet"`
    Gateway  string `json:"gateway"`
    IPAM     struct {
        Type   string `json:"type"`
        Subnet string `json:"subnet"`
    } `json:"ipam"`
}

func cmdAdd(args *skel.CmdArgs) error {
    conf := NetConf{}
    if err := json.Unmarshal(args.StdinData, &conf); err != nil {
        return fmt.Errorf("failed to parse network configuration: %v", err)
    }
    
    // Create bridge if it doesn't exist
    if err := createBridge(conf.Bridge); err != nil {
        return fmt.Errorf("failed to create bridge: %v", err)
    }
    
    // Allocate IP address
    ip, err := allocateIP(conf.Subnet)
    if err != nil {
        return fmt.Errorf("failed to allocate IP: %v", err)
    }
    
    // Create veth pair
    hostVeth, containerVeth, err := createVethPair(args.ContainerID)
    if err != nil {
        return fmt.Errorf("failed to create veth pair: %v", err)
    }
    
    // Attach host veth to bridge
    if err := attachToBridge(hostVeth, conf.Bridge); err != nil {
        return fmt.Errorf("failed to attach to bridge: %v", err)
    }
    
    // Move container veth to container namespace
    if err := moveToNamespace(containerVeth, args.Netns); err != nil {
        return fmt.Errorf("failed to move to namespace: %v", err)
    }
    
    // Configure container interface
    if err := configureInterface(containerVeth, ip, conf.Gateway, args.Netns); err != nil {
        return fmt.Errorf("failed to configure interface: %v", err)
    }
    
    // Return result (this uses the legacy 0.2.0 result struct; newer CNI
    // releases return a versioned current.Result instead)
    result := &types.Result{
        IP4: &types.IPConfig{
            IP:      net.IPNet{IP: ip, Mask: net.CIDRMask(24, 32)},
            Gateway: net.ParseIP(conf.Gateway),
        },
    }
    
    return result.Print()
}

func cmdDel(args *skel.CmdArgs) error {
    // Cleanup network resources
    return deleteVethPair(args.ContainerID)
}

func main() {
    // Older CNI releases accept this two-handler form; newer ones also
    // expect a CHECK handler (see skel.PluginMainFuncs).
    skel.PluginMain(cmdAdd, cmdDel, version.All)
}

// Helper functions
func createBridge(name string) error {
    // Implementation for bridge creation
    return nil
}

func allocateIP(subnet string) (net.IP, error) {
    // Implementation for IP allocation
    return net.ParseIP("192.168.1.100"), nil
}

func createVethPair(containerID string) (string, string, error) {
    // Implementation for veth pair creation
    hostVeth := fmt.Sprintf("veth%s", containerID[:8])
    containerVeth := fmt.Sprintf("eth%s", containerID[:8])
    return hostVeth, containerVeth, nil
}

func attachToBridge(veth, bridge string) error {
    // Implementation for bridge attachment
    return nil
}

func moveToNamespace(veth, netns string) error {
    // Implementation for namespace movement
    return nil
}

func configureInterface(iface string, ip net.IP, gateway string, netns string) error {
    // Implementation for interface configuration
    return nil
}

func deleteVethPair(containerID string) error {
    // Implementation for cleanup
    return nil
}
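The allocateIP stub above just returns a fixed address. A minimal working sketch (standard library only, skipping the network address and the conventional .1 gateway) could look like this:

```go
package main

import (
	"fmt"
	"net"
)

// nextIP returns the first free host address in the subnet that is not
// already in the used set.
func nextIP(subnet string, used map[string]bool) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(subnet)
	if err != nil {
		return nil, err
	}
	ip := make(net.IP, len(ipnet.IP))
	copy(ip, ipnet.IP)
	incIP(ip) // skip the network address
	incIP(ip) // skip the conventional .1 gateway
	for ipnet.Contains(ip) {
		if !used[ip.String()] {
			return ip, nil
		}
		incIP(ip)
	}
	return nil, fmt.Errorf("subnet %s exhausted", subnet)
}

// incIP increments an IP address in place, carrying across octets.
func incIP(ip net.IP) {
	for i := len(ip) - 1; i >= 0; i-- {
		ip[i]++
		if ip[i] != 0 {
			break
		}
	}
}

func main() {
	used := map[string]bool{"192.168.1.2": true}
	ip, _ := nextIP("192.168.1.0/24", used)
	fmt.Println(ip) // 192.168.1.3
}
```

A production allocator would also persist the used set (CNI plugins are short-lived processes) and exclude the broadcast address.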

Docker Network Plugin

// docker-plugin/main.go
package main

import (
    "fmt"
    "net/http"

    "github.com/docker/go-plugins-helpers/network"
)

type CustomDriver struct {
    networks map[string]*NetworkState
}

type NetworkState struct {
    ID       string
    Name     string
    Subnet   string
    Gateway  string
    Bridge   string
}

func (d *CustomDriver) GetCapabilities() (*network.CapabilitiesResponse, error) {
    return &network.CapabilitiesResponse{
        Scope:             network.LocalScope,
        ConnectivityScope: network.LocalScope,
    }, nil
}

func (d *CustomDriver) CreateNetwork(req *network.CreateNetworkRequest) error {
    // Parse network options
    subnet := req.Options["subnet"]
    gateway := req.Options["gateway"]
    bridge := fmt.Sprintf("br-%s", req.NetworkID[:12])
    
    // Create network state
    d.networks[req.NetworkID] = &NetworkState{
        ID:      req.NetworkID,
        Name:    req.NetworkID,
        Subnet:  subnet,
        Gateway: gateway,
        Bridge:  bridge,
    }
    
    // Create actual network infrastructure
    return d.createNetworkInfrastructure(d.networks[req.NetworkID])
}

func (d *CustomDriver) DeleteNetwork(req *network.DeleteNetworkRequest) error {
    state, exists := d.networks[req.NetworkID]
    if !exists {
        return fmt.Errorf("network %s not found", req.NetworkID)
    }
    
    // Cleanup network infrastructure
    if err := d.deleteNetworkInfrastructure(state); err != nil {
        return err
    }
    
    delete(d.networks, req.NetworkID)
    return nil
}

func (d *CustomDriver) CreateEndpoint(req *network.CreateEndpointRequest) (*network.CreateEndpointResponse, error) {
    state, exists := d.networks[req.NetworkID]
    if !exists {
        return nil, fmt.Errorf("network %s not found", req.NetworkID)
    }
    
    // Create endpoint (veth pair, IP allocation, etc.)
    ip, err := d.allocateIP(state.Subnet)
    if err != nil {
        return nil, err
    }
    
    return &network.CreateEndpointResponse{
        Interface: &network.EndpointInterface{
            Address:    ip,
            MacAddress: generateMAC(),
        },
    }, nil
}

func (d *CustomDriver) Join(req *network.JoinRequest) (*network.JoinResponse, error) {
    state, exists := d.networks[req.NetworkID]
    if !exists {
        return nil, fmt.Errorf("network %s not found", req.NetworkID)
    }
    
    // Create veth pair and configure
    hostVeth, containerVeth, err := d.createVethPair(req.EndpointID)
    if err != nil {
        return nil, err
    }
    
    // Attach to bridge
    if err := d.attachToBridge(hostVeth, state.Bridge); err != nil {
        return nil, err
    }
    
    return &network.JoinResponse{
        InterfaceName: network.InterfaceName{
            SrcName:   containerVeth,
            DstPrefix: "eth",
        },
        Gateway: state.Gateway,
    }, nil
}

func (d *CustomDriver) Leave(req *network.LeaveRequest) error {
    // Cleanup endpoint resources
    return d.deleteVethPair(req.EndpointID)
}

func (d *CustomDriver) DeleteEndpoint(req *network.DeleteEndpointRequest) error {
    // Cleanup endpoint state
    return nil
}

func main() {
    driver := &CustomDriver{
        networks: make(map[string]*NetworkState),
    }
    
    // Production plugins are normally discovered via a unix socket under
    // /run/docker/plugins/; a TCP listener keeps this example simple.
    handler := network.NewHandler(driver)
    if err := http.ListenAndServe(":8080", handler); err != nil {
        panic(err)
    }
}

// Helper methods
func (d *CustomDriver) createNetworkInfrastructure(state *NetworkState) error {
    // Implementation for creating bridges, iptables rules, etc.
    return nil
}

func (d *CustomDriver) deleteNetworkInfrastructure(state *NetworkState) error {
    // Implementation for cleanup
    return nil
}

func (d *CustomDriver) allocateIP(subnet string) (string, error) {
    // Implementation for IP allocation
    return "192.168.1.100/24", nil
}

func (d *CustomDriver) createVethPair(endpointID string) (string, string, error) {
    // Implementation for veth pair creation
    return "veth" + endpointID[:8], "eth" + endpointID[:8], nil
}

func (d *CustomDriver) attachToBridge(veth, bridge string) error {
    // Implementation for bridge attachment
    return nil
}

func (d *CustomDriver) deleteVethPair(endpointID string) error {
    // Implementation for cleanup
    return nil
}

func generateMAC() string {
    // Implementation for MAC address generation
    return "02:42:ac:11:00:02"
}
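The generateMAC stub above returns a hard-coded address, which collides as soon as a second endpoint joins. As a sketch, a driver could generate a random locally-administered unicast MAC instead:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// randomMAC returns six random bytes formatted as a MAC address, with
// the locally-administered bit set and the multicast bit cleared so the
// result never collides with vendor-assigned hardware addresses.
func randomMAC() (string, error) {
	buf := make([]byte, 6)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	buf[0] = (buf[0] | 0x02) &^ 0x01 // locally administered, unicast
	return fmt.Sprintf("%02x:%02x:%02x:%02x:%02x:%02x",
		buf[0], buf[1], buf[2], buf[3], buf[4], buf[5]), nil
}

func main() {
	mac, err := randomMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println(mac)
}
```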

Software-Defined Networking

OpenVSwitch Integration

# docker-compose.ovs.yml
version: '3.8'

services:
  # OpenVSwitch Database
  ovs-db:
    image: openvswitch/ovs:latest
    privileged: true
    networks:
      - ovs-control
    volumes:
      - ovs-db-data:/var/lib/openvswitch
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    command: |
      sh -c "
        ovsdb-server --remote=punix:/var/run/openvswitch/db.sock \
                     --remote=db:Open_vSwitch,Open_vSwitch,manager_options \
                     --pidfile --detach
        ovs-vsctl --no-wait init
        tail -f /dev/null
      "

  # OpenVSwitch Daemon
  ovs-vswitchd:
    image: openvswitch/ovs:latest
    privileged: true
    networks:
      - ovs-control
    volumes:
      - ovs-db-data:/var/lib/openvswitch
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    depends_on:
      - ovs-db
    command: |
      sh -c "
        ovs-vswitchd --pidfile --detach
        tail -f /dev/null
      "

  # SDN Controller (Floodlight)
  sdn-controller:
    image: floodlight/floodlight:latest
    ports:
      - "8080:8080"
      - "6653:6653"
    networks:
      - ovs-control
    volumes:
      - ./floodlight/floodlightdefault.properties:/opt/floodlight/src/main/resources/floodlightdefault.properties:ro

  # Application containers using OVS
  app1:
    image: alpine
    networks:
      - ovs-network
    command: sleep 3600

  app2:
    image: alpine
    networks:
      - ovs-network
    command: sleep 3600

networks:
  ovs-control:
    driver: bridge
  ovs-network:
    # "ovs" is not a built-in Docker driver; this requires an
    # OpenVSwitch network plugin to be installed on the host
    driver: ovs
    driver_opts:
      ovs.bridge.name: br-ovs
      ovs.bridge.controller: tcp:sdn-controller:6653

volumes:
  ovs-db-data:

Network Function Virtualization

# docker-compose.nfv.yml
version: '3.8'

services:
  # Virtual Firewall
  virtual-firewall:
    build: ./nfv/firewall
    privileged: true
    networks:
      - nfv-mgmt
      - nfv-data
    volumes:
      - ./firewall/rules.conf:/etc/firewall/rules.conf:ro
    environment:
      - INTERFACES=eth0,eth1
      - RULES_FILE=/etc/firewall/rules.conf

  # Virtual Load Balancer
  virtual-lb:
    build: ./nfv/loadbalancer
    networks:
      - nfv-mgmt
      - nfv-data
    volumes:
      - ./lb/haproxy.cfg:/etc/haproxy/haproxy.cfg:ro
    ports:
      - "80:80"
      - "443:443"

  # Virtual Router
  virtual-router:
    build: ./nfv/router
    privileged: true
    networks:
      - nfv-mgmt
      - nfv-data
      - external
    volumes:
      - ./router/quagga.conf:/etc/quagga/quagga.conf:ro
    sysctls:
      - net.ipv4.ip_forward=1

  # Network Monitoring
  network-monitor:
    image: prom/prometheus
    networks:
      - nfv-mgmt
    volumes:
      - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus-data:/prometheus

  # Service Orchestrator
  nfv-orchestrator:
    build: ./nfv/orchestrator
    networks:
      - nfv-mgmt
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./orchestrator/config.yaml:/etc/orchestrator/config.yaml:ro
    environment:
      - DOCKER_HOST=unix:///var/run/docker.sock

networks:
  nfv-mgmt:
    driver: bridge
    ipam:
      config:
        - subnet: 10.0.1.0/24
  nfv-data:
    driver: bridge
    ipam:
      config:
        - subnet: 10.0.2.0/24
  external:
    driver: bridge

volumes:
  prometheus-data:

Advanced Storage Drivers

Custom Storage Plugin

// storage-plugin/main.go
package main

import (
    "fmt"
    "net/http"
    "os"
    "path/filepath"

    "github.com/docker/go-plugins-helpers/volume"
)

type CustomVolumeDriver struct {
    volumes map[string]*VolumeState
    root    string
}

type VolumeState struct {
    Name       string
    Path       string
    Options    map[string]string
    Mountpoint string
}

func (d *CustomVolumeDriver) Create(req *volume.CreateRequest) error {
    // Parse options (a "size" option could also be honored here when
    // provisioning backing storage)
    volumeType := req.Options["type"]
    encryption := req.Options["encryption"]
    
    // Create volume directory
    volumePath := filepath.Join(d.root, req.Name)
    if err := os.MkdirAll(volumePath, 0755); err != nil {
        return err
    }
    
    // Apply volume-specific configuration
    switch volumeType {
    case "encrypted":
        if err := d.setupEncryption(volumePath, encryption); err != nil {
            return err
        }
    case "compressed":
        if err := d.setupCompression(volumePath); err != nil {
            return err
        }
    case "replicated":
        if err := d.setupReplication(volumePath, req.Options); err != nil {
            return err
        }
    }
    
    // Store volume state
    d.volumes[req.Name] = &VolumeState{
        Name:    req.Name,
        Path:    volumePath,
        Options: req.Options,
    }
    
    return nil
}

func (d *CustomVolumeDriver) Remove(req *volume.RemoveRequest) error {
    state, exists := d.volumes[req.Name]
    if !exists {
        return fmt.Errorf("volume %s not found", req.Name)
    }
    
    // Cleanup volume resources
    if err := d.cleanupVolume(state); err != nil {
        return err
    }
    
    // Remove directory
    if err := os.RemoveAll(state.Path); err != nil {
        return err
    }
    
    delete(d.volumes, req.Name)
    return nil
}

func (d *CustomVolumeDriver) Mount(req *volume.MountRequest) (*volume.MountResponse, error) {
    state, exists := d.volumes[req.Name]
    if !exists {
        return nil, fmt.Errorf("volume %s not found", req.Name)
    }
    
    // Prepare mount point
    mountpoint := filepath.Join("/mnt", req.Name)
    if err := os.MkdirAll(mountpoint, 0755); err != nil {
        return nil, err
    }
    
    // Mount based on volume type
    if err := d.mountVolume(state, mountpoint); err != nil {
        return nil, err
    }
    
    state.Mountpoint = mountpoint
    return &volume.MountResponse{Mountpoint: mountpoint}, nil
}

func (d *CustomVolumeDriver) Unmount(req *volume.UnmountRequest) error {
    state, exists := d.volumes[req.Name]
    if !exists {
        return fmt.Errorf("volume %s not found", req.Name)
    }
    
    // Unmount volume
    if err := d.unmountVolume(state); err != nil {
        return err
    }
    
    state.Mountpoint = ""
    return nil
}

func (d *CustomVolumeDriver) Path(req *volume.PathRequest) (*volume.PathResponse, error) {
    state, exists := d.volumes[req.Name]
    if !exists {
        return nil, fmt.Errorf("volume %s not found", req.Name)
    }
    
    return &volume.PathResponse{Mountpoint: state.Mountpoint}, nil
}

func (d *CustomVolumeDriver) Get(req *volume.GetRequest) (*volume.GetResponse, error) {
    state, exists := d.volumes[req.Name]
    if !exists {
        return nil, fmt.Errorf("volume %s not found", req.Name)
    }
    
    return &volume.GetResponse{
        Volume: &volume.Volume{
            Name:       state.Name,
            Mountpoint: state.Mountpoint,
        },
    }, nil
}

func (d *CustomVolumeDriver) List() (*volume.ListResponse, error) {
    var volumes []*volume.Volume
    
    for _, state := range d.volumes {
        volumes = append(volumes, &volume.Volume{
            Name:       state.Name,
            Mountpoint: state.Mountpoint,
        })
    }
    
    return &volume.ListResponse{Volumes: volumes}, nil
}

func (d *CustomVolumeDriver) Capabilities() *volume.CapabilitiesResponse {
    return &volume.CapabilitiesResponse{
        Capabilities: volume.Capability{Scope: "local"},
    }
}

// Helper methods
func (d *CustomVolumeDriver) setupEncryption(path, algorithm string) error {
    // Implementation for encryption setup
    return nil
}

func (d *CustomVolumeDriver) setupCompression(path string) error {
    // Implementation for compression setup
    return nil
}

func (d *CustomVolumeDriver) setupReplication(path string, options map[string]string) error {
    // Implementation for replication setup
    return nil
}

func (d *CustomVolumeDriver) cleanupVolume(state *VolumeState) error {
    // Implementation for volume cleanup
    return nil
}

func (d *CustomVolumeDriver) mountVolume(state *VolumeState, mountpoint string) error {
    // Implementation for volume mounting
    return nil
}

func (d *CustomVolumeDriver) unmountVolume(state *VolumeState) error {
    // Implementation for volume unmounting
    return nil
}

func main() {
    driver := &CustomVolumeDriver{
        volumes: make(map[string]*VolumeState),
        root:    "/var/lib/custom-volumes",
    }
    
    // As with network plugins, production volume drivers usually serve
    // on a unix socket under /run/docker/plugins/.
    handler := volume.NewHandler(driver)
    if err := http.ListenAndServe(":8080", handler); err != nil {
        panic(err)
    }
}
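One caveat with the driver above: the HTTP handler serves each request on its own goroutine, so the volumes map needs synchronization in practice. A minimal sketch of a lock-guarded registry the driver could embed (the names here are illustrative, not part of the plugin API):

```go
package main

import (
	"fmt"
	"sync"
)

// volumeRegistry guards volume state against concurrent plugin requests.
type volumeRegistry struct {
	mu      sync.RWMutex
	volumes map[string]string // name -> mountpoint
}

func newVolumeRegistry() *volumeRegistry {
	return &volumeRegistry{volumes: make(map[string]string)}
}

func (r *volumeRegistry) put(name, mountpoint string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.volumes[name] = mountpoint
}

func (r *volumeRegistry) get(name string) (string, bool) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	mp, ok := r.volumes[name]
	return mp, ok
}

func main() {
	reg := newVolumeRegistry()
	reg.put("data", "/mnt/data")
	mp, ok := reg.get("data")
	fmt.Println(mp, ok)
}
```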

Container Storage Interface (CSI)

CSI Driver Implementation

// csi-driver/main.go
package main

import (
    "context"
    "fmt"
    "net"
    "time"
    
    "github.com/container-storage-interface/spec/lib/go/csi"
    "google.golang.org/grpc"
)

type CSIDriver struct {
    name    string
    version string
    nodeID  string
}

// Identity Service
func (d *CSIDriver) GetPluginInfo(ctx context.Context, req *csi.GetPluginInfoRequest) (*csi.GetPluginInfoResponse, error) {
    return &csi.GetPluginInfoResponse{
        Name:          d.name,
        VendorVersion: d.version,
    }, nil
}

func (d *CSIDriver) GetPluginCapabilities(ctx context.Context, req *csi.GetPluginCapabilitiesRequest) (*csi.GetPluginCapabilitiesResponse, error) {
    return &csi.GetPluginCapabilitiesResponse{
        Capabilities: []*csi.PluginCapability{
            {
                Type: &csi.PluginCapability_Service_{
                    Service: &csi.PluginCapability_Service{
                        Type: csi.PluginCapability_Service_CONTROLLER_SERVICE,
                    },
                },
            },
        },
    }, nil
}

func (d *CSIDriver) Probe(ctx context.Context, req *csi.ProbeRequest) (*csi.ProbeResponse, error) {
    return &csi.ProbeResponse{}, nil
}

// Controller Service
func (d *CSIDriver) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
    volumeID := generateVolumeID()
    
    // Create volume based on parameters
    parameters := req.GetParameters()
    volumeType := parameters["type"]
    size := req.GetCapacityRange().GetRequiredBytes()
    
    volume, err := d.createVolumeBackend(volumeID, volumeType, size, parameters)
    if err != nil {
        return nil, err
    }
    
    return &csi.CreateVolumeResponse{
        Volume: &csi.Volume{
            VolumeId:      volumeID,
            CapacityBytes: size,
            VolumeContext: volume.Context,
        },
    }, nil
}

func (d *CSIDriver) DeleteVolume(ctx context.Context, req *csi.DeleteVolumeRequest) (*csi.DeleteVolumeResponse, error) {
    volumeID := req.GetVolumeId()
    
    if err := d.deleteVolumeBackend(volumeID); err != nil {
        return nil, err
    }
    
    return &csi.DeleteVolumeResponse{}, nil
}

func (d *CSIDriver) ControllerPublishVolume(ctx context.Context, req *csi.ControllerPublishVolumeRequest) (*csi.ControllerPublishVolumeResponse, error) {
    volumeID := req.GetVolumeId()
    nodeID := req.GetNodeId()
    
    publishContext, err := d.attachVolumeToNode(volumeID, nodeID)
    if err != nil {
        return nil, err
    }
    
    return &csi.ControllerPublishVolumeResponse{
        PublishContext: publishContext,
    }, nil
}

// Node Service
func (d *CSIDriver) NodeStageVolume(ctx context.Context, req *csi.NodeStageVolumeRequest) (*csi.NodeStageVolumeResponse, error) {
    volumeID := req.GetVolumeId()
    stagingPath := req.GetStagingTargetPath()
    
    if err := d.stageVolume(volumeID, stagingPath, req.GetPublishContext()); err != nil {
        return nil, err
    }
    
    return &csi.NodeStageVolumeResponse{}, nil
}

func (d *CSIDriver) NodePublishVolume(ctx context.Context, req *csi.NodePublishVolumeRequest) (*csi.NodePublishVolumeResponse, error) {
    volumeID := req.GetVolumeId()
    targetPath := req.GetTargetPath()
    stagingPath := req.GetStagingTargetPath()
    
    if err := d.publishVolume(volumeID, stagingPath, targetPath); err != nil {
        return nil, err
    }
    
    return &csi.NodePublishVolumeResponse{}, nil
}

func (d *CSIDriver) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
    return &csi.NodeGetCapabilitiesResponse{
        Capabilities: []*csi.NodeServiceCapability{
            {
                Type: &csi.NodeServiceCapability_Rpc{
                    Rpc: &csi.NodeServiceCapability_RPC{
                        Type: csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME,
                    },
                },
            },
        },
    }, nil
}

func (d *CSIDriver) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (*csi.NodeGetInfoResponse, error) {
    return &csi.NodeGetInfoResponse{
        NodeId: d.nodeID,
    }, nil
}

// Helper methods
func (d *CSIDriver) createVolumeBackend(volumeID, volumeType string, size int64, parameters map[string]string) (*VolumeInfo, error) {
    // Implementation for volume creation
    return &VolumeInfo{
        ID:      volumeID,
        Context: map[string]string{"type": volumeType},
    }, nil
}

func (d *CSIDriver) deleteVolumeBackend(volumeID string) error {
    // Implementation for volume deletion
    return nil
}

func (d *CSIDriver) attachVolumeToNode(volumeID, nodeID string) (map[string]string, error) {
    // Implementation for volume attachment
    return map[string]string{"device": "/dev/sdb"}, nil
}

func (d *CSIDriver) stageVolume(volumeID, stagingPath string, publishContext map[string]string) error {
    // Implementation for volume staging
    return nil
}

func (d *CSIDriver) publishVolume(volumeID, stagingPath, targetPath string) error {
    // Implementation for volume publishing
    return nil
}

type VolumeInfo struct {
    ID      string
    Context map[string]string
}

func generateVolumeID() string {
    return fmt.Sprintf("vol-%d", time.Now().Unix())
}

func main() {
    driver := &CSIDriver{
        name:    "custom.csi.driver",
        version: "1.0.0",
        nodeID:  "node-1",
    }
    
    listener, err := net.Listen("unix", "/tmp/csi.sock")
    if err != nil {
        panic(err)
    }
    
    server := grpc.NewServer()
    csi.RegisterIdentityServer(server, driver)
    csi.RegisterControllerServer(server, driver)
    csi.RegisterNodeServer(server, driver)
    
    if err := server.Serve(listener); err != nil {
        panic(err)
    }
}
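Note that generateVolumeID above derives IDs from the wall clock, so two volumes created within the same second would collide. A random suffix is a safer sketch:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// randomVolumeID returns a collision-resistant volume ID with 64 bits
// of randomness, e.g. "vol-a1b2c3d4e5f60718".
func randomVolumeID() string {
	buf := make([]byte, 8)
	if _, err := rand.Read(buf); err != nil {
		panic(err) // crypto/rand failing is unrecoverable here
	}
	return "vol-" + hex.EncodeToString(buf)
}

func main() {
	fmt.Println(randomVolumeID())
}
```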

Summary

This section covered advanced networking and storage techniques:

Custom Network Solutions

  • CNI Plugins: Container Network Interface plugin development
  • Docker Network Plugins: Custom network drivers for Docker
  • Software-Defined Networking: OpenVSwitch and SDN controller integration
  • Network Function Virtualization: Virtual firewalls, load balancers, and routers

Advanced Storage Systems

  • Custom Volume Drivers: Docker volume plugin development with encryption and replication
  • CSI Implementation: Container Storage Interface driver for Kubernetes integration
  • Storage Orchestration: Automated storage provisioning and management

Enterprise Patterns

  • Plugin Architecture: Extensible networking and storage solutions
  • Service Orchestration: Automated network function deployment
  • Performance Optimization: Advanced tuning and monitoring capabilities
  • Security Integration: Encryption, access control, and compliance features

Next Steps: Part 5 demonstrates complete production implementations combining all these advanced techniques into enterprise-ready networking and storage solutions.

Best Practices and Optimization

Docker Networking and Storage: Best Practices and Optimization

This final section demonstrates production-ready networking and storage implementations, combining security, performance, and operational excellence into comprehensive enterprise solutions.

Production Network Architecture

Enterprise Multi-Tier Network Design

# docker-compose.enterprise-network.yml
version: '3.8'

services:
  # DMZ Layer - External Access
  edge-proxy:
    image: traefik:v2.9
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    networks:
      - dmz
      - monitoring
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik/traefik.yml:/etc/traefik/traefik.yml:ro
      - ./traefik/dynamic:/etc/traefik/dynamic:ro
      - traefik_certs:/certs
    environment:
      - TRAEFIK_CERTIFICATESRESOLVERS_LETSENCRYPT_ACME_EMAIL=admin@example.com
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.dashboard.rule=Host(`traefik.example.com`)"

  # Web Tier - Application Frontend
  web-frontend:
    image: nginx:alpine
    networks:
      - dmz
      - web-tier
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
      - web-logs:/var/log/nginx
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.web.rule=Host(`app.example.com`)"
      - "traefik.http.routers.web.tls.certresolver=letsencrypt"
    deploy:
      replicas: 3
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M

  # Application Tier - Business Logic
  app-backend:
    build: ./backend
    networks:
      - web-tier
      - app-tier
    volumes:
      - app-data:/app/data
      - app-logs:/app/logs
      - app-config:/app/config:ro
    environment:
      - DATABASE_URL=postgresql://app:${DB_PASSWORD}@db-primary:5432/appdb
      - REDIS_URL=redis://redis-cluster:6379/0
      - LOG_LEVEL=info
    secrets:
      - db_password
      - jwt_secret
    deploy:
      replicas: 5
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 1G

  # Data Tier - Database Services
  db-primary:
    image: postgres:14
    networks:
      - app-tier
      - db-tier
    volumes:
      - postgres-primary-data:/var/lib/postgresql/data
      - postgres-primary-wal:/var/lib/postgresql/wal
      - postgres-config:/etc/postgresql:ro
    environment:
      - POSTGRES_DB=appdb
      - POSTGRES_USER=app
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password
      - POSTGRES_REPLICATION_MODE=master
      - POSTGRES_REPLICATION_USER=replicator
      - POSTGRES_REPLICATION_PASSWORD_FILE=/run/secrets/replication_password
    secrets:
      - db_password
      - replication_password
    command: |
      postgres
      -c config_file=/etc/postgresql/postgresql.conf
      -c hba_file=/etc/postgresql/pg_hba.conf

  db-replica:
    image: postgres:14
    networks:
      - db-tier
    volumes:
      - postgres-replica-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_REPLICATION_MODE=slave
      - POSTGRES_MASTER_HOST=db-primary
      - POSTGRES_REPLICATION_USER=replicator
      - POSTGRES_REPLICATION_PASSWORD_FILE=/run/secrets/replication_password
    secrets:
      - replication_password
    depends_on:
      - db-primary

  # Cache Tier
  redis-cluster:
    image: redis:7-alpine
    networks:
      - app-tier
    volumes:
      - redis-data:/data
    command: |
      redis-server
      --cluster-enabled yes
      --cluster-config-file nodes.conf
      --cluster-node-timeout 5000
      --appendonly yes
      --maxmemory 4gb
      --maxmemory-policy allkeys-lru

  # Message Queue
  rabbitmq:
    image: rabbitmq:3-management
    networks:
      - app-tier
    volumes:
      - rabbitmq-data:/var/lib/rabbitmq
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS_FILE=/run/secrets/rabbitmq_password
    secrets:
      - rabbitmq_password

  # Monitoring and Logging
  prometheus:
    image: prom/prometheus:latest
    networks:
      - monitoring
    volumes:
      - prometheus-data:/prometheus
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.retention.time=30d'

  grafana:
    image: grafana/grafana:latest
    networks:
      - monitoring
    volumes:
      - grafana-data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning:ro
    environment:
      - GF_SECURITY_ADMIN_PASSWORD_FILE=/run/secrets/grafana_password
    secrets:
      - grafana_password

networks:
  # DMZ - External facing services
  dmz:
    driver: bridge
    ipam:
      config:
        - subnet: 172.30.1.0/24
          gateway: 172.30.1.1
    driver_opts:
      com.docker.network.bridge.name: dmz-bridge
      com.docker.network.bridge.enable_icc: "false"

  # Web Tier - Frontend services
  web-tier:
    driver: bridge
    ipam:
      config:
        - subnet: 172.30.2.0/24
    driver_opts:
      com.docker.network.bridge.enable_icc: "true"

  # Application Tier - Business logic
  app-tier:
    driver: bridge
    internal: true
    ipam:
      config:
        - subnet: 172.30.3.0/24

  # Database Tier - Data services
  db-tier:
    driver: bridge
    internal: true
    ipam:
      config:
        - subnet: 172.30.4.0/24

  # Monitoring - Observability services
  monitoring:
    driver: bridge
    ipam:
      config:
        - subnet: 172.30.5.0/24

volumes:
  traefik_certs:
  web-logs:
  app-data:
  app-logs:
  app-config:
  postgres-primary-data:
    driver: local
    driver_opts:
      type: ext4
      device: /dev/sdb1   # placeholder; the local driver needs a block device for filesystem mounts
      o: noatime,nodiratime
  postgres-primary-wal:
    driver: local
    driver_opts:
      type: ext4
      device: /dev/sdb2   # placeholder
      o: noatime,sync
  postgres-replica-data:
  postgres-config:
  redis-data:
  rabbitmq-data:
  prometheus-data:
  grafana-data:

secrets:
  db_password:
    external: true
  replication_password:
    external: true
  jwt_secret:
    external: true
  rabbitmq_password:
    external: true
  grafana_password:
    external: true
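A quick way to sanity-check the segmentation above is to confirm that the five tier subnets are disjoint /24s inside 172.30.0.0/16. A minimal sketch using the standard ipaddress module (subnet names and CIDRs taken from the compose file):

```python
import ipaddress
from itertools import combinations

# Tier subnets from the compose file above
subnets = {
    "dmz": "172.30.1.0/24",
    "web-tier": "172.30.2.0/24",
    "app-tier": "172.30.3.0/24",
    "db-tier": "172.30.4.0/24",
    "monitoring": "172.30.5.0/24",
}

parent = ipaddress.ip_network("172.30.0.0/16")
nets = {name: ipaddress.ip_network(cidr) for name, cidr in subnets.items()}

# Every tier must sit inside the parent range...
assert all(net.subnet_of(parent) for net in nets.values())

# ...and no two tiers may overlap
for (a, na), (b, nb) in combinations(nets.items(), 2):
    assert not na.overlaps(nb), f"{a} overlaps {b}"

print("all tier subnets are disjoint")
```

Running a check like this in CI catches subnet collisions before `docker compose up` silently creates an unreachable tier.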

High-Performance Storage Architecture

Enterprise Storage Solution

# docker-compose.enterprise-storage.yml
version: '3.8'

services:
  # Storage Controller
  storage-controller:
    build: ./storage-controller
    privileged: true
    networks:
      - storage-mgmt
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - storage-controller-data:/var/lib/controller
      - ./storage/config.yaml:/etc/storage/config.yaml:ro
    environment:
      - STORAGE_BACKEND=ceph
      - REPLICATION_FACTOR=3
      - ENCRYPTION_ENABLED=true

  # Ceph Monitor Cluster
  ceph-mon-1:
    image: ceph/ceph:latest
    networks:
      storage-cluster:
        ipv4_address: 172.31.1.10
    volumes:
      - ceph-mon-1-data:/var/lib/ceph/mon
      - ceph-config:/etc/ceph
    environment:
      - MON_IP=172.31.1.10
      - CEPH_PUBLIC_NETWORK=172.31.1.0/24
      - CEPH_CLUSTER_NETWORK=172.31.2.0/24
    command: mon

  ceph-mon-2:
    image: ceph/ceph:latest
    networks:
      storage-cluster:
        ipv4_address: 172.31.1.11
    volumes:
      - ceph-mon-2-data:/var/lib/ceph/mon
      - ceph-config:/etc/ceph
    environment:
      - MON_IP=172.31.1.11
      - CEPH_PUBLIC_NETWORK=172.31.1.0/24
    command: mon

  ceph-mon-3:
    image: ceph/ceph:latest
    networks:
      storage-cluster:
        ipv4_address: 172.31.1.12
    volumes:
      - ceph-mon-3-data:/var/lib/ceph/mon
      - ceph-config:/etc/ceph
    environment:
      - MON_IP=172.31.1.12
      - CEPH_PUBLIC_NETWORK=172.31.1.0/24
    command: mon

  # Ceph OSD Cluster
  ceph-osd-1:
    image: ceph/ceph:latest
    privileged: true
    networks:
      - storage-cluster
    volumes:
      - ceph-osd-1-data:/var/lib/ceph/osd
      - ceph-config:/etc/ceph
      - /dev:/dev
    environment:
      - OSD_DEVICE=/dev/sdb
      - OSD_TYPE=bluestore
    depends_on:
      - ceph-mon-1
    command: osd

  ceph-osd-2:
    image: ceph/ceph:latest
    privileged: true
    networks:
      - storage-cluster
    volumes:
      - ceph-osd-2-data:/var/lib/ceph/osd
      - ceph-config:/etc/ceph
      - /dev:/dev
    environment:
      - OSD_DEVICE=/dev/sdc
      - OSD_TYPE=bluestore
    depends_on:
      - ceph-mon-1
    command: osd

  ceph-osd-3:
    image: ceph/ceph:latest
    privileged: true
    networks:
      - storage-cluster
    volumes:
      - ceph-osd-3-data:/var/lib/ceph/osd
      - ceph-config:/etc/ceph
      - /dev:/dev
    environment:
      - OSD_DEVICE=/dev/sdd
      - OSD_TYPE=bluestore
    depends_on:
      - ceph-mon-1
    command: osd

  # Ceph Manager
  ceph-mgr:
    image: ceph/ceph:latest
    networks:
      - storage-cluster
      - storage-mgmt
    volumes:
      - ceph-mgr-data:/var/lib/ceph/mgr
      - ceph-config:/etc/ceph
    ports:
      - "8443:8443"  # Dashboard
    depends_on:
      - ceph-mon-1
      - ceph-mon-2
      - ceph-mon-3
    command: mgr

  # Storage Gateway (RBD/CephFS)
  ceph-gateway:
    image: ceph/ceph:latest
    networks:
      - storage-cluster
      - app-storage
    volumes:
      - ceph-config:/etc/ceph
    depends_on:
      - ceph-mgr
    command: |
      sh -c "
        rbd create --size 10G mypool/volume1
        rbd create --size 20G mypool/volume2
        ceph-fuse /mnt/cephfs
      "

  # Application using Ceph storage
  database:
    image: postgres:14
    networks:
      - app-storage
    volumes:
      # Compose does not accept a `driver:` under a service-level mount;
      # postgres-data is declared external below and must be pre-created
      # through an RBD volume plugin (e.g. a 50G image `postgres-data`
      # in pool `mypool`).
      - type: volume
        source: postgres-data
        target: /var/lib/postgresql/data
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password  # demo value; prefer POSTGRES_PASSWORD_FILE with a secret

networks:
  storage-mgmt:
    driver: bridge
  storage-cluster:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.1.0/24  # Public network
        - subnet: 172.31.2.0/24  # Cluster network
  app-storage:
    driver: bridge

volumes:
  storage-controller-data:
  ceph-mon-1-data:
  ceph-mon-2-data:
  ceph-mon-3-data:
  ceph-osd-1-data:
  ceph-osd-2-data:
  ceph-osd-3-data:
  ceph-mgr-data:
  ceph-config:
  postgres-data:
    external: true
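The three-monitor layout above follows Ceph's majority-quorum rule: with n monitors, the cluster stays available only while a strict majority is up, so three monitors tolerate exactly one failure. A small sketch of that arithmetic (the helper name is mine):

```python
def monitors_tolerated(n_monitors: int) -> int:
    """Number of monitor failures a Ceph cluster of n monitors survives.

    Quorum requires a strict majority: floor(n/2) + 1 monitors up.
    """
    quorum = n_monitors // 2 + 1
    return n_monitors - quorum

# The compose file runs 3 monitors: quorum is 2, so 1 failure is tolerated.
assert monitors_tolerated(3) == 1
# 5 monitors tolerate 2 failures; an even count gains nothing over n-1.
assert monitors_tolerated(5) == 2
assert monitors_tolerated(4) == 1
```

This is why monitor counts are kept odd: going from 3 to 4 monitors raises the quorum without raising fault tolerance.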

Security and Compliance Framework

Zero-Trust Network Implementation

#!/usr/bin/env python3
# network-security-manager.py

import docker
import json
import subprocess
from datetime import datetime
from typing import Dict, List
import yaml

class NetworkSecurityManager:
    def __init__(self, config_path: str):
        self.client = docker.from_env()
        with open(config_path, 'r') as f:
            self.config = yaml.safe_load(f)
    
    def apply_network_policies(self):
        """Apply network security policies"""
        for policy in self.config['network_policies']:
            self.create_network_policy(policy)
    
    def create_network_policy(self, policy: Dict):
        """Create iptables rules for network policy"""
        rules = []
        
        # Default deny all
        if policy.get('default_action') == 'deny':
            rules.append("iptables -P FORWARD DROP")
        
        # Allow specific traffic
        for rule in policy.get('allow_rules', []):
            source = rule['source']
            destination = rule['destination']
            port = rule.get('port', 'any')
            protocol = rule.get('protocol', 'tcp')
            
            if port == 'any':
                iptables_rule = f"iptables -A FORWARD -s {source} -d {destination} -p {protocol} -j ACCEPT"
            else:
                iptables_rule = f"iptables -A FORWARD -s {source} -d {destination} -p {protocol} --dport {port} -j ACCEPT"
            
            rules.append(iptables_rule)
        
        # Apply rules
        for rule in rules:
            subprocess.run(rule.split(), check=True)
    
    def setup_container_isolation(self):
        """Setup container network isolation"""
        for container in self.client.containers.list():
            labels = container.labels
            security_level = labels.get('security.level', 'standard')
            
            if security_level == 'high':
                self.apply_high_security_rules(container)
            elif security_level == 'medium':
                self.apply_medium_security_rules(container)
    
    def apply_medium_security_rules(self, container):
        """Apply medium security rules: log (but do not block) outbound traffic"""
        container_ip = self.get_container_ip(container)
        if container_ip is None:
            return
        subprocess.run([
            'iptables', '-A', 'FORWARD',
            '-s', container_ip,
            '-j', 'LOG',
            '--log-prefix', 'MEDIUM-SEC: '
        ], check=True)
    
    def apply_high_security_rules(self, container):
        """Apply high security network rules"""
        container_ip = self.get_container_ip(container)
        if container_ip is None:
            return
        
        # Allow outbound only to specific services; take the protocol from
        # config rather than assuming TCP (DNS and NTP are UDP)
        allowed_services = ['dns', 'ntp', 'logging']
        for service in allowed_services:
            service_cfg = self.config['services'][service]
            subprocess.run([
                'iptables', '-A', 'FORWARD',
                '-s', container_ip,
                '-d', service_cfg['ip'],
                '-p', service_cfg.get('protocol', 'tcp'),
                '--dport', str(service_cfg['port']),
                '-j', 'ACCEPT'
            ], check=True)
        
        # Block everything else
        subprocess.run([
            'iptables', '-A', 'FORWARD',
            '-s', container_ip,
            '-j', 'DROP'
        ], check=True)
    
    def get_container_ip(self, container) -> str:
        """Get container IP address"""
        networks = container.attrs['NetworkSettings']['Networks']
        for network_name, network_info in networks.items():
            if network_info['IPAddress']:
                return network_info['IPAddress']
        return None
    
    def monitor_network_traffic(self):
        """Monitor and log network traffic"""
        # Setup traffic monitoring
        subprocess.run([
            'iptables', '-A', 'FORWARD',
            '-j', 'LOG',
            '--log-prefix', 'DOCKER-TRAFFIC: ',
            '--log-level', '4'
        ])
    
    def generate_security_report(self) -> Dict:
        """Generate network security compliance report"""
        report = {
            'timestamp': datetime.now().isoformat(),
            'containers': [],
            'networks': [],
            'violations': []
        }
        
        # Analyze containers
        for container in self.client.containers.list():
            container_info = {
                'id': container.id,
                'name': container.name,
                'image': container.image.tags[0] if container.image.tags else 'unknown',
                'networks': list(container.attrs['NetworkSettings']['Networks'].keys()),
                'security_level': container.labels.get('security.level', 'unknown'),
                'compliance_status': self.check_container_compliance(container)
            }
            report['containers'].append(container_info)
        
        # Analyze networks
        for network in self.client.networks.list():
            network_info = {
                'id': network.id,
                'name': network.name,
                'driver': network.attrs['Driver'],
                'internal': network.attrs.get('Internal', False),
                'encrypted': network.attrs.get('Options', {}).get('encrypted', False),
                'containers': len(network.containers)
            }
            report['networks'].append(network_info)
        
        return report
    
    def check_container_compliance(self, container) -> Dict:
        """Check container network compliance"""
        violations = []
        
        # Check if container has required security labels
        required_labels = ['security.level', 'security.owner', 'security.classification']
        for label in required_labels:
            if label not in container.labels:
                violations.append(f"Missing required label: {label}")
        
        # Check network configuration
        networks = container.attrs['NetworkSettings']['Networks']
        for network_name in networks:
            if network_name == 'bridge' and container.labels.get('security.level') == 'high':
                violations.append("High security container on default bridge network")
        
        return {
            'compliant': len(violations) == 0,
            'violations': violations
        }

# Configuration file example
config_example = {
    'network_policies': [
        {
            'name': 'web-tier-policy',
            'default_action': 'deny',
            'allow_rules': [
                {
                    'source': '172.30.1.0/24',  # DMZ
                    'destination': '172.30.2.0/24',  # Web tier
                    'port': 80,
                    'protocol': 'tcp'
                },
                {
                    'source': '172.30.2.0/24',  # Web tier
                    'destination': '172.30.3.0/24',  # App tier
                    'port': 8080,
                    'protocol': 'tcp'
                }
            ]
        }
    ],
    'services': {
        'dns': {'ip': '8.8.8.8', 'port': 53, 'protocol': 'udp'},
        # Note: iptables resolves hostnames only once, at rule-insertion time
        'ntp': {'ip': 'pool.ntp.org', 'port': 123, 'protocol': 'udp'},
        'logging': {'ip': '172.30.5.10', 'port': 514, 'protocol': 'tcp'}
    }
}

if __name__ == "__main__":
    # Save example config
    with open('/tmp/security-config.yaml', 'w') as f:
        yaml.dump(config_example, f)
    
    # Initialize security manager
    manager = NetworkSecurityManager('/tmp/security-config.yaml')
    
    # Apply security policies
    manager.apply_network_policies()
    manager.setup_container_isolation()
    manager.monitor_network_traffic()
    
    # Generate report
    report = manager.generate_security_report()
    with open('security-report.json', 'w') as f:
        json.dump(report, f, indent=2)
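The iptables translation in create_network_policy can be exercised without touching the host firewall by pulling the rule-building step out as a pure function. A sketch (the build_rules helper name is mine) run against the example policy above:

```python
from typing import Dict, List

def build_rules(policy: Dict) -> List[str]:
    """Mirror of create_network_policy's rule construction, minus execution."""
    rules = []
    # Default deny all forwarded traffic
    if policy.get('default_action') == 'deny':
        rules.append("iptables -P FORWARD DROP")
    # One ACCEPT rule per allow entry
    for rule in policy.get('allow_rules', []):
        base = (f"iptables -A FORWARD -s {rule['source']} "
                f"-d {rule['destination']} -p {rule.get('protocol', 'tcp')}")
        if rule.get('port', 'any') != 'any':
            base += f" --dport {rule['port']}"
        rules.append(base + " -j ACCEPT")
    return rules

policy = {
    'name': 'web-tier-policy',
    'default_action': 'deny',
    'allow_rules': [
        {'source': '172.30.1.0/24', 'destination': '172.30.2.0/24', 'port': 80},
    ],
}
for r in build_rules(policy):
    print(r)
```

Separating rule generation from `subprocess.run` also makes the policy logic unit-testable in CI, where iptables is usually unavailable.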

Performance Monitoring and Optimization

Comprehensive Monitoring Stack

# docker-compose.monitoring.yml
version: '3.8'

services:
  # Network Performance Monitor
  network-monitor:
    build: ./monitoring/network
    privileged: true
    networks:
      - monitoring
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    environment:
      - MONITOR_INTERFACES=eth0,docker0
      - ALERT_THRESHOLD_BANDWIDTH=80
      - ALERT_THRESHOLD_LATENCY=100ms

  # Storage Performance Monitor
  storage-monitor:
    build: ./monitoring/storage
    privileged: true
    networks:
      - monitoring
    volumes:
      - /:/rootfs:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - MONITOR_VOLUMES=all
      - ALERT_THRESHOLD_IOPS=1000
      - ALERT_THRESHOLD_LATENCY=10ms

  # Prometheus with custom metrics
  prometheus:
    image: prom/prometheus:latest
    networks:
      - monitoring
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - ./prometheus/rules:/etc/prometheus/rules:ro
      - prometheus-data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.retention.time=30d'
      - '--web.enable-lifecycle'

  # Grafana with custom dashboards
  grafana:
    image: grafana/grafana:latest
    networks:
      - monitoring
    volumes:
      - grafana-data:/var/lib/grafana
      - ./grafana/dashboards:/etc/grafana/provisioning/dashboards:ro
      - ./grafana/datasources:/etc/grafana/provisioning/datasources:ro
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin  # demo value; use GF_SECURITY_ADMIN_PASSWORD_FILE with a secret in production
      - GF_INSTALL_PLUGINS=grafana-piechart-panel,grafana-worldmap-panel

  # Alert Manager
  alertmanager:
    image: prom/alertmanager:latest
    networks:
      - monitoring
    volumes:
      - ./alertmanager/alertmanager.yml:/etc/alertmanager/alertmanager.yml:ro
      - alertmanager-data:/alertmanager

networks:
  monitoring:
    driver: bridge

volumes:
  prometheus-data:
  grafana-data:
  alertmanager-data:
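Settings like ALERT_THRESHOLD_LATENCY=100ms imply the locally built monitor images parse duration strings from the environment. A minimal sketch of such a parser (the helper name and supported units are assumptions, since the monitor images come from ./monitoring):

```python
import re

# Conversion factors into milliseconds
_UNITS_MS = {"us": 0.001, "ms": 1.0, "s": 1000.0}

def parse_latency_threshold(value: str) -> float:
    """Parse a threshold like '100ms' or '1.5s' into milliseconds."""
    m = re.fullmatch(r"(\d+(?:\.\d+)?)(us|ms|s)", value.strip())
    if m is None:
        raise ValueError(f"unrecognized latency threshold: {value!r}")
    return float(m.group(1)) * _UNITS_MS[m.group(2)]

assert parse_latency_threshold("100ms") == 100.0   # network-monitor default
assert parse_latency_threshold("10ms") == 10.0     # storage-monitor default
assert parse_latency_threshold("1.5s") == 1500.0
```

Normalizing thresholds to a single unit at startup keeps the comparison logic in the monitor loop free of unit handling.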

Summary

This comprehensive Docker Networking and Storage guide has covered:

Foundation to Enterprise

  • Basic Concepts: Network types, storage drivers, and fundamental operations
  • Core Techniques: Advanced networking patterns, storage optimization, and security
  • Practical Applications: Microservices architectures, high-availability setups, and distributed storage
  • Advanced Patterns: Custom plugins, SDN integration, and enterprise solutions

Production Excellence

  • Enterprise Architecture: Multi-tier networks with proper isolation and security
  • High-Performance Storage: Distributed storage with Ceph and advanced optimization
  • Security Framework: Zero-trust networking with comprehensive monitoring
  • Operational Excellence: Performance monitoring, compliance reporting, and automation

Key Achievements

You now have the expertise to:

  1. Design Enterprise Networks: Scalable, secure, and high-performance network architectures
  2. Implement Advanced Storage: Distributed, encrypted, and high-availability storage solutions
  3. Ensure Security: Zero-trust networking with comprehensive policy enforcement
  4. Optimize Performance: Advanced tuning and monitoring for production workloads
  5. Maintain Compliance: Automated security scanning and compliance reporting

Congratulations! You’ve mastered Docker networking and storage from basic concepts to enterprise-grade implementations. You can now design, implement, and operate production-ready containerized infrastructure that meets the highest standards of performance, security, and reliability.