Practical Applications and Examples

The real test of Docker image management comes when you’re building images for actual applications. I’ve containerized everything from simple web services to complex machine learning pipelines, and each application type has taught me something new about image optimization and management.

The most valuable lesson I’ve learned: there’s no one-size-fits-all approach to Docker images. A Node.js API needs different optimization than a Python data processing job, and a static website has completely different requirements than a database.

Web Application Images

Web applications are where I first learned Docker, and they remain the most common use case. Here’s how I build images for different web frameworks:

Node.js Application:

# Multi-stage build for Node.js app
FROM node:18-alpine AS base
RUN apk add --no-cache libc6-compat
WORKDIR /app

# Dependencies stage
FROM base AS deps
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
  if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
  elif [ -f package-lock.json ]; then npm ci; \
  elif [ -f pnpm-lock.yaml ]; then corepack enable pnpm && pnpm i --frozen-lockfile; \
  else echo "Lockfile not found." && exit 1; \
  fi

# Build stage
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

# Production stage
FROM base AS runner
WORKDIR /app

ENV NODE_ENV=production

RUN addgroup --system --gid 1001 nodejs \
  && adduser --system --uid 1001 nodeapp

COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json

USER nodeapp
EXPOSE 3000
CMD ["node", "dist/server.js"]

This pattern works for most Node.js applications and typically produces images under 100MB.
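
To sanity-check that size claim on your own project, build the image and inspect the result (the tag name here is just a placeholder):

docker build -t myapp:node .

# Only the runner stage ends up in the final image
docker images myapp:node --format "{{.Repository}}:{{.Tag}} {{.Size}}"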

Python Flask Application:

FROM python:3.11-slim AS base

# Install system dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Dependencies stage
FROM base AS deps
COPY requirements.txt .
# gunicorn must be listed in requirements.txt for the runtime CMD below to work
RUN pip install --no-cache-dir -r requirements.txt

# Production stage
FROM python:3.11-slim AS runtime
WORKDIR /app

# Copy Python dependencies
COPY --from=deps /usr/local/lib/python3.11/site-packages /usr/local/lib/python3.11/site-packages
COPY --from=deps /usr/local/bin /usr/local/bin

# Copy application
COPY . .

# Create non-root user
RUN useradd --create-home --shell /bin/bash app
USER app

EXPOSE 5000
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]

The key insight for Python applications: separate dependency installation from the runtime image to avoid including build tools.
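
A quick way to confirm the build tools actually stayed out of the final image (the tag is a placeholder):

docker build -t myapp:flask .

# gcc exists only in the discarded deps stage; this should report it missing
docker run --rm myapp:flask which gcc || echo "gcc not present, as intended"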

Database and Stateful Service Images

Databases require special consideration for data persistence and initialization. Here’s how I handle PostgreSQL with custom configuration:

FROM postgres:15-alpine

# Install additional extensions (Alpine's postgres packages are versioned;
# match the major version of the base image)
RUN apk add --no-cache \
    postgresql15-contrib \
    postgresql15-plpython3

# Copy initialization scripts
COPY ./init-scripts/ /docker-entrypoint-initdb.d/

# Copy custom configuration
COPY postgresql.conf /etc/postgresql/postgresql.conf
COPY pg_hba.conf /etc/postgresql/pg_hba.conf

# The official image has no config-file env var; point postgres at the
# custom files via command-line flags instead
CMD ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf", "-c", "hba_file=/etc/postgresql/pg_hba.conf"]

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
  CMD pg_isready -U ${POSTGRES_USER:-postgres} -d ${POSTGRES_DB:-postgres}

EXPOSE 5432
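
Data persistence comes from how the image is run, not from the image itself. A minimal sketch, assuming the image is tagged mypostgres:15 and using a named volume:

# Build, then run with a named volume so data survives container recreation
docker build -t mypostgres:15 .
docker run -d --name pg \
  -e POSTGRES_PASSWORD=changeme \
  -v pgdata:/var/lib/postgresql/data \
  mypostgres:15

# Confirm the custom configuration was picked up
docker exec pg psql -U postgres -c "SHOW config_file;"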

For Redis with custom modules:

FROM redis:7-alpine AS base

# Build stage for Redis modules
FROM base AS builder
# RedisJSON is written in Rust, so the Rust toolchain is needed too
RUN apk add --no-cache \
    build-base \
    git \
    rust \
    cargo

WORKDIR /tmp
RUN git clone https://github.com/RedisJSON/RedisJSON.git
WORKDIR /tmp/RedisJSON
RUN cargo build --release

# Runtime stage
FROM base AS runtime
# The built artifact's exact name and path can vary by RedisJSON version
COPY --from=builder /tmp/RedisJSON/target/release/librejson.so /usr/local/lib/
# redis.conf must contain: loadmodule /usr/local/lib/librejson.so
COPY redis.conf /usr/local/etc/redis/redis.conf

CMD ["redis-server", "/usr/local/etc/redis/redis.conf"]

Microservices Architecture Images

Managing images for microservices requires consistency across services while allowing for service-specific optimizations. I use a base image approach:

Base service image:

# base-service.dockerfile
FROM node:18-alpine AS base

# Common system dependencies
RUN apk add --no-cache \
    dumb-init \
    curl \
    && addgroup -g 1001 -S nodejs \
    && adduser -S service -u 1001 -G nodejs

WORKDIR /app
# Let the non-root user write to /app so derived images can run npm ci
RUN chown service:nodejs /app

# Common health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD curl -f http://localhost:${PORT:-3000}/health || exit 1

USER service

Service-specific image:

FROM base-service:latest

# Service-specific dependencies
COPY --chown=service:nodejs package*.json ./
RUN npm ci --omit=dev

# Copy service code
COPY --chown=service:nodejs . .

ENV PORT=3000
EXPOSE 3000

CMD ["dumb-init", "node", "index.js"]

This approach keeps every service consistent while still leaving room for service-specific optimizations.
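
The build order matters: the base image has to exist locally (or in your registry) before any service image can reference it. A sketch with hypothetical paths and tags:

# Build and version the shared base first
docker build -f base-service.dockerfile -t base-service:1.0 .
docker tag base-service:1.0 base-service:latest

# Then build each service on top of it
docker build -t user-service:1.0 ./services/user-service

In practice I pin services to a versioned base tag (FROM base-service:1.0) rather than :latest, so a base image update can't silently change every service.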

CI/CD Pipeline Integration

I integrate image building into CI/CD pipelines with these patterns:

GitHub Actions workflow:

name: Build and Push Image

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3
    
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v2
    
    - name: Login to Registry
      uses: docker/login-action@v2
      with:
        registry: ghcr.io
        username: ${{ github.actor }}
        password: ${{ secrets.GITHUB_TOKEN }}
    
    - name: Extract metadata
      id: meta
      uses: docker/metadata-action@v4
      with:
        images: ghcr.io/${{ github.repository }}
        tags: |
          type=ref,event=branch
          type=ref,event=pr
          type=sha,prefix={{branch}}-
    
    - name: Build and push
      uses: docker/build-push-action@v4
      with:
        context: .
        # Push only on branch builds; PR builds just verify the image builds
        push: ${{ github.event_name != 'pull_request' }}
        tags: ${{ steps.meta.outputs.tags }}
        labels: ${{ steps.meta.outputs.labels }}
        cache-from: type=gha
        cache-to: type=gha,mode=max

GitLab CI pipeline:

stages:
  - build
  - test
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: "/certs"

build:
  stage: build
  image: docker:20.10.16
  services:
    - docker:20.10.16-dind
  before_script:
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER --password-stdin $CI_REGISTRY
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  only:
    - main
    - develop

Development Environment Images

Development images need different capabilities than production images. I create development-specific images that include debugging tools:

# Development image
FROM node:18-alpine AS development

# Install development tools
RUN apk add --no-cache \
    git \
    vim \
    curl \
    htop \
    bash

WORKDIR /app

# Install all dependencies (including dev)
COPY package*.json ./
RUN npm install

# Copy source (will be overridden by volume in development)
COPY . .

# Development server with hot reload
CMD ["npm", "run", "dev"]

# Production image
FROM node:18-alpine AS production

WORKDIR /app

# Install everything first (the build step usually needs devDependencies),
# then prune down to production dependencies
COPY package*.json ./
RUN npm ci

COPY . .
RUN npm run build && npm prune --omit=dev && npm cache clean --force

USER node
CMD ["npm", "start"]

Docker Compose for development:

version: '3.8'
services:
  app:
    build:
      context: .
      target: development
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
    depends_on:
      - db
      - redis

  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: myapp_dev
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

volumes:
  postgres_data:
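
Bringing the whole environment up is then a single command; the target: development key ensures the dev stage is built:

docker compose up --build -d
docker compose logs -f app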

Image Testing and Validation

I test images before deploying them to catch issues early:

#!/bin/bash
# test-image.sh

IMAGE_NAME=${1:-myapp:latest}

echo "Testing image: $IMAGE_NAME"

# Test 1: Image builds successfully
if ! docker build -t "$IMAGE_NAME" .; then
    echo "ERROR: Image build failed"
    exit 1
fi

# Test 2: Container starts successfully
CONTAINER_ID=$(docker run -d "$IMAGE_NAME")
# Ensure the container is cleaned up even if a test below fails
trap 'docker rm -f "$CONTAINER_ID" >/dev/null 2>&1' EXIT
sleep 5

# docker ps truncates IDs, so compare against the untruncated list
if ! docker ps -q --no-trunc | grep -q "$CONTAINER_ID"; then
    echo "ERROR: Container failed to start"
    docker logs "$CONTAINER_ID"
    exit 1
fi

# Test 3: Health check passes (assumes curl exists in the image and the
# app serves /health on port 3000)
if ! docker exec "$CONTAINER_ID" curl -f http://localhost:3000/health; then
    echo "ERROR: Health check failed"
    docker logs "$CONTAINER_ID"
    exit 1
fi

# Test 4: Check image size
SIZE=$(docker images "$IMAGE_NAME" --format "{{.Size}}")
echo "Image size: $SIZE"

# Cleanup
docker stop "$CONTAINER_ID"
docker rm "$CONTAINER_ID"

echo "All tests passed!"

Multi-Architecture Images

Building images that work on different architectures (AMD64, ARM64) is increasingly important:

# Use buildx for multi-arch builds
FROM --platform=$BUILDPLATFORM node:18-alpine AS base
ARG TARGETPLATFORM
ARG BUILDPLATFORM

WORKDIR /app

# Dependencies stage (runs on the build platform; if your app has native
# npm modules, build this stage on the target platform instead)
FROM base AS deps
COPY package*.json ./
RUN npm ci --omit=dev

# Build stage
FROM base AS builder
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage
FROM node:18-alpine AS runtime
WORKDIR /app

COPY --from=deps /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY package.json ./

CMD ["node", "dist/index.js"]

Build command for multi-arch:

# Create and use buildx builder
docker buildx create --name multiarch --use

# Build for multiple architectures
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myregistry.com/myapp:latest \
  --push .
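
After the push, you can confirm that both platform variants actually landed in the registry:

# Lists the per-architecture manifests behind the tag
docker buildx imagetools inspect myregistry.com/myapp:latest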

Image Monitoring and Maintenance

I monitor image usage and perform regular maintenance:

#!/usr/bin/env python3
# image-maintenance.py

import docker
import json
from datetime import datetime, timedelta, timezone

client = docker.from_env()

def cleanup_old_images():
    """Remove untagged (dangling) images older than 30 days"""
    # Use an aware datetime so the comparison below doesn't raise TypeError
    cutoff = datetime.now(timezone.utc) - timedelta(days=30)
    
    for image in client.images.list():
        # Docker reports nanosecond precision; trim to whole seconds before parsing
        created = datetime.fromisoformat(image.attrs['Created'][:19]).replace(tzinfo=timezone.utc)
        
        if created < cutoff and not image.tags:
            print(f"Removing old image: {image.id[:12]}")
            client.images.remove(image.id, force=True)

def check_image_vulnerabilities():
    """Check for known vulnerabilities"""
    for image in client.images.list():
        if image.tags:
            tag = image.tags[0]
            print(f"Checking {tag} for vulnerabilities...")
            # Integration with vulnerability scanner would go here

def generate_image_report():
    """Generate usage report"""
    # List images once instead of querying the daemon three times
    images = client.images.list()
    report = {
        'total_images': len(images),
        'total_size': sum(image.attrs['Size'] for image in images),
        'images': []
    }
    
    for image in images:
        if image.tags:
            report['images'].append({
                'tag': image.tags[0],
                'size': image.attrs['Size'],
                'created': image.attrs['Created']
            })
    
    with open('image-report.json', 'w') as f:
        json.dump(report, f, indent=2)

if __name__ == "__main__":
    cleanup_old_images()
    check_image_vulnerabilities()
    generate_image_report()
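
I schedule this to run unattended; a crontab entry like the following (paths are placeholders) runs it nightly:

# Nightly at 03:00
0 3 * * * /usr/bin/python3 /opt/scripts/image-maintenance.py >> /var/log/image-maintenance.log 2>&1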

These practical patterns have evolved from building and managing hundreds of different applications. They provide the foundation for reliable, efficient image management in real-world scenarios.

Next, we’ll explore advanced techniques including custom base images, image signing, and enterprise-grade image management strategies.