Container Fundamentals and Docker Basics
Containers transformed how I think about application deployment. After years of wrestling with “it works on my machine” problems, containers finally gave us a way to package applications with their entire runtime environment.
Why Containers Matter
Traditional deployment meant installing applications directly on servers, managing dependencies, and hoping everything worked together. I’ve seen production outages caused by a missing library version or conflicting Python packages. Containers solve this by packaging everything your application needs into a single, portable unit.
A useful first approximation is to think of containers as lightweight virtual machines, only more efficient. They share the host operating system kernel while maintaining isolation between applications.
# Check if Docker is running
docker --version
docker info
Understanding Container Images
Container images are the blueprints for containers. They contain your application code, runtime, system tools, libraries, and settings. Images are built in layers, which makes them efficient to store and transfer.
# Simple example - a basic web server
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/
EXPOSE 80
This Dockerfile creates an image based on the lightweight Alpine Linux version of Nginx. The COPY instruction adds your HTML file to the web server's document root.
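To follow along, the build needs an index.html sitting next to the Dockerfile; any static page will do, for example:
# Create a minimal page for the image to serve (content is just an example)
echo '<h1>Hello from a container</h1>' > index.html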
Building and running this container:
# Build the image
docker build -t my-web-server .
# Run a container from the image
docker run -d -p 8080:80 --name web my-web-server
The -d flag runs the container in the background, and -p 8080:80 maps port 8080 on your host to port 80 in the container.
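Once the container is up, you can confirm the mapping from the host. A browser pointed at localhost:8080 works, but curl (assuming it's installed) is quicker:
# Fetch the page through the published port
curl http://localhost:8080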
Container Lifecycle Management
Containers have a simple lifecycle: created, running, stopped, or removed. Understanding this lifecycle helps you manage containers effectively.
# List running containers
docker ps
# Stop a running container
docker stop web
# Start a stopped container
docker start web
# Remove a container
docker rm web
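Note that docker ps only shows running containers. Add -a to include stopped ones, which is handy when a container exits unexpectedly:
# List all containers, including stopped ones
docker ps -a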
I always use meaningful names for containers in development. It makes debugging much easier when you can identify containers by purpose rather than random IDs.
Working with Container Logs
Container logs are crucial for debugging. Docker captures everything your application writes to stdout and stderr.
# View container logs
docker logs web
# Follow logs in real-time
docker logs -f web
# Show only the last 50 lines
docker logs --tail 50 web
In production, I’ve learned to always log to stdout/stderr rather than files. This makes log aggregation much simpler and follows the twelve-factor app methodology.
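When you need to correlate container logs with other systems, timestamps and a time window help:
# Show timestamped log lines from the last ten minutes
docker logs --timestamps --since 10m web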
Container Networking Basics
Containers can communicate with each other and the outside world through Docker’s networking system. By default, Docker creates a bridge network that allows containers to communicate.
# List Docker networks
docker network ls
# Create a custom network
docker network create my-network
# Run containers on the custom network
docker run -d --network my-network --name app1 nginx:alpine
docker run -d --network my-network --name app2 nginx:alpine
Custom networks provide better isolation and allow containers to communicate using container names as hostnames.
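A quick way to confirm this, assuming the two containers above are still running, is to fetch app2's default page from inside app1 (the Alpine-based nginx image ships BusyBox wget):
# app1 resolves app2 by container name on the custom network
docker exec app1 wget -qO- http://app2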
Volume Management for Data Persistence
Containers are ephemeral by design: when you remove a container, its writable layer and everything stored in it disappears. Volumes solve this by providing persistent storage that survives container restarts and removals.
# Create a named volume
docker volume create my-data
# Run a container with a volume mount
docker run -d -v my-data:/data --name data-container alpine sleep 3600
# List all volumes
docker volume ls
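To see the persistence in action, write a file into the volume, remove the container, and read the file back from a fresh one (this assumes data-container from above is still running):
# Write a file into the volume, then force-remove the container
docker exec data-container sh -c 'echo hello > /data/hello.txt'
docker rm -f data-container
# Mount the same volume in a new container; the file is still there
docker run --rm -v my-data:/data alpine cat /data/hello.txt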
I prefer named volumes over bind mounts for production workloads because Docker manages them automatically and they work consistently across different host operating systems.
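For comparison, a bind mount maps a host directory straight into the container, which is convenient for local development; the ./site path and dev-web name here are just examples:
# Serve files from a directory on the host instead of a named volume
docker run -d -p 8081:80 -v "$(pwd)/site":/usr/share/nginx/html --name dev-web nginx:alpine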
Resource Management and Limits
Containers share host resources, so it’s important to set appropriate limits to prevent one container from consuming all available CPU or memory.
# Run a container with resource limits
docker run -d \
  --memory="512m" \
  --cpus="1.0" \
  --name limited-container \
  nginx:alpine
# Check resource usage
docker stats limited-container
Setting resource limits prevents runaway containers from affecting other applications on the same host.
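You can verify that the limits were applied by inspecting the container's HostConfig; Memory is reported in bytes and NanoCpus in billionths of a CPU:
# Print the configured memory and CPU limits
docker inspect -f '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' limited-container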
Container Image Management
Managing images efficiently becomes important as you work with more containers:
# List all images
docker images
# Remove an image
docker rmi nginx:alpine
# Remove unused images
docker image prune
I run docker system prune regularly in development environments to keep disk usage under control.
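Two related commands are worth knowing: docker system df shows what is using disk space, and docker system prune cleans up stopped containers, unused networks, dangling images, and build cache. Adding --volumes also removes unused volumes, so use that flag with care:
# Show disk usage by images, containers, volumes, and build cache
docker system df
# Clean up stopped containers, unused networks, dangling images, and build cache
docker system prune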
What Makes Containers Different
Containers aren’t just lightweight VMs. They share the host kernel, which makes them much more efficient but also means they have different security and compatibility considerations.
Virtual machines virtualize hardware, while containers virtualize the operating system. This means containers start faster, use less memory, and allow higher density on the same hardware.
The trade-off is that all containers on a host share the same kernel. You can’t run Windows containers on a Linux host, and kernel-level security vulnerabilities affect all containers.
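You can see the shared kernel directly on a Linux host: the kernel version reported inside a container matches the host's.
# Both commands print the same kernel release on a Linux host
docker run --rm alpine uname -r
uname -r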
Understanding these fundamentals sets the foundation for everything else we’ll cover. In the next part, we’ll dive into writing effective Dockerfiles and building optimized images that are both secure and efficient.