Storage, Configuration, and Advanced Patterns
Now we’re getting into the stuff that separates toy projects from production applications. Storage, configuration management, and deployment patterns—these are the things that’ll make or break your Kubernetes experience when you’re running real workloads.
I’ve seen too many people get excited about Kubernetes, deploy a few stateless apps, then hit a wall when they need to handle databases, file uploads, or complex configuration. Let’s fix that.
Persistent Storage - Making Data Stick Around
The thing about containers is they’re ephemeral—when they die, everything inside them disappears. That’s great for stateless apps, but terrible for databases or anything that needs to store files. Kubernetes solves this with persistent volumes.
The Storage Hierarchy
Think of Kubernetes storage like this:
- Persistent Volume (PV) - The actual storage (like a hard drive)
- Persistent Volume Claim (PVC) - A request for storage (like asking for a 10GB drive)
- Volume Mount - Connecting the storage to your container (like plugging in a USB drive)
Creating Persistent Storage
Let’s say you’re running a blog and need to store uploaded images. Here’s how you’d set that up:
# storage.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blog-images
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: fast-ssd  # Optional: specify storage type
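Once you apply this, it's worth checking that the claim actually bound to a volume. A quick sanity check, assuming the claim above was applied to your current namespace:

kubectl apply -f storage.yaml
kubectl get pvc blog-images        # STATUS should show "Bound"
kubectl describe pvc blog-images   # the Events section explains any provisioning failures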
Then use it in your deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog-app
spec:
  replicas: 1  # ReadWriteOnce volumes can only attach to one node, so keep this at 1
  selector:
    matchLabels:
      app: blog-app
  template:
    metadata:
      labels:
        app: blog-app
    spec:
      containers:
        - name: blog
          image: wordpress:latest
          volumeMounts:
            - name: image-storage
              mountPath: /var/www/html/wp-content/uploads
          env:
            - name: WORDPRESS_DB_HOST
              value: mysql-service
      volumes:
        - name: image-storage
          persistentVolumeClaim:
            claimName: blog-images
Storage Classes - Different Types of Storage
Not all storage is created equal. You might want fast SSD storage for databases and cheaper storage for backups:
# Check what storage classes are available
kubectl get storageclass
# Create a custom storage class (cloud provider specific)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com  # AWS EBS CSI driver example; gp3/iops parameters are CSI-specific
parameters:
  type: gp3
  iops: "3000"
allowVolumeExpansion: true
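If you want one class to be picked automatically whenever a PVC doesn't set storageClassName, you can mark it as the cluster default. A minimal sketch, reusing the fast-ssd class from above (the annotation is the standard Kubernetes one):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # makes this the default class
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
allowVolumeExpansion: true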
Shared Storage for Multiple Pods
Sometimes you need multiple pods to access the same storage. Use the ReadWriteMany access mode:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-files
spec:
  accessModes:
    - ReadWriteMany  # Multiple pods can read/write
  resources:
    requests:
      storage: 100Gi
Note: Not all storage providers support ReadWriteMany. Block storage (AWS EBS, GCE Persistent Disk, Azure Disk) generally doesn't; file-based offerings (NFS, AWS EFS, Azure Files, GCP Filestore) generally do. Check your provider's documentation.
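With a ReadWriteMany claim, several replicas can mount the same path at once. A rough sketch of what that looks like (the deployment name, image, and mount path here are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: file-processor
spec:
  replicas: 3  # all three pods share the same volume
  selector:
    matchLabels:
      app: file-processor
  template:
    metadata:
      labels:
        app: file-processor
    spec:
      containers:
        - name: worker
          image: myapp:latest  # placeholder image
          volumeMounts:
            - name: shared
              mountPath: /data/shared
      volumes:
        - name: shared
          persistentVolumeClaim:
            claimName: shared-files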
Configuration Management Done Right
Hard-coding configuration is the enemy of flexibility. Let’s look at better patterns for managing configuration in Kubernetes.
Environment-Specific Configuration
Create different ConfigMaps for different environments:
# config-dev.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: development
data:
  database_host: "dev-postgres-service"
  log_level: "debug"
  cache_ttl: "60"
  feature_flags: |
    {
      "new_ui": true,
      "beta_features": true,
      "analytics": false
    }
---
# config-prod.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: production
data:
  database_host: "prod-postgres-service"
  log_level: "warn"
  cache_ttl: "3600"
  feature_flags: |
    {
      "new_ui": true,
      "beta_features": false,
      "analytics": true
    }
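To consume these values, you can pull individual keys with configMapKeyRef (shown in the full example later), or load the whole map into the container's environment with envFrom. A minimal sketch of the latter, assuming the app-config map above:

spec:
  containers:
    - name: app
      image: myapp:latest
      envFrom:
        - configMapRef:
            name: app-config  # every key in the ConfigMap becomes an environment variable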
Configuration Files as Volumes
Sometimes you need entire configuration files, not just environment variables:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    events {
      worker_connections 1024;
    }
    http {
      upstream backend {
        server api-service:80;
      }
      server {
        listen 80;
        location / {
          root /usr/share/nginx/html;
        }
        location /api/ {
          proxy_pass http://backend/;
        }
      }
    }
Mount it as a single file with subPath (note: subPath mounts don't pick up ConfigMap changes while the pod is running; the pod has to be restarted to see an updated file):
spec:
  containers:
    - name: nginx
      image: nginx:alpine
      volumeMounts:
        - name: config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
  volumes:
    - name: config
      configMap:
        name: nginx-config
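Because of that subPath limitation, a common pattern is to restart the workload after editing the config. Assuming the spec above belongs to a Deployment named nginx and the ConfigMap lives in nginx-config.yaml:

kubectl apply -f nginx-config.yaml        # update the ConfigMap
kubectl rollout restart deployment/nginx  # recreate the pods so they pick up the new file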
Secrets Management - Keeping Things Secure
Secrets are like ConfigMaps but for sensitive data. They’re base64 encoded (not encrypted!) and have some additional security features.
Creating Secrets Properly
Don’t put secrets in your YAML files. Create them from the command line:
# From literal values
kubectl create secret generic app-secrets \
--from-literal=database-password=supersecret \
--from-literal=api-key=abc123xyz
# From files
kubectl create secret generic tls-certs \
--from-file=tls.crt=./server.crt \
--from-file=tls.key=./server.key
# From environment file
kubectl create secret generic env-secrets \
--from-env-file=.env
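To double-check what actually landed in the cluster (remember: base64 is encoding, not encryption, so anyone who can read the Secret can read the value):

kubectl get secret app-secrets -o yaml                                      # values are base64 encoded
kubectl get secret app-secrets -o jsonpath='{.data.api-key}' | base64 -d    # decode a single key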
Using Secrets Securely
Mount secrets as files rather than environment variables where you can: environment variables are easier to leak by accident through crash reports, child processes, or logging frameworks that dump the whole environment.
spec:
  containers:
    - name: app
      image: myapp:latest
      volumeMounts:
        - name: secrets
          mountPath: /etc/secrets
          readOnly: true
      env:
        - name: SECRET_PATH
          value: "/etc/secrets"
  volumes:
    - name: secrets
      secret:
        secretName: app-secrets
        defaultMode: 0400  # Read-only for owner
Your application reads the secrets from files:
// In your Node.js app
const fs = require('fs');

// trim() strips the trailing newline that secret files created --from-file often carry
const dbPassword = fs.readFileSync('/etc/secrets/database-password', 'utf8').trim();
const apiKey = fs.readFileSync('/etc/secrets/api-key', 'utf8').trim();
Advanced Deployment Patterns
Blue-Green Deployments
Deploy a new version alongside the old one, then switch traffic:
# Deploy green version
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: green
  template:
    metadata:
      labels:
        app: myapp
        version: green
    spec:
      containers:
        - name: app
          image: myapp:v2.0.0
Switch the service to point to the new version:
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: myapp
    version: green  # Change from blue to green
  ports:
    - port: 80
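You can flip the selector without editing YAML at all. One way to do it, assuming the service and labels above:

kubectl patch service app-service \
  -p '{"spec":{"selector":{"app":"myapp","version":"green"}}}'
# Rolling back is the same command with "version":"blue"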
Canary Deployments
Gradually roll out to a percentage of users:
# ~90% of traffic goes to the stable version (9 of 10 pods)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-stable
spec:
  replicas: 9
  selector:
    matchLabels:
      app: myapp
      version: stable
  template:
    metadata:
      labels:
        app: myapp
        version: stable
    spec:
      containers:
        - name: app
          image: myapp:v1.0.0  # example tag for the current stable release
---
# ~10% of traffic goes to the canary version (1 of 10 pods)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: canary
  template:
    metadata:
      labels:
        app: myapp
        version: canary
    spec:
      containers:
        - name: app
          image: myapp:v2.0.0  # example tag for the release being tested
The service selects both versions:
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: myapp  # Selects both stable and canary pods
  ports:
    - port: 80
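The split is approximate: the Service load-balances across every matching pod, so the ratio simply follows the pod counts. Dialing the canary up or down is just a scale operation:

kubectl scale deployment app-canary --replicas=3   # roughly 25% of traffic (3 of 12 pods)
kubectl scale deployment app-canary --replicas=0   # pull the canary entirely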
Resource Management and Limits
Setting Appropriate Limits
Don’t just guess at resource limits. Monitor your applications and set realistic values:
spec:
  containers:
    - name: app
      image: myapp:latest
      resources:
        requests:
          memory: "256Mi"  # Guaranteed memory
          cpu: "200m"      # Guaranteed CPU (0.2 cores)
        limits:
          memory: "512Mi"  # Maximum memory
          cpu: "500m"      # Maximum CPU (0.5 cores)
Quality of Service Classes
Kubernetes assigns QoS classes based on your resource configuration:
- Guaranteed - requests = limits (highest priority)
- Burstable - requests < limits (medium priority)
- BestEffort - no requests or limits (lowest priority)
For critical applications, use Guaranteed QoS:
resources:
  requests:
    memory: "256Mi"
    cpu: "200m"
  limits:
    memory: "256Mi"  # Same as requests
    cpu: "200m"      # Same as requests
Horizontal Pod Autoscaling
Automatically scale based on CPU or memory usage. This relies on the metrics-server (or another metrics API provider) being installed in the cluster:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
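Once it's applied, you can watch the autoscaler's view of the world. Assuming the HPA above:

kubectl get hpa app-hpa --watch   # shows current vs target utilization and the replica count
kubectl describe hpa app-hpa      # events explain why it scaled up, down, or refused to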
Init Containers - Setup Before Main Containers
Sometimes you need to do setup work before your main application starts:
spec:
  initContainers:
    - name: wait-for-db
      image: busybox
      command: ['sh', '-c', 'until nc -z postgres-service 5432; do sleep 1; done']
    - name: migration
      image: myapp:latest
      command: ['python', 'migrate.py']
      env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: database-url
  containers:
    - name: app
      image: myapp:latest
Init containers run one at a time, in the order listed, and each must complete successfully before the next one (and eventually the main containers) starts. That's why the wait-for-db container comes before the migration. They're perfect for database migrations, downloading files, or waiting for dependencies.
Putting Advanced Patterns Together
Here’s how you might combine these patterns in a real application:
# Complete application with advanced patterns
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: blog-app
  template:
    metadata:
      labels:
        app: blog-app
    spec:
      initContainers:
        - name: wait-for-db
          image: busybox
          command: ['sh', '-c', 'until nc -z postgres-service 5432; do sleep 1; done']
      containers:
        - name: app
          image: wordpress:latest
          resources:
            requests:
              memory: "256Mi"
              cpu: "200m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /wp-admin/install.php
              port: 80
            initialDelaySeconds: 60
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /wp-admin/install.php
              port: 80
            initialDelaySeconds: 30
            periodSeconds: 10
          volumeMounts:
            - name: uploads
              mountPath: /var/www/html/wp-content/uploads
            - name: config
              mountPath: /etc/wordpress-config
              readOnly: true
          env:
            - name: WORDPRESS_DB_HOST
              valueFrom:
                configMapKeyRef:
                  name: app-config
                  key: database_host  # matches the key defined in the ConfigMap earlier
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: database-password
      volumes:
        - name: uploads
          persistentVolumeClaim:
            claimName: blog-uploads  # needs a ReadWriteMany claim since we run 3 replicas
        - name: config
          configMap:
            name: wordpress-config
This deployment combines persistent storage, externalized configuration and secrets, health checks, resource limits, and an init container. It's a solid foundation for a production workload.
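To roll it out and confirm everything comes up, assuming the manifest is saved as blog-app.yaml:

kubectl apply -f blog-app.yaml
kubectl rollout status deployment/blog-app   # waits until all replicas are ready
kubectl get pods -l app=blog-app             # each pod should show READY 1/1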
In the next part, we’ll look at best practices for production deployments, monitoring, and troubleshooting—the operational side of running Kubernetes in the real world.