Introduction and Setup
Configuration management in Kubernetes nearly broke me when I first started using it. I spent three days debugging why my application couldn't connect to the database, only to discover I'd misspelled "postgres" as "postgress" in a ConfigMap. That typo taught me more about Kubernetes configuration than any documentation ever could.
The frustrating truth about Kubernetes configuration is that it looks simple until you need it to work reliably across environments. ConfigMaps and Secrets seem straightforward, but managing configuration at scale requires patterns that aren’t obvious from the basic examples.
Why Configuration Management Matters
I’ve seen production outages caused by configuration mistakes more often than code bugs. A missing environment variable, an incorrect database URL, or a malformed JSON config can bring down entire services. The challenge isn’t just storing configuration - it’s managing it safely across development, staging, and production environments.
The key insight I’ve learned: treat configuration as code. Version it, test it, and deploy it with the same rigor you apply to application code.
ConfigMaps: The Foundation
ConfigMaps store non-sensitive configuration data as key-value pairs. I use them for application settings, feature flags, and any configuration that doesn’t contain secrets.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: production
data:
  database.host: "postgres.example.com"
  database.port: "5432"
  log.level: "info"
  feature.new_ui: "true"
```
The beauty of ConfigMaps is their flexibility. You can store simple key-value pairs, entire configuration files, or structured data like JSON or YAML.
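For example, an entire file can live under a single key using a YAML block scalar. This is a sketch with an illustrative ConfigMap name, key, and file contents; a volume mount would later expose the key as a file with that name:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-file-config   # illustrative name
data:
  # the whole file is the value of the app.yaml key
  app.yaml: |
    server:
      port: 8080
    logging:
      level: info
```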
Creating ConfigMaps from files saves time during development:
```bash
# Create from a properties file
kubectl create configmap app-config --from-file=application.properties

# Create from multiple files
kubectl create configmap nginx-config --from-file=nginx.conf --from-file=mime.types
```
I keep configuration files in my application repository and create ConfigMaps as part of the deployment process. This ensures configuration changes go through the same review process as code changes.
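One way to script that step is to render the ConfigMap with a client-side dry run and pipe it into `kubectl apply`, so repeat deployments update the object instead of failing because it already exists (reusing the properties file from above):

```bash
# Generate the manifest locally, then create-or-update it in the cluster
kubectl create configmap app-config \
  --from-file=application.properties \
  --dry-run=client -o yaml | kubectl apply -f -
```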
Secrets: Handling Sensitive Data
Secrets manage sensitive information like passwords, API keys, and certificates. They're similar to ConfigMaps, but Kubernetes treats them with more care: access can be restricted with RBAC, and clusters can be configured to encrypt them at rest. One thing that is not a security feature is the base64 encoding you see in Secret manifests; it's an encoding, not encryption, and anyone who can read a Secret can decode its values.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: database-credentials
  namespace: production
type: Opaque
data:
  username: cG9zdGdyZXM=      # "postgres", base64-encoded
  password: c3VwZXJzZWNyZXQ=  # "supersecret", base64-encoded
```
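If you build a manifest like this by hand, encode each value yourself. The `-n` flag matters: without it, `echo` appends a newline that becomes part of the credential:

```bash
echo -n 'postgres' | base64      # cG9zdGdyZXM=
echo -n 'supersecret' | base64   # c3VwZXJzZWNyZXQ=
```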
Creating Secrets from the command line is often easier than managing base64 encoding manually:
```bash
kubectl create secret generic database-credentials \
  --from-literal=username=postgres \
  --from-literal=password=supersecret
```
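To check what a Secret actually contains, read a key back and decode it:

```bash
# Print the stored password in plain text
kubectl get secret database-credentials \
  -o jsonpath='{.data.password}' | base64 --decode
```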
For TLS certificates, Kubernetes provides a specific Secret type:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTi...  # base64-encoded certificate
  tls.key: LS0tLS1CRUdJTi...  # base64-encoded private key
```
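kubectl can build this Secret type directly from PEM files, which avoids encoding the certificate by hand (the file paths here are placeholders):

```bash
kubectl create secret tls tls-secret \
  --cert=server.crt \
  --key=server.key
```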
Using Configuration in Pods
Configuration becomes useful when you inject it into your applications. Kubernetes offers two main injection methods, environment variables and volume mounts; init containers, covered later, help validate what gets injected.
Environment variables work well for simple configuration:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
  - name: app
    image: myapp:latest
    env:
    - name: DATABASE_HOST
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: database.host
    - name: DATABASE_PASSWORD
      valueFrom:
        secretKeyRef:
          name: database-credentials
          key: password
```
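Listing keys one at a time gets verbose. `envFrom` injects every key of a ConfigMap or Secret as an environment variable in one step; be aware that keys that aren't valid variable names (such as the dotted keys above) may be skipped, depending on your Kubernetes version. A minimal sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod-envfrom   # illustrative name
spec:
  containers:
  - name: app
    image: myapp:latest
    envFrom:
    # every key in each source becomes an environment variable
    - configMapRef:
        name: app-config
    - secretRef:
        name: database-credentials
```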
Volume mounts work better for configuration files:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
    volumeMounts:
    - name: config-volume
      mountPath: /etc/nginx/nginx.conf
      subPath: nginx.conf
  volumes:
  - name: config-volume
    configMap:
      name: nginx-config
```
The `subPath` field lets you mount an individual file instead of an entire directory, which prevents overwriting existing files in the container. One caveat: unlike whole-directory mounts, files mounted via `subPath` are not refreshed when the ConfigMap changes.
Environment-Specific Configuration
Managing configuration across environments is where things get complex. I’ve learned to use consistent naming patterns and separate ConfigMaps for each environment.
```yaml
# Development environment
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-dev
  namespace: development
data:
  database.host: "postgres-dev.internal"
  log.level: "debug"
  feature.new_ui: "true"
---
# Production environment
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-prod
  namespace: production
data:
  database.host: "postgres-prod.internal"
  log.level: "warn"
  feature.new_ui: "false"
```
I use deployment templates that reference the appropriate ConfigMap for each environment. This ensures consistency while allowing environment-specific overrides.
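Kustomize, built into kubectl as `kubectl apply -k`, is one common way to implement this: a shared base holds the Deployment, and each overlay generates that environment's ConfigMap. A minimal sketch of a production overlay, assuming a `base/` directory alongside the overlays:

```yaml
# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: production
resources:
- ../../base
configMapGenerator:
- name: app-config
  literals:
  - database.host=postgres-prod.internal
  - log.level=warn
  - feature.new_ui=false
```

A useful side effect: the generator appends a content hash to the ConfigMap's name and rewrites references to it, so a configuration change also rolls the pods that consume it.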
Configuration Validation
Testing configuration before deployment prevents runtime failures. I create init containers that validate configuration and fail fast if something’s wrong:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-validation
spec:
  initContainers:
  - name: config-validator
    image: busybox
    command: ['sh', '-c']
    args:
    - |
      echo "Testing configuration..."
      # Test environment variables
      if [ -z "$DATABASE_HOST" ]; then
        echo "ERROR: DATABASE_HOST not set"
        exit 1
      fi
      # Test configuration files
      if [ ! -f /config/app.yaml ]; then
        echo "ERROR: app.yaml not found"
        exit 1
      fi
      echo "Configuration validation passed"
    env:
    - name: DATABASE_HOST
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: database.host
    volumeMounts:
    - name: config-volume
      mountPath: /config
  containers:
  - name: app
    image: myapp:latest
    # ... rest of container spec
  volumes:
  - name: config-volume   # backs the /config mount in the validator
    configMap:
      name: app-config
```
This validation catches configuration errors before the main application starts, making debugging much easier.
Common Pitfalls
I’ve made every configuration mistake possible. Here are the ones that hurt the most:
Case sensitivity matters. Kubernetes resource names, ConfigMap keys, and environment variables inside Linux containers are all case-sensitive. I've spent hours debugging why `DATABASE_HOST` worked locally but `database_host` failed in Kubernetes.
Namespace isolation. ConfigMaps and Secrets are namespaced resources. A ConfigMap in the `development` namespace isn't accessible from pods in the `production` namespace. This seems obvious but catches everyone at least once.
Base64 encoding confusion. Secret values must be base64-encoded in YAML manifests, but `kubectl create secret` handles encoding automatically. Mixing these approaches leads to double-encoding errors.
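Kubernetes sidesteps this trap with the `stringData` field, which accepts plain-text values and encodes them for you on write:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: database-credentials
type: Opaque
stringData:
  # plain text here; the API server stores it base64-encoded
  username: postgres
  password: supersecret
```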
Immutable updates. Changing a ConfigMap doesn’t automatically restart pods that use it. You need to restart pods manually or use deployment strategies that trigger restarts when configuration changes.
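The simplest manual fix is to restart the workload's pods; assuming a Deployment named `myapp`:

```bash
# Recreate the pods so they pick up the new configuration
kubectl rollout restart deployment/myapp
```

Helm and Kustomize can automate this by hashing the configuration into the pod template annotations or the ConfigMap's name, so a config change produces a new template and triggers a rolling update.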
Development Workflow
I’ve developed a workflow that makes configuration management less error-prone:
- Keep configuration in version control alongside application code
- Use consistent naming patterns across environments
- Validate configuration before deployment (see the dry-run sketch after this list)
- Test configuration changes in development first
- Monitor applications after configuration updates
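For the validation step, kubectl's dry-run modes are a cheap first gate (the manifest path here is illustrative):

```bash
# Catch YAML and schema mistakes client-side
kubectl apply --dry-run=client -f app-config.yaml

# Validate against the live API server, including admission checks,
# without persisting anything
kubectl apply --dry-run=server -f app-config.yaml
```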
This workflow has saved me from countless production issues and makes configuration changes feel as safe as code deployments.
Configuration management in Kubernetes requires discipline, but the patterns in this guide will help you avoid the mistakes that make it frustrating. The key is treating configuration with the same care you give to application code.
Next, we’ll explore advanced ConfigMap and Secret patterns that make configuration management scalable and maintainable in production environments.