Core Concepts and Fundamentals
After managing configuration for dozens of Kubernetes applications, I’ve learned that the basic ConfigMap and Secret examples only get you so far. Real applications need structured configuration, templating, and dynamic updates. The patterns I’ll share here come from years of debugging configuration issues at 3 AM.
The biggest lesson I’ve learned: configuration complexity grows exponentially with the number of services and environments. What works for a single application breaks down when you’re managing configuration for 50 microservices across multiple environments.
Structured Configuration Patterns
Simple key-value pairs work for basic settings, but complex applications need hierarchical configuration. I structure ConfigMaps to mirror how applications actually consume configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-structured
  namespace: production
data:
  database.yaml: |
    primary:
      host: postgres-primary.example.com
      port: 5432
      pool_size: 20
      timeout: 30s
    replica:
      host: postgres-replica.example.com
      port: 5432
      pool_size: 10
      timeout: 30s
  logging.yaml: |
    level: info
    format: json
    outputs:
      - console
      - file:/var/log/app.log
  features.yaml: |
    new_ui: true
    beta_features: false
    rate_limiting: 1000
This approach lets me manage related configuration together while keeping it organized. Applications can mount these as files and parse them with their preferred configuration libraries.
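As a concrete example, here's a minimal Deployment sketch that mounts app-config-structured so the application sees database.yaml, logging.yaml, and features.yaml as ordinary files under /etc/config (the deployment name and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: production
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: myapp:latest
          volumeMounts:
            # Each key in the ConfigMap appears as a file, e.g. /etc/config/database.yaml
            - name: config
              mountPath: /etc/config
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: app-config-structured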
Configuration Templating
Hard-coding values in ConfigMaps becomes unmaintainable across environments. I use templating to generate environment-specific configuration from shared templates.
Here’s a template I use for database and application configuration; the placeholders use the ${VAR} syntax that envsubst substitutes:
# config-template.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-${Environment}
  namespace: ${Namespace}
data:
  database.yaml: |
    host: postgres-${Environment}.internal
    port: 5432
    database: myapp_${Environment}
    ssl: ${DatabaseSSL}
    pool_size: ${DatabasePoolSize}
  app.yaml: |
    environment: ${Environment}
    debug: ${DebugMode}
    log_level: ${LogLevel}
I process this template with different values for each environment:
# Development values
export Environment=dev
export Namespace=development
export DatabaseSSL=false
export DatabasePoolSize=5
export DebugMode=true
export LogLevel=debug
# Generate development ConfigMap
envsubst < config-template.yaml > config-dev.yaml
This templating approach eliminates configuration drift between environments while allowing necessary differences.
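To make the generation step repeatable, the same envsubst call can be wrapped in a small script; this is a sketch that assumes one dev.env, staging.env, and prod.env file exporting the template variables shown above:

#!/bin/bash
# render-configs.sh (sketch): render one ConfigMap manifest per environment
set -euo pipefail

for env_file in dev.env staging.env prod.env; do
  env_name="${env_file%.env}"
  # Export the environment-specific values, then substitute them into the template
  set -a
  source "$env_file"
  set +a
  envsubst < config-template.yaml > "config-${env_name}.yaml"
  echo "Rendered config-${env_name}.yaml"
done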
Advanced Secret Management
Basic Secrets work for simple use cases, but production applications need more sophisticated secret management. I’ve learned to integrate external secret management systems with Kubernetes.
External Secrets Operator integration:
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-backend
  namespace: production
spec:
  provider:
    vault:
      server: "https://vault.company.com"
      path: "secret"
      version: "v2"
      auth:
        kubernetes:
          mountPath: "kubernetes"
          role: "myapp-role"
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-credentials
  namespace: production
spec:
  refreshInterval: 15s
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    name: database-secret
    creationPolicy: Owner
  data:
    - secretKey: username
      remoteRef:
        key: database/production
        property: username
    - secretKey: password
      remoteRef:
        key: database/production
        property: password
This setup automatically syncs secrets from Vault to Kubernetes, eliminating the need to manage secret values in the cluster by hand.
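Once the operator has created database-secret, the application consumes it like any other Secret. A sketch of the relevant container section of the Deployment, wiring the synced keys into environment variables (the variable names are illustrative):

# Fragment of the application's Deployment pod spec
containers:
  - name: app
    image: myapp:latest
    env:
      # Both keys are kept in sync with Vault by the ExternalSecret above
      - name: DATABASE_USERNAME
        valueFrom:
          secretKeyRef:
            name: database-secret
            key: username
      - name: DATABASE_PASSWORD
        valueFrom:
          secretKeyRef:
            name: database-secret
            key: password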
Dynamic Configuration Updates
One of the most frustrating aspects of Kubernetes configuration is that changing a ConfigMap doesn’t automatically update running pods. I’ve developed patterns to handle dynamic updates gracefully.
For applications that can reload configuration, I use a sidecar container that watches for changes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-with-config-reload
spec:
  selector:
    matchLabels:
      app: app-with-config-reload
  template:
    metadata:
      labels:
        app: app-with-config-reload
    spec:
      containers:
        - name: app
          image: myapp:latest
          volumeMounts:
            - name: config
              mountPath: /etc/config
        - name: config-reloader
          image: jimmidyson/configmap-reload:latest
          args:
            - --volume-dir=/etc/config
            - --webhook-url=http://localhost:8080/reload
          volumeMounts:
            - name: config
              mountPath: /etc/config
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: app-config
The config-reloader sidecar watches for file changes and triggers a reload webhook in the main application. This pattern works well for applications that support graceful configuration reloading.
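To confirm the mechanism works after editing the ConfigMap, I tail the sidecar and watch it detect the change and call the webhook (assuming the Deployment above):

kubectl logs deploy/app-with-config-reload -c config-reloader -f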
For applications that can’t reload configuration, I use deployment annotations to force pod restarts when configuration changes:
# Update ConfigMap and trigger deployment restart
kubectl patch configmap app-config --patch '{"data":{"new.setting":"value"}}'
kubectl patch deployment myapp -p \
'{"spec":{"template":{"metadata":{"annotations":{"configHash":"'$(date +%s)'"}}}}}'
Configuration Validation Patterns
I’ve learned to validate configuration at multiple levels to catch errors early. Here’s a comprehensive validation approach I use:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-validator
data:
  validate.sh: |
    #!/bin/bash
    set -e

    echo "Validating configuration..."

    # Validate required environment variables
    required_vars=("DATABASE_HOST" "DATABASE_PORT" "API_KEY")
    for var in "${required_vars[@]}"; do
      if [ -z "${!var}" ]; then
        echo "ERROR: Required variable $var is not set"
        exit 1
      fi
    done

    # Validate configuration file syntax
    if [ -f /config/app.yaml ]; then
      python -c "import yaml; yaml.safe_load(open('/config/app.yaml'))" || {
        echo "ERROR: Invalid YAML in app.yaml"
        exit 1
      }
    fi

    # Validate database connectivity
    if command -v nc >/dev/null; then
      nc -z "$DATABASE_HOST" "$DATABASE_PORT" || {
        echo "ERROR: Cannot connect to database"
        exit 1
      }
    fi

    echo "Configuration validation passed"
I run this validation in init containers to ensure configuration is correct before starting the main application.
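Wiring that in looks roughly like the pod spec fragment below; the image is an assumption (anything with bash, python, and nc on board works), and the DATABASE_* and API_KEY variables must be injected the same way the main container receives them:

# Fragment of the application's pod template
spec:
  initContainers:
    - name: validate-config
      image: myapp:latest
      command: ["/bin/bash", "/scripts/validate.sh"]
      volumeMounts:
        - name: validator
          mountPath: /scripts
        - name: config
          mountPath: /config
  containers:
    - name: app
      image: myapp:latest
      volumeMounts:
        - name: config
          mountPath: /config
  volumes:
    - name: validator
      configMap:
        name: config-validator
    - name: config
      configMap:
        name: app-config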
Multi-Environment Configuration Strategy
Managing configuration across development, staging, and production environments requires a systematic approach. I use a layered configuration strategy:
Base configuration (shared across all environments):
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-base
data:
  app.yaml: |
    service_name: myapp
    port: 8080
    metrics_port: 9090
    health_check_path: /health
Environment-specific overlays:
# Development overlay
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-dev
data:
  app.yaml: |
    debug: true
    log_level: debug
    database_pool_size: 5
---
# Production overlay
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-prod
data:
  app.yaml: |
    debug: false
    log_level: warn
    database_pool_size: 20
Applications merge base configuration with environment-specific overrides at startup. This approach ensures consistency while allowing necessary environment differences.
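A simple way to make that merge explicit without extra tooling is to mount both ConfigMaps side by side and have the application load the base file first and the overlay second; a sketch of the relevant pod spec fragment (the merge itself is assumed to happen in application code):

# Fragment of the application's pod spec
containers:
  - name: app
    image: myapp:latest
    volumeMounts:
      - name: config-base
        mountPath: /etc/config/base
      - name: config-overlay
        mountPath: /etc/config/overlay
volumes:
  - name: config-base
    configMap:
      name: app-config-base
  - name: config-overlay
    configMap:
      name: app-config-prod   # app-config-dev in development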
Configuration Security Patterns
Security considerations become critical when managing configuration at scale. I follow these patterns to keep configuration secure:
Principle of least privilege: Each application gets only the configuration it needs:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp-sa
  namespace: production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myapp-config-reader
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["app-config", "app-config-prod"]
    verbs: ["get"]   # resourceNames cannot restrict list requests, so grant only get
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["database-credentials"]
    verbs: ["get"]
Configuration encryption: Kubernetes Secrets are only base64-encoded, not encrypted, so sensitive configuration needs real protection. I enable encryption at rest for Secret resources on the API server and keep any secret material that lives in Git encrypted with a tool such as Sealed Secrets or SOPS, decrypting only at deploy time. The Secret that ultimately lands in the cluster still looks ordinary:
apiVersion: v1
kind: Secret
metadata:
  name: encrypted-config
type: Opaque
data:
  config.yaml: <encrypted-data>
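For the at-rest half of that, the API server needs an encryption provider configuration; a minimal sketch (the key value is a placeholder, and the file is passed to kube-apiserver via --encryption-provider-config):

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # aescbc encrypts newly written Secrets; identity keeps existing,
      # still-unencrypted data readable until it is rewritten
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}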
Audit logging: I enable audit logging for configuration changes to track who changed what and when.
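A narrow audit policy that records metadata (who, what, when, but not the payloads) for configuration changes looks roughly like this; it is cluster-level configuration, passed to the API server via --audit-policy-file:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
    verbs: ["create", "update", "patch", "delete"]
    resources:
      - group: ""
        resources: ["configmaps", "secrets"]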
Troubleshooting Configuration Issues
Configuration problems can be subtle and hard to debug. I’ve developed a systematic approach to troubleshooting:
Check configuration mounting:
# Verify ConfigMap exists and has expected data
kubectl get configmap app-config -o yaml
# Check if configuration is properly mounted in pod
kubectl exec -it pod-name -- ls -la /etc/config
kubectl exec -it pod-name -- cat /etc/config/app.yaml
Validate environment variables:
# Check environment variables in running pod
kubectl exec -it pod-name -- env | grep DATABASE
Monitor configuration changes:
# Watch for ConfigMap changes
kubectl get events --field-selector involvedObject.kind=ConfigMap
# Check pod restart history
kubectl describe pod pod-name | grep -A 10 Events
These debugging techniques have saved me countless hours when configuration issues arise in production.
The patterns in this section form the foundation for scalable configuration management. They’ve evolved from real-world experience managing configuration across hundreds of applications and multiple environments.
Next, we’ll explore practical applications of these patterns with real-world examples and complete configuration setups for common application architectures.