Workloads and Controllers
Now that you’ve got the basics down, let’s talk about the different ways to run your applications in Kubernetes. You’ve already seen Deployments, but there are several other controllers, each designed for specific use cases. Think of them as different tools in your toolbox—you wouldn’t use a hammer for everything, right?
The beauty of Kubernetes controllers is that they're constantly watching and adjusting. You tell them what you want, and they make sure it stays that way. A pod crashes? The controller starts a replacement. Need to scale up? The controller handles it. It's like having a really attentive assistant who never takes a break.
Deployments - Your Go-To for Most Apps
Deployments are what you'll use 90% of the time. They're perfect for stateless applications: web servers, APIs, microservices, that sort of thing. The key word here is "stateless," meaning your app doesn't care which specific server it runs on and doesn't store important data locally.
Here’s a more realistic deployment than the basic nginx example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-server
  template:
    metadata:
      labels:
        app: api-server
    spec:
      containers:
      - name: api
        image: mycompany/api:v1.2.3
        ports:
        - containerPort: 8080
        env:
        - name: DATABASE_URL
          value: "postgres://db-service:5432/myapp"
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "200m"
What's happening here? We're asking for three copies of our API server, each with explicit resource requests and limits. If one crashes, Kubernetes immediately starts a replacement. If an entire node dies, Kubernetes reschedules its pods onto healthy nodes.
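To try it yourself, save the manifest (as api-server.yaml, say; the filename is up to you) and apply it:

kubectl apply -f api-server.yaml
kubectl get deployment api-server    # READY shows 3/3 once all replicas are up
kubectl get pods -l app=api-server   # the pods the Deployment created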
Rolling Updates - The Magic of Zero Downtime
The coolest thing about Deployments is how they handle updates. Let’s say you want to deploy version 1.2.4 of your API:
kubectl set image deployment/api-server api=mycompany/api:v1.2.4
Kubernetes doesn’t just kill all your old pods and start new ones (that would cause downtime). Instead, it gradually replaces them—starting new pods with the new version, waiting for them to be ready, then terminating old ones. Your users never notice a thing.
You can watch this happen in real-time:
kubectl rollout status deployment/api-server
kubectl get pods -w # Watch pods change
And if something goes wrong with the new version? Easy rollback:
kubectl rollout undo deployment/api-server
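And if you need to go back further than one version, Kubernetes keeps a revision history:

kubectl rollout history deployment/api-server                 # list recorded revisions
kubectl rollout undo deployment/api-server --to-revision=2    # jump to a specific one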
ReplicaSets - The Behind-the-Scenes Worker
You’ll rarely create ReplicaSets directly, but it’s worth understanding them because Deployments use them under the hood. A ReplicaSet ensures a specific number of pod replicas are running at any given time. Think of it as the middle manager between your Deployment and your pods.
When you create a Deployment, it creates a ReplicaSet, which creates the pods. When you update a Deployment, it creates a new ReplicaSet with the new configuration while gradually scaling down the old one.
# See the ReplicaSets created by your Deployment
kubectl get replicasets
kubectl describe replicaset api-server-abc123
DaemonSets - One Pod Per Node
Sometimes you need exactly one pod running on every node in your cluster. Log collectors, monitoring agents, network plugins—these are perfect for DaemonSets. As you add nodes to your cluster, DaemonSets automatically deploy pods to them.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: fluentd
        image: fluentd:latest
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: dockerlogs
          mountPath: /var/lib/docker/containers
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: dockerlogs
        hostPath:
          path: /var/lib/docker/containers
This DaemonSet runs a log collector on every node, mounting the host's log directories so it can read every container's logs. (In production you'd pin the fluentd image to a specific version instead of latest.)
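A quick sanity check: the DaemonSet's desired and ready counts should match your node count.

kubectl get daemonset log-collector            # DESIRED and READY should equal your node count
kubectl get pods -l app=log-collector -o wide  # one pod per node; the NODE column confirms it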
StatefulSets - For Apps That Care About Identity
Most web applications are stateless—they don’t care which server they’re on or what their hostname is. But some applications do care. Databases, for example, often need stable network identities and persistent storage. That’s where StatefulSets come in.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres-headless
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:13
        env:
        - name: POSTGRES_PASSWORD
          value: "secretpassword"
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: postgres-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
StatefulSets give each pod a stable name (postgres-0, postgres-1, postgres-2) and start the pods in order. Each pod gets its own persistent volume, created from the volumeClaimTemplates, that survives restarts and rescheduling. Two caveats about this example: the serviceName field must point at a headless Service that provides those stable DNS names, and in a real cluster you'd pull POSTGRES_PASSWORD from a Secret (covered below) rather than hard-coding it.
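The postgres-headless Service that serviceName points at doesn't create itself. Here's a minimal sketch of it; clusterIP: None is what makes it headless, so instead of one load-balanced IP, DNS gives each pod its own stable address like postgres-0.postgres-headless.

apiVersion: v1
kind: Service
metadata:
  name: postgres-headless
spec:
  clusterIP: None   # headless: no virtual IP, DNS resolves to the individual pods
  selector:
    app: postgres
  ports:
  - port: 5432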
Jobs and CronJobs - One-Time and Scheduled Tasks
Not everything needs to run forever. Sometimes you just need to run a task once, or on a schedule. That’s what Jobs and CronJobs are for.
Jobs - Run Once and Exit
apiVersion: batch/v1
kind: Job
metadata:
  name: database-migration
spec:
  template:
    spec:
      containers:
      - name: migrate
        image: mycompany/migrator:latest
        command: ["python", "migrate.py"]
        env:
        - name: DATABASE_URL
          value: "postgres://db-service:5432/myapp"
      restartPolicy: Never
  backoffLimit: 3
This Job runs a database migration script. If it fails, Kubernetes will retry up to 3 times. Once it succeeds, the Job is complete.
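Assuming you saved the manifest as migration-job.yaml, running it and checking the outcome looks like this:

kubectl apply -f migration-job.yaml
kubectl wait --for=condition=complete job/database-migration --timeout=120s
kubectl logs job/database-migration   # the migration script's output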
CronJobs - Scheduled Tasks
apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup-job
spec:
  schedule: "0 2 * * *"   # Every day at 2 AM
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: backup
            image: mycompany/backup:latest
            command: ["./backup.sh"]
          restartPolicy: OnFailure
This CronJob runs a backup script every night at 2 AM. The schedule format is the same as regular cron.
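You don't have to wait until 2 AM to test it; you can trigger a run manually:

kubectl get cronjobs                                          # shows the schedule and last run time
kubectl create job --from=cronjob/backup-job manual-backup    # kick off a one-off Job right now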
Services - Making Your Apps Accessible
You’ve seen Services before, but let’s dive deeper. Services solve the fundamental problem of networking in Kubernetes: pods come and go with different IP addresses, but your applications need stable endpoints to communicate.
ClusterIP - Internal Communication
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api-server
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP
This is the default service type. It creates a stable IP address that other pods in the cluster can use to reach your API servers. The service automatically load-balances between all healthy pods.
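A quick way to see it in action: start a throwaway pod and hit the Service by name (this assumes everything is in the default namespace):

kubectl run test --rm -it --image=busybox --restart=Never -- wget -qO- http://api-service
# the full DNS name also works: api-service.default.svc.cluster.local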
NodePort - External Access (Development)
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
  type: NodePort
NodePort opens a specific port (from the 30000-32767 range by default) on every node in your cluster. It's great for development, but not ideal for production because you have to manage port numbers and node IPs yourself.
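To reach it, grab any node's IP and combine it with the nodePort (the <node-ip> placeholder is whatever your cluster reports):

kubectl get nodes -o wide     # node IPs are in the INTERNAL-IP / EXTERNAL-IP columns
curl http://<node-ip>:30080   # any node works; kube-proxy forwards to a healthy pod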
LoadBalancer - External Access (Production)
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web-app
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer
If you’re running on a cloud provider (AWS, GCP, Azure), this creates an actual load balancer that routes traffic to your pods. It’s the cleanest way to expose services to the internet.
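The cloud provider takes a minute or two to provision the load balancer, and you can watch the address appear:

kubectl get service web-service --watch   # EXTERNAL-IP changes from <pending> to a real address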
ConfigMaps and Secrets - Configuration Management
Hard-coding configuration in your containers is a bad idea. What if you need different database URLs for development and production? ConfigMaps and Secrets let you inject configuration at runtime.
ConfigMaps - Non-Sensitive Configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database_host: "postgres-service"
  database_port: "5432"
  log_level: "info"
  feature_flags: |
    {
      "new_ui": true,
      "beta_feature": false
    }
Use it in your deployment:
spec:
  containers:
  - name: app
    image: myapp:latest
    env:
    - name: DATABASE_HOST
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: database_host
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: log_level
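If you'd rather not wire up keys one at a time, envFrom pulls in every key from the ConfigMap as an environment variable (the keys become the variable names as-is, so name them accordingly):

spec:
  containers:
  - name: app
    image: myapp:latest
    envFrom:
    - configMapRef:
        name: app-config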
Secrets - Sensitive Data
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  database_password: cGFzc3dvcmQxMjM=   # base64 encoded
  api_key: YWJjZGVmZ2hpams=
Use Secrets the same way as ConfigMaps, swapping secretKeyRef for configMapKeyRef. One caveat: by default the values are only base64-encoded, not encrypted. The practical benefits are that access to Secrets can be restricted with RBAC, clusters can be configured to encrypt them at rest, and their values don't appear in plain text in everyday kubectl output.
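Referencing a Secret from a pod looks just like the ConfigMap example, with secretKeyRef swapped in:

env:
- name: DATABASE_PASSWORD
  valueFrom:
    secretKeyRef:
      name: app-secrets
      key: database_password

In practice you'll usually create Secrets from the command line instead of committing base64 strings to a YAML file; kubectl handles the encoding for you:

kubectl create secret generic app-secrets \
  --from-literal=database_password=password123 \
  --from-literal=api_key=abcdefghijk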
Putting It All Together
Here’s how these pieces typically work together in a real application:
- Deployment manages your application pods
- Service provides stable networking for the deployment
- ConfigMap holds non-sensitive configuration
- Secret holds passwords and API keys
- Job runs database migrations during deployment
- CronJob handles periodic maintenance tasks
The beauty is that each piece has a single responsibility, but they work together seamlessly. You can update configuration without redeploying your app, scale your deployment independently of your database, and run maintenance tasks without affecting your main application.
In the next part, we’ll look at practical examples of deploying real applications using these building blocks, including how to handle persistent data and more complex networking scenarios.