Introduction to Kubernetes
Container orchestration sounds complicated, but the problem it solves is simple: how do you run dozens or hundreds of containers across multiple servers without losing your sanity? Docker works great for single containers, but when you need to manage entire applications with databases, web servers, and background workers, you quickly realize you need something more sophisticated.
Kubernetes (or K8s if you’re feeling fancy) is basically your operations team in software form. Google built it after running containers at massive scale for years, and they open-sourced it because, frankly, everyone was going to need this eventually.
What Problem Does Kubernetes Actually Solve?
Here’s the thing—containers are great until you have more than a few of them. Then you’re stuck with questions like: Which server should this container run on? What happens when a server dies? How do I update my app without downtime? Kubernetes answers all of these automatically.
Think about it this way: without Kubernetes, you’re manually placing containers on servers like you’re playing Tetris. With Kubernetes, you just tell it what you want running, and it figures out the rest. Need five copies of your web app? Done. One of them crashes? Kubernetes starts a new one before you even notice.
# You write this simple config
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
And Kubernetes handles all the complexity behind the scenes—where to run these containers, what to do when they fail, how to route traffic to them. It’s like having a really smart operations person who never sleeps.
How Kubernetes Actually Works
Let’s break down what’s happening under the hood, but without the enterprise architecture diagrams that make your eyes glaze over.
The Control Plane (The Brain)
The control plane is where all the decision-making happens. It usually runs on dedicated servers (historically called master nodes; newer docs just say control plane nodes) and consists of a few key components:
API Server - This is your main interface to Kubernetes. Every kubectl command, every dashboard click, every automated deployment goes through here. Think of it as the receptionist who knows everything about your cluster.
etcd - The cluster’s memory. It’s a database that stores the current state of everything—which pods are running where, what your configurations look like, etc. If etcd goes down, Kubernetes gets amnesia.
Scheduler - The matchmaker. When you want to run a new pod, the scheduler looks at all your servers and decides which one should run it based on resources, constraints, and a bunch of other factors.
Controller Manager - The enforcer. It constantly watches the actual state of your cluster and compares it to what you said you wanted. If something’s off, it fixes it.
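You don't have to take this on faith. On most local clusters (Minikube, kind), the control plane components run as ordinary pods in the kube-system namespace, so once you have a cluster running (we'll set one up shortly) you can list them:

# Control plane components show up as pods on local clusters
kubectl get pods -n kube-system
# Expect names like etcd-minikube, kube-apiserver-minikube,
# kube-scheduler-minikube, kube-controller-manager-minikube
# (exact names vary by distribution)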
The Worker Nodes (The Muscle)
These are the servers where your actual applications run. Each worker node has:
kubelet - The local agent that talks to the control plane and manages containers on this specific node. It’s like a site manager who takes orders from headquarters.
kube-proxy - Handles networking so your pods can talk to each other and the outside world.
Container Runtime - Docker, containerd, or whatever actually runs your containers.
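You can inspect all of this per node. Two read-only commands worth knowing (the node name below is a placeholder):

# Shows the kubelet version and container runtime for each node
kubectl get nodes -o wide
# Capacity, conditions, and the pods currently scheduled on a node
kubectl describe node <node-name>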
The Building Blocks You Need to Know
Pods - The Basic Unit
A pod is the smallest thing you can deploy in Kubernetes. Most of the time, it’s just one container, but sometimes you might have a few containers that need to work closely together (like a web server and a logging sidecar).
Here’s the key thing about pods: they’re ephemeral. They come and go. Don’t get attached to them. If a pod dies, Kubernetes will start a new one with a different IP address and possibly on a different server.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
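A quick way to internalize the "ephemeral" point: delete a bare pod like this one and nothing brings it back. (The filename here is a hypothetical one for the YAML above.)

kubectl apply -f pod.yaml
kubectl delete pod my-app
kubectl get pods   # my-app is gone for good; no controller recreates it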
Deployments - The Reliable Way to Run Things
You almost never create pods directly. Instead, you create a Deployment, which manages pods for you. Want three copies of your app? A Deployment will make sure you always have three running, even if servers crash.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
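Apply it and poke at it. Delete a pod by hand and the Deployment replaces it within seconds; scaling is a one-liner. (The filename is a placeholder for the YAML above.)

kubectl apply -f deployment.yaml
kubectl get pods                                      # three pods with generated names
kubectl scale deployment web-deployment --replicas=5  # now five
kubectl delete pod <one-of-the-pod-names>             # watch a replacement appear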
Services - How Things Talk to Each Other
Since pods come and go with different IP addresses, you need a stable way for them to find each other. That’s what Services do—they provide a consistent endpoint that routes traffic to healthy pods.
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
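One caveat worth knowing: on a local cluster, a LoadBalancer Service usually sits in "pending" because there's no cloud provider around to hand out an external IP. Minikube has a workaround:

kubectl get svc web-service   # EXTERNAL-IP may show <pending> locally
minikube tunnel               # run in a separate terminal; gives LoadBalancers an IP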
Getting Kubernetes Running Locally
Alright, enough theory. Let’s get you a Kubernetes cluster to play with. You’ve got a few options, and honestly, they all work fine—it’s more about what you’re comfortable with.
Option 1: Minikube (My Personal Favorite for Learning)
Minikube is probably the easiest way to get started. It creates a single-node cluster on your laptop, which is perfect for learning and testing.
# On macOS
brew install minikube
# On Linux
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
# Start your cluster
minikube start
# Check if it's working
kubectl get nodes
The nice thing about Minikube is it comes with a web dashboard that makes it easy to see what’s happening:
minikube dashboard
Option 2: Docker Desktop (If You’re Already Using It)
If you’ve got Docker Desktop installed, you can just enable Kubernetes in the settings. It’s dead simple—just check a box and restart Docker Desktop. Then you can verify it’s working:
kubectl cluster-info
kubectl get nodes
Option 3: Kind (Kubernetes in Docker)
Kind is great if you want to test multi-node clusters or if you’re doing CI/CD stuff. It runs Kubernetes inside Docker containers.
# Install it
brew install kind # macOS
# Create a cluster
kind create cluster --name learning-cluster
# Use it
kubectl cluster-info --context kind-learning-cluster
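Multi-node is where Kind shines. A minimal cluster config might look like this (the filename is a suggestion; the Cluster API shown is Kind's own):

# Save as multi-node.yaml, then: kind create cluster --config multi-node.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker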
Installing kubectl (Your New Best Friend)
kubectl (pronounced “kube-control” or “kube-cuttle”—I’ve heard both) is how you talk to Kubernetes. Think of it as your remote control for the cluster.
# On macOS (easiest way)
brew install kubectl
# On Linux
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# Check if it's working
kubectl version --client
Let’s Actually Use This Thing
Now that you’ve got Kubernetes running, let’s do something with it. I’m going to walk you through the commands I use every day.
The Commands You’ll Use Most
# See what's in your cluster
kubectl get nodes
kubectl get pods
kubectl get services
# Get more details about something
kubectl describe pod some-pod-name
# See what's happening (this one's a lifesaver)
kubectl get events --sort-by=.metadata.creationTimestamp
# Follow logs in real-time
kubectl logs -f pod-name
The get command is probably what you'll use most. It shows you what's running, and you can add -o wide to get more details or -w to watch things change in real-time.
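For instance, both of these are safe to run anytime:

kubectl get pods -o wide   # adds node name, pod IP, and readiness detail
kubectl get pods -w        # streams updates until you Ctrl+C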
Your First Real Application
Let’s skip the “hello world” stuff and build something you might actually deploy. We’ll create a simple web app with a database—nothing fancy, but it’ll show you how the pieces fit together.
Step 1: Deploy a Database
First, let’s get MySQL running. In the real world, you’d probably use a managed database, but this is good for learning:
# Save this as mysql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        env:
        # Plaintext credentials are fine for a local demo;
        # real deployments should reference a Secret instead
        - name: MYSQL_ROOT_PASSWORD
          value: "password123"
        - name: MYSQL_DATABASE
          value: "myapp"
        ports:
        - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
  type: ClusterIP
Deploy it:
kubectl apply -f mysql.yaml
kubectl get pods -w # Watch it start up
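Once the pod reports Running, it's worth confirming MySQL actually finished initializing before moving on (the deploy/ shorthand picks a pod from that Deployment):

kubectl logs deploy/mysql --tail=20   # look for "ready for connections"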
Step 2: Deploy the Web App
Now let’s add a WordPress frontend that connects to our MySQL database:
# Save this as wordpress.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:latest
        env:
        - name: WORDPRESS_DB_HOST
          value: "mysql-service:3306"
        # Connecting as root keeps the demo simple; don't do this in production
        - name: WORDPRESS_DB_USER
          value: "root"
        - name: WORDPRESS_DB_NAME
          value: "myapp"
        - name: WORDPRESS_DB_PASSWORD
          value: "password123"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress-service
spec:
  selector:
    app: wordpress
  ports:
  - port: 80
    nodePort: 30080
  type: NodePort
Deploy and access it:
kubectl apply -f wordpress.yaml
# If you're using Minikube
minikube service wordpress-service --url
# If you're using Docker Desktop, just go to http://localhost:30080
What Just Happened?
You just deployed a two-tier application! Here’s what’s cool about this:
- The database and web app are separate deployments - they can scale independently
- The Service provides stable networking - even if MySQL pods restart, WordPress can still find them at mysql-service:3306
- Everything is declarative - you described what you wanted, and Kubernetes made it happen
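You can see that stable naming from inside the cluster. Assuming your WordPress pods are running, resolve the service name from one of them:

# Resolves via the cluster's internal DNS (the wordpress image is Debian-based,
# so getent is available)
kubectl exec deploy/wordpress -- getent hosts mysql-service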
When Things Go Wrong (And They Will)
Let me save you some time with the most common issues you’ll hit:
Pod Won’t Start
# This is your debugging best friend
kubectl describe pod pod-name
# Check the logs
kubectl logs pod-name
# If it's restarting, check previous logs
kubectl logs pod-name --previous
Nine times out of ten, it’s either a wrong image name, missing environment variables, or resource constraints.
Can’t Access Your Service
# Check if the service is finding your pods
kubectl get endpoints service-name
# Make sure your labels match
kubectl get pods --show-labels
Usually, this is a label selector mismatch. Your service is looking for app: web but your pods are labeled app: webapp.
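One way to compare the two sides directly (substitute your own service name and label):

# What the Service is selecting on
kubectl get svc web-service -o jsonpath='{.spec.selector}'
# Which pods actually carry that label
kubectl get pods -l app=web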
Cluster Acting Weird
# Check if all the system components are happy
kubectl get pods -n kube-system
# See what's happening
kubectl get events --sort-by=.metadata.creationTimestamp
A Few Things That’ll Save You Headaches
Always Use Labels
Labels are how everything finds everything else in Kubernetes. Be consistent:
metadata:
  labels:
    app: my-app
    version: v1.0
    environment: production
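Consistent labels pay off immediately, because almost every kubectl command accepts a label selector:

kubectl get pods -l app=my-app
kubectl get pods -l environment=production,version=v1.0
kubectl delete pods -l app=my-app   # careful: acts on everything that matches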
Set Resource Limits
If you don’t set limits, one misbehaving pod can take down your whole node:
# This goes under each container in your pod template
resources:
  requests:
    memory: "64Mi"
    cpu: "100m"
  limits:
    memory: "128Mi"
    cpu: "200m"
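If a container exceeds its memory limit, it gets OOMKilled and restarted, which you can spot after the fact in the pod's status:

kubectl describe pod <pod-name>   # look for Last State: Terminated, Reason: OOMKilled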
Use Namespaces to Stay Organized
Don’t dump everything in the default namespace. Create separate spaces for different environments:
kubectl create namespace development
kubectl create namespace staging
kubectl create namespace production
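Then target a namespace per command with -n, or switch your default so you stop typing it:

kubectl apply -f wordpress.yaml -n development
kubectl get pods -n development
# Make a namespace the default for the current context
kubectl config set-context --current --namespace=development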
What’s Next?
You now know enough to be dangerous with Kubernetes! You understand the basic building blocks (pods, services, deployments), you can deploy applications, and you know how to troubleshoot when things go sideways.
In the next part, we’ll dive deeper into workloads—different types of controllers, how to handle stateful applications, and more advanced deployment patterns. But honestly, what you’ve learned here will get you pretty far already.