Networking and Real-World Applications
Let’s get practical. You know the basics, you understand the building blocks, now let’s build something real. I’m going to walk you through deploying a complete web application—the kind you might actually run in production.
We’ll build a typical three-tier app: a React frontend, a Node.js API, and a PostgreSQL database. Along the way, we’ll tackle the networking challenges that trip up most people when they’re starting with Kubernetes.
Understanding Kubernetes Networking (The Simple Version)
Before we dive into the deployment, let’s clear up networking. Kubernetes networking seems complicated, but the basic idea is simple:
- Every pod gets its own IP address (like having its own computer on the network)
- Pods can talk to each other directly using these IPs
- But pod IPs change when pods restart, so you use Services for stable addresses
- Services act like phone books—they keep track of which pods are healthy and route traffic to them
Think of it this way: pods are like people who might move apartments, Services are like the post office that always knows how to reach them.
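You can check the first point yourself on any running cluster: the pod listing can show each pod's IP, and you'll see it change whenever a pod is recreated:

kubectl get pods -o wide   # the IP column shows each pod's own address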
Building Our Application Stack
Step 1: The Database Layer
Let’s start with PostgreSQL. In production, you’d probably use a managed database, but this shows you how persistent storage works in Kubernetes.
# postgres.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres-service  # must match the Service defined below
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:13
        env:
        - name: POSTGRES_DB
          value: "myapp"
        - name: POSTGRES_USER
          value: "appuser"
        - name: POSTGRES_PASSWORD
          value: "secretpassword" # Don't do this in production!
        # Point PGDATA at a subdirectory so initdb doesn't choke on the
        # lost+found directory some volumes place at the mount root
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: postgres-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  selector:
    app: postgres
  ports:
  - port: 5432
  type: ClusterIP
The key things here: we’re using a StatefulSet because databases need a stable identity and stable storage, and the volumeClaimTemplates section creates a PersistentVolumeClaim so our data survives pod restarts.
Deploy it:
kubectl apply -f postgres.yaml
kubectl get pods -w # Watch it start up
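While it starts, it’s worth confirming that volumeClaimTemplates did its job. StatefulSets name each claim <template-name>-<pod-name>, so ours should show up as postgres-storage-postgres-0:

kubectl get pvc                      # postgres-storage-postgres-0 should be Bound
kubectl get statefulset postgres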
Step 2: The API Layer
Now let’s add our Node.js API. This is a stateless service, so we’ll use a Deployment:
# api.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        # In a real project this would be your own image (with server.js
        # baked in); the stock node image contains no application code
        image: node:16-alpine
        command: ["node", "server.js"]
        ports:
        - containerPort: 3000
        env:
        - name: DATABASE_URL
          value: "postgresql://appuser:secretpassword@postgres-service:5432/myapp"
        - name: PORT
          value: "3000"
        resources:
          requests:
            memory: "128Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "200m"
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 3000
  type: ClusterIP
Notice how the API connects to the database using the service name postgres-service. Kubernetes runs a built-in DNS server (CoreDNS) that resolves Service names to their ClusterIPs; within the same namespace the short name is enough, and the fully qualified form is postgres-service.default.svc.cluster.local.
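You can watch this resolution happen from a throwaway pod (this assumes everything lives in the default namespace):

kubectl run dns-test --image=busybox --rm -it --restart=Never -- \
  nslookup postgres-service.default.svc.cluster.local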
Step 3: The Frontend
Finally, let’s add a React frontend served by nginx:
# frontend.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        # In a real project you'd bake the compiled React assets into this
        # image; stock nginx only serves its default page
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default.conf: |
    server {
      listen 80;
      location / {
        root /usr/share/nginx/html;
        try_files $uri $uri/ /index.html;
      }
      location /api/ {
        # the trailing slash on proxy_pass strips the /api/ prefix
        proxy_pass http://api-service/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
      }
    }
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend
  ports:
  - port: 80
  type: LoadBalancer
The ConfigMap contains nginx configuration that serves static files and proxies API requests to our backend service.
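A quick way to sanity-check the proxying without exposing anything publicly is to port-forward the frontend Service and curl through it (this assumes your API implements the /health endpoint we add later in this chapter):

kubectl port-forward service/frontend-service 8080:80
# In another terminal: nginx strips /api/ and forwards the request to the API
curl -i http://localhost:8080/api/health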
Making It Accessible from the Internet
Right now, our app is only accessible from inside the cluster. To expose it to the internet, we have a few options:
Option 1: LoadBalancer (Cloud Providers)
If you’re on AWS, GCP, or Azure, the LoadBalancer service type will create an actual load balancer:
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend
  ports:
  - port: 80
  type: LoadBalancer
Option 2: Ingress (More Flexible)
For more control over routing, use an Ingress. First, you need an ingress controller (like ingress-nginx):
# Install nginx ingress controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml
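Give the controller a minute to start, then confirm it’s running before creating any Ingress resources (the manifest above installs everything into the ingress-nginx namespace):

kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx    # note the controller's external IP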
Then create an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
      # note: the API receives paths with the /api prefix intact; either
      # handle that prefix in the app, or strip it with ingress-nginx's
      # capture-group rewrite-target annotation
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
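Until DNS for myapp.example.com actually points at the controller, you can test the routing by sending the Host header manually; replace <EXTERNAL-IP> with the address kubectl reports:

kubectl get ingress app-ingress              # shows the assigned address
curl -H "Host: myapp.example.com" http://<EXTERNAL-IP>/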
Option 3: NodePort (Development)
For local development, NodePort is the simplest:
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    nodePort: 30080  # must be in the cluster's node port range (default 30000-32767)
  type: NodePort
Then access your app at http://localhost:30080 (Docker Desktop) or run minikube service frontend-service --url to get the URL (Minikube).
Handling Configuration Properly
Hard-coding database passwords is obviously a bad idea. Let’s fix that with proper Secrets and ConfigMaps (keep in mind that base64 is encoding, not encryption; anyone who can read the Secret can decode it):
# secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  database-password: c2VjcmV0cGFzc3dvcmQ= # base64 encoded "secretpassword"
  jwt-secret: bXlqd3RzZWNyZXQ= # base64 encoded "myjwtsecret"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database-host: "postgres-service"
  database-port: "5432"
  database-name: "myapp"
  database-user: "appuser"
  api-port: "3000"
Update your API deployment to use these:
env:
- name: DATABASE_HOST
  valueFrom:
    configMapKeyRef:
      name: app-config
      key: database-host
- name: DATABASE_PASSWORD
  valueFrom:
    secretKeyRef:
      name: app-secrets
      key: database-password
- name: JWT_SECRET
  valueFrom:
    secretKeyRef:
      name: app-secrets
      key: jwt-secret
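If you’d rather not commit base64-encoded blobs to version control, you can create the same Secret imperatively and let kubectl handle the encoding:

kubectl create secret generic app-secrets \
  --from-literal=database-password=secretpassword \
  --from-literal=jwt-secret=myjwtsecret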
Adding Health Checks
Production applications need health checks so Kubernetes knows when pods are ready to receive traffic and when they need to be restarted:
spec:
  containers:
  - name: api
    image: mycompany/api:latest
    ports:
    - containerPort: 3000
    livenessProbe:
      httpGet:
        path: /health
        port: 3000
      initialDelaySeconds: 30
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready
        port: 3000
      initialDelaySeconds: 5
      periodSeconds: 5
Your application needs to implement these endpoints:
- /health: returns 200 if the app is running (liveness)
- /ready: returns 200 if the app is ready to serve traffic (readiness)
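You can exercise the probes by hand the same way the kubelet does, assuming your API implements them: port-forward to a pod and curl the paths.

kubectl port-forward deployment/api 3000:3000
# In another terminal:
curl -i http://localhost:3000/health
curl -i http://localhost:3000/ready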
Scaling and Resource Management
As your app grows, you’ll need to scale different components independently:
# Scale the API to handle more traffic
kubectl scale deployment api --replicas=5
# Scale the frontend
kubectl scale deployment frontend --replicas=3
# Check resource usage
kubectl top pods
kubectl top nodes
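Manual scaling is fine for experiments, but you can also hand the decision to a HorizontalPodAutoscaler. This sketch assumes the metrics server is installed (the same component that powers kubectl top):

# Scale the API between 3 and 10 replicas, targeting 70% average CPU
kubectl autoscale deployment api --cpu-percent=70 --min=3 --max=10
kubectl get hpa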
Set resource requests and limits on every container: requests tell the scheduler how much to reserve when placing a pod, and limits cap what the container may actually consume:
resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "256Mi"
    cpu: "200m"
Monitoring and Debugging
When things go wrong (and they will), here’s your debugging toolkit:
# Check pod status
kubectl get pods
kubectl describe pod api-xyz123
# Check logs
kubectl logs api-xyz123
kubectl logs api-xyz123 -f # Follow logs
# Check services and endpoints
kubectl get services
kubectl get endpoints api-service
# Test connectivity from inside the cluster
kubectl run debug --image=busybox -it --rm -- /bin/sh
# Inside the pod: wget -qO- http://api-service/health
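When a pod won’t start at all, its logs are often empty; cluster events usually reveal the cause (image pull errors, unschedulable pods, failed volume mounts):

kubectl get events --sort-by=.metadata.creationTimestamp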
Putting It All Together
Deploy the complete application:
# Deploy in order: config first, then the stateful layer, then the stateless tiers
kubectl apply -f secrets.yaml
kubectl apply -f postgres.yaml
kubectl apply -f api.yaml
kubectl apply -f frontend.yaml
# Watch everything come up
kubectl get pods -w
# Check that services are working
kubectl get services
kubectl get ingress # if using ingress
This gives you a realistic, production-style application architecture with proper separation of concerns, configuration management, and networking. Each component can be scaled, updated, and monitored independently.
In the next part, we’ll look at more advanced patterns like persistent volumes, advanced networking with network policies, and how to handle more complex deployment scenarios.