Ingress and External Traffic Management

Exposing Kubernetes services to the internet presents an interesting challenge. You could create a LoadBalancer service for each application, but that quickly becomes expensive and unwieldy—imagine managing dozens of load balancers, each with its own IP address and SSL certificate. There has to be a better way.

Ingress controllers solve this problem elegantly by providing HTTP routing, SSL termination, and load balancing all in one place. They act as a single entry point for external traffic, then route requests to the appropriate services based on hostnames, paths, and other HTTP attributes. Like most Kubernetes concepts, Ingress seems simple until you need it to work in production.

Understanding Ingress Controllers

An Ingress controller is essentially a reverse proxy that runs inside your cluster and routes external HTTP/HTTPS traffic to your services. The key insight is that Ingress is just configuration—you need an Ingress controller to actually implement that configuration.

Popular Ingress controllers include:

  • NGINX Ingress Controller: Most common, reliable, lots of features
  • Traefik: Great for microservices, automatic service discovery
  • HAProxy Ingress: High performance, enterprise features
  • Istio Gateway: Part of the Istio service mesh
  • Cloud provider controllers: ALB (AWS), GCE (Google), etc.

The choice matters because different controllers have different capabilities and configuration options.
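
The glue between an Ingress resource and a particular controller is the IngressClass. Here is a minimal sketch of the class that the NGINX controller's standard manifests create (names may differ in your installation):

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  # Each controller watches for classes carrying its controller string;
  # k8s.io/ingress-nginx is the value the NGINX Ingress Controller registers.
  controller: k8s.io/ingress-nginx

An Ingress picks its controller by setting spec.ingressClassName to one of these class names, which is why the examples below all include ingressClassName: nginx.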

Setting Up NGINX Ingress Controller

Let’s start with the most popular option. Installing NGINX Ingress Controller is straightforward:

# Install NGINX Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml

# Wait for it to be ready
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s

This creates a LoadBalancer service that receives all external traffic and routes it based on your Ingress rules.
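
Before pointing DNS at anything, grab the address your cloud provider assigned to that LoadBalancer service (the namespace and service name below match the standard manifest installed above):

# EXTERNAL-IP (or a hostname, on some clouds) is where your DNS records should point
kubectl get service -n ingress-nginx ingress-nginx-controller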

Basic HTTP Routing

The simplest Ingress rule routes all traffic to a single service:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80

This routes all traffic for myapp.example.com to the web-service. The pathType: Prefix means any path starting with / (so, everything) gets routed to this service.
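
You can verify the rule before DNS is in place by pinning the hostname to the controller's external IP with curl. The IP below is a hypothetical value; substitute the one from kubectl get service:

# --resolve maps myapp.example.com to the controller's IP for this request only
curl --resolve myapp.example.com:80:203.0.113.10 http://myapp.example.com/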

Path-Based Routing

More commonly, you’ll want to route different paths to different services:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-based-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /users
        pathType: Prefix
        backend:
          service:
            name: user-service
            port:
              number: 80
      - path: /orders
        pathType: Prefix
        backend:
          service:
            name: order-service
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80

With the NGINX controller, the most specific matching path wins regardless of where it appears in the manifest, but listing specific paths before general ones keeps the intent obvious (and some controllers are stricter about ordering). The / path acts as a catch-all for anything that doesn’t match /users or /orders.
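
pathType also supports Exact, which matches only the literal path and nothing beneath it. As a sketch, adding an entry like this to the paths list above would send exactly /healthz (but not /healthz/deep) to a dedicated service; the service name is illustrative:

      - path: /healthz
        pathType: Exact
        backend:
          service:
            name: health-service
            port:
              number: 80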

SSL/TLS Termination

SSL termination is where Ingress controllers really shine. Instead of managing certificates in each service, you handle them centrally:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-ingress
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - secure.example.com
    secretName: secure-example-tls
  rules:
  - host: secure.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80

The cert-manager.io/cluster-issuer annotation tells cert-manager to automatically obtain and renew SSL certificates from Let’s Encrypt. The certificate gets stored in the secure-example-tls secret.
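
This assumes cert-manager is installed in the cluster and that a ClusterIssuer named letsencrypt-prod exists. A minimal sketch of such an issuer, using the HTTP-01 solver through the nginx Ingress class (the email address is a placeholder):

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com                    # Let's Encrypt expiry notices go here
    privateKeySecretRef:
      name: letsencrypt-prod-account-key      # where the ACME account key is stored
    solvers:
    - http01:
        ingress:
          class: nginx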

Advanced Routing with Annotations

NGINX Ingress Controller supports extensive customization through annotations. Here are some patterns I use regularly:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: advanced-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/limit-rpm: "100"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://myapp.com"
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /api/v1(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: api-v1-service
            port:
              number: 80

The rewrite-target annotation strips /api/v1 from the path before requests reach the backend: the $2 capture group comes from the regex path, which is why use-regex is enabled and pathType is ImplementationSpecific. limit-rpm caps each client IP at 100 requests per minute, enable-cors plus cors-allow-origin adds CORS headers for cross-origin requests, and auth-type with auth-secret and auth-realm enables HTTP basic auth backed by an htpasswd-style Secret.
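
The auth-secret annotation points at a Secret named basic-auth that holds htpasswd-formatted credentials under the key auth, which is the layout the NGINX controller expects. One way to create it, assuming the htpasswd utility is available locally (the username is illustrative):

# Generate an htpasswd file and load it into a Secret in the same namespace as the Ingress
htpasswd -c auth admin
kubectl create secret generic basic-auth --from-file=auth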

Multiple Ingress Controllers

In larger environments, you might run multiple Ingress controllers for different purposes:

# Internal-only ingress for admin interfaces
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: admin-ingress
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,192.168.0.0/16"
spec:
  ingressClassName: nginx-internal
  rules:
  - host: admin.internal.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: admin-service
            port:
              number: 80

This uses a separate nginx-internal Ingress class that might be configured to only accept traffic from internal networks.
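
Running a second controller usually means installing another copy of ingress-nginx that registers its own class. A sketch using the Helm chart, with value names taken from the ingress-nginx chart (you would additionally point its Service at an internal load balancer via your cloud provider's annotations, which are provider-specific and omitted here):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

# Second release of the chart, registering the nginx-internal IngressClass
helm install ingress-nginx-internal ingress-nginx/ingress-nginx \
  --namespace ingress-nginx-internal --create-namespace \
  --set controller.ingressClassResource.name=nginx-internal \
  --set controller.ingressClassResource.controllerValue=k8s.io/ingress-nginx-internal \
  --set controller.ingressClass=nginx-internal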

Load Balancing and Session Affinity

By default, Ingress controllers load balance requests across backend pods. Sometimes you need session affinity:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sticky-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-expires: "86400"
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: stateful-app-service
            port:
              number: 80

This creates a cookie-based session affinity that lasts 24 hours, ensuring users stick to the same backend pod.
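
You can confirm the affinity cookie is being issued by inspecting the response headers, assuming DNS for app.example.com already points at the controller:

# Look for a Set-Cookie: route=... header in the response
curl -si http://app.example.com/ | grep -i set-cookie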

Handling WebSockets and Streaming

WebSockets and long-lived connections need special handling:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: websocket-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  ingressClassName: nginx
  rules:
  - host: chat.example.com
    http:
      paths:
      - path: /ws
        pathType: Prefix
        backend:
          service:
            name: websocket-service
            port:
              number: 8080
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80

The NGINX Ingress Controller proxies WebSocket upgrades (the Upgrade and Connection headers) out of the box; the part you actually have to tune is the proxy timeouts, which default to 60 seconds and would otherwise close idle long-lived connections.
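
A quick way to exercise the WebSocket path end to end is a command-line client such as wscat (an npm tool assumed to be installed locally, not part of the controller):

# Open an interactive WebSocket session through the Ingress
wscat -c ws://chat.example.com/ws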

Cloud Provider Integration

Cloud providers offer their own Ingress controllers that integrate with their load balancers:

# AWS ALB Ingress example
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: aws-alb-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:123456789:certificate/abc123
spec:
  ingressClassName: alb
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80

This creates an AWS Application Load Balancer with SSL termination using an ACM certificate, provided the AWS Load Balancer Controller is running in the cluster to reconcile the alb Ingress class.
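
The controller provisions the ALB asynchronously; once it is ready, the load balancer’s DNS name appears in the Ingress status:

# ADDRESS will contain the ALB's DNS name once provisioning finishes
kubectl get ingress aws-alb-ingress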

Monitoring and Observability

Ingress controllers provide valuable metrics about your external traffic:

# Check NGINX Ingress Controller metrics (served on the controller's metrics port,
# 10254 by default, when metrics are enabled on the controller)
kubectl port-forward -n ingress-nginx deployment/ingress-nginx-controller 10254:10254 &
curl -s http://localhost:10254/metrics | grep nginx_ingress_controller

# View controller logs
kubectl logs -n ingress-nginx deployment/ingress-nginx-controller

# Check ingress status
kubectl describe ingress my-ingress

Most Ingress controllers expose Prometheus metrics that you can scrape for monitoring dashboards.
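
If you run the Prometheus Operator, a ServiceMonitor is the usual way to scrape those metrics. A sketch, assuming the controller’s metrics Service is enabled and carries the standard app.kubernetes.io/name=ingress-nginx label with a port named metrics (adjust to your installation):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  endpoints:
  - port: metrics      # port name on the controller's metrics Service
    interval: 30s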

Troubleshooting Common Issues

When Ingress isn’t working, check these common issues (the commands after this list cover most of them):

  1. DNS not pointing to load balancer: Verify your domain points to the Ingress controller’s external IP
  2. Wrong Ingress class: Make sure ingressClassName matches your controller
  3. Service doesn’t exist: Check that the backend service and endpoints exist
  4. Path matching issues: Test with curl and check controller logs
  5. SSL certificate problems: Verify cert-manager is working and certificates are valid
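
A few commands cover most of those checks; the IP in the curl example is hypothetical, and the certificate name follows the tls secretName used earlier:

# 1 & 2: the controller's external address and the Ingress classes available
kubectl get service -n ingress-nginx ingress-nginx-controller
kubectl get ingressclass

# 3: does the backend service have ready endpoints?
kubectl get endpoints web-service

# 4: bypass DNS and test a specific rule directly
curl -v --resolve myapp.example.com:80:203.0.113.10 http://myapp.example.com/

# 5: cert-manager's view of the certificate (named after the tls secretName)
kubectl describe certificate secure-example-tls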

Security Considerations

Ingress controllers are your cluster’s front door, so security is crucial:

  • Always use HTTPS in production
  • Implement rate limiting to prevent abuse (see the annotation sketch after this list)
  • Use Web Application Firewall (WAF) rules
  • Regularly update your Ingress controller
  • Monitor for suspicious traffic patterns
  • Implement proper authentication and authorization
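
The first two items map directly onto per-Ingress NGINX annotations. A sketch of the metadata you might drop into an Ingress, with illustrative values:

metadata:
  annotations:
    # Redirect all plain-HTTP requests to HTTPS
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    # Cap each client IP at 10 requests per second and 20 open connections
    nginx.ingress.kubernetes.io/limit-rps: "10"
    nginx.ingress.kubernetes.io/limit-connections: "20"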

What’s Coming Next

Ingress gets traffic into your cluster, but what about controlling traffic between services inside your cluster? In the next part, we’ll explore network policies—Kubernetes’ built-in firewall that lets you implement micro-segmentation and zero-trust networking.

The combination of Ingress for external traffic and network policies for internal traffic gives you complete control over how data flows through your applications.