Introduction and Setup

Kubernetes networking has a reputation for being complex, and honestly, that reputation is well-deserved. The challenge isn’t that the concepts are inherently difficult—it’s that they’re completely different from traditional networking. If you’re coming from a world of VLANs, subnets, and static IP addresses, Kubernetes networking requires a fundamental shift in thinking.

The good news is that once you understand the core principles, Kubernetes networking is actually quite elegant. It’s dynamic, software-defined, and surprisingly simple—once you stop fighting it and start working with its design philosophy.

The Kubernetes Networking Model

Traditional networking thinks in terms of physical locations and fixed addresses. Kubernetes thinks in terms of labels, services, and policies. This shift in mindset is crucial because it affects every networking decision you’ll make.

Every pod gets its own IP address, but here’s the catch—you should never care what that IP is. Pods are ephemeral; they come and go with different IPs every time. The moment you start hardcoding pod IPs, you’ve missed the point entirely.

Instead, Kubernetes uses Services as stable network endpoints. Think of Services as phone numbers that always reach the right person, even if that person moves apartments. The Service handles the routing; you just dial the number.

# A simple service that demonstrates the concept
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080

This Service creates a stable endpoint called web-service that routes traffic to any pod labeled app: web. The pods can restart, move to different nodes, or scale up and down—the Service endpoint remains constant.
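For completeness, here is a minimal sketch of a Deployment that would back this Service (the name, replica count, and the myapp:latest image are illustrative assumptions, not part of the example above):

```yaml
# Pods created from this template carry the app: web label,
# so web-service will route traffic to them on port 8080.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: myapp:latest
        ports:
        - containerPort: 8080
```

After applying both manifests, kubectl get endpoints web-service should list one address per running pod; if it comes back empty, the Service selector and the pod labels don't match.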

Understanding Pod-to-Pod Communication

The first networking principle in Kubernetes is that any pod can talk to any other pod without NAT (Network Address Translation). This sounds scary from a security perspective, and it should—by default, your cluster is one big flat network.

When I first learned this, I panicked. “You mean my database pod can be reached from anywhere in the cluster?” Yes, exactly. This is why network policies exist, but we’ll get to those later.

This flat network model makes development easier but requires you to think about security from day one. The good news is that Kubernetes gives you the tools to lock things down properly.
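As a small preview of those tools, here is a minimal sketch of a "default deny" policy. The name is illustrative, and the empty podSelector matches every pod in whatever namespace the policy is applied to:

```yaml
# Blocks all incoming traffic to every pod in this namespace
# until more specific allow rules are added.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```

Keep in mind that a policy like this only takes effect if your CNI plugin actually enforces network policies, which is exactly why the CNI discussion below matters.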

DNS and Service Discovery

Kubernetes runs its own DNS server inside the cluster, and it’s one of the most elegant parts of the system. Every Service automatically gets a DNS name following a predictable pattern: service-name.namespace.svc.cluster.local.
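The naming pattern is mechanical enough to sketch as a tiny shell helper (svc_fqdn is a hypothetical function for illustration; it only assembles names, it doesn't resolve anything):

```shell
# Build the fully qualified DNS name for a Service,
# given the service name and its namespace.
svc_fqdn() {
  printf '%s.%s.svc.cluster.local\n' "$1" "$2"
}

svc_fqdn web-service default       # web-service.default.svc.cluster.local
svc_fqdn postgres-service data     # postgres-service.data.svc.cluster.local
```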

In practice, you rarely need the full DNS name. If you’re in the same namespace, just use the service name:

# Your app can connect to the database like this
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: app
        image: myapp:latest
        env:
        - name: DATABASE_URL
          value: "postgres://user:pass@postgres-service:5432/mydb"

Notice how we’re using postgres-service as the hostname. Kubernetes DNS resolves this to the actual pod IPs behind the service. It’s like having a phone book that updates itself automatically.

The Container Network Interface (CNI)

Here’s where things get interesting. Kubernetes doesn’t actually implement networking—it delegates that to CNI plugins. Different CNI plugins have different capabilities, and choosing the right one affects what networking features you can use.

Popular CNI plugins include:

  • Flannel: Simple and reliable, good for basic setups
  • Calico: Advanced features like network policies and BGP routing
  • Cilium: eBPF-based with advanced security and observability
  • Weave: Easy setup with built-in encryption

The CNI plugin you choose determines whether you can use network policies, what kind of load balancing you get, and how traffic flows between nodes. Most managed Kubernetes services (EKS, GKE, AKS) choose this for you, but it’s worth understanding the implications.

Service Types and External Access

Services come in different types, each solving different networking challenges:

ClusterIP is the default—it creates an internal-only endpoint. Perfect for backend services that don’t need external access.
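Spelled out explicitly, a ClusterIP Service looks like this (the backend name and Postgres port are illustrative; omitting type entirely produces the same result):

```yaml
# Internal-only Service: reachable from inside the cluster,
# invisible from outside it.
apiVersion: v1
kind: Service
metadata:
  name: backend-internal
spec:
  type: ClusterIP
  selector:
    app: backend
  ports:
  - port: 5432
    targetPort: 5432
```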

NodePort opens a port on every node in your cluster (chosen from the 30000–32767 range by default). It’s simple but not very elegant for production use:

apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080

LoadBalancer is what you want for production external access. If you’re on a cloud provider, this creates an actual load balancer:

apiVersion: v1
kind: Service
metadata:
  name: web-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080

Setting Up Your Networking Environment

For this guide, you’ll need a Kubernetes cluster with a CNI that supports network policies. If you’re using a managed service:

  • EKS: Use the AWS VPC CNI with Calico for network policies
  • GKE: Enable network policy support when creating the cluster
  • AKS: Use Azure CNI with network policies enabled

For local development, I recommend using kind (Kubernetes in Docker) with Calico:

# Create a kind cluster with Calico
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
  podSubnet: "10.244.0.0/16"
nodes:
- role: control-plane
- role: worker
- role: worker
EOF

# Install Calico
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml

Common Networking Gotchas

Let me save you some debugging time with the issues I see most often:

DNS resolution fails: Usually means CoreDNS isn’t running properly. Check kubectl get pods -n kube-system and look for coredns pods.

Services can’t reach pods: Check that your service selector matches your pod labels exactly. Case matters, and typos are common. An empty kubectl get endpoints output for the Service is the telltale sign of a selector mismatch.

External traffic can’t reach services: Make sure you’re using the right service type and that your cloud provider supports LoadBalancer services.

Pods can’t reach external services: Could be DNS configuration or network policies blocking egress traffic.

Testing Your Network Setup

Before diving into complex networking scenarios, let’s verify everything works:

# Create a test pod for network debugging
kubectl run netshoot --image=nicolaka/netshoot -it --rm -- /bin/bash

# Inside the pod, test DNS resolution
nslookup kubernetes.default.svc.cluster.local

# Test connectivity to a service
curl http://web-service

# Check what DNS servers are configured
cat /etc/resolv.conf

The netshoot image is invaluable for network debugging—it includes tools like curl, dig, nslookup, and tcpdump.

What’s Coming Next

Understanding these networking fundamentals sets you up for the more advanced topics we’ll cover:

  • Service discovery patterns and DNS configuration
  • Ingress controllers and HTTP routing
  • Network policies for security and segmentation
  • Advanced networking with service mesh
  • Troubleshooting network issues in production

The key insight to remember: Kubernetes networking is about abstractions, not infrastructure. Stop thinking about IP addresses and start thinking about services, labels, and policies. Once that clicks, everything else becomes much clearer.

In the next part, we’ll dive deep into service discovery and DNS, exploring how applications find and communicate with each other in a dynamic container environment.