Network Security & Policies
Network security in Kubernetes is fundamentally different from traditional network security. Instead of relying on IP addresses and subnets, we work with dynamic, ephemeral workloads that can scale up and down rapidly. I’ve learned that the key to effective Kubernetes network security is thinking in terms of labels and selectors rather than static network configurations.
By default, Kubernetes follows an “allow all” network model—any pod can communicate with any other pod across the entire cluster. While this makes development easier, it’s a security nightmare in production. Network policies give us the tools to implement proper network segmentation and follow the principle of least privilege at the network level.
Understanding Network Policy Fundamentals
Network policies work by selecting pods using label selectors and then defining rules for ingress (incoming) and egress (outgoing) traffic. The beauty of this approach is that it’s declarative and dynamic—as pods come and go, the network policies automatically apply to the right workloads based on their labels.
Think of network policies as firewalls that move with your applications. When you deploy a new instance of your web application, it automatically inherits the network policies that apply to pods with its labels. This dynamic behavior is what makes Kubernetes network security so powerful compared to traditional approaches.
Here’s a fundamental network policy that demonstrates the core concepts:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-app-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web-app
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: load-balancer
      ports:
        - protocol: TCP
          port: 8080
```
This policy selects all pods with the label `app: web-app` and allows ingress traffic only from pods labeled `app: load-balancer`, and only on port 8080. Notice that nothing here references IP addresses or subnets; everything is based on labels and application identity.
Implementing Micro-Segmentation
Micro-segmentation is about creating security boundaries around individual applications or services. In my experience, the most effective approach is to start with a default-deny policy and then explicitly allow the traffic you need. This might seem restrictive, but it forces you to understand and document your application’s communication patterns.
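That default-deny starting point is itself just a network policy: one with an empty pod selector (which matches every pod in the namespace) and no allow rules. A minimal sketch for the production namespace used throughout this section:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  # An empty podSelector selects every pod in the namespace
  podSelector: {}
  # Listing both types with no ingress/egress rules denies all traffic
  policyTypes:
    - Ingress
    - Egress
```

With this baseline in place, each tier's policy explicitly reopens only the traffic it needs.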
Let’s implement a comprehensive micro-segmentation strategy for a typical three-tier application:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-isolation
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: backend
      ports:
        - protocol: TCP
          port: 5432
  egress:
    - to: []
      ports:
        - protocol: UDP
          port: 53
```
This policy ensures that database pods can only receive connections from backend services and can only make outbound connections for DNS resolution. The empty `to: []` selector in the egress rule means “allow to anywhere,” but only on the specified ports.
For the backend tier, we need a policy that allows communication with both the database and frontend:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-communication
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - podSelector:
            matchLabels:
              tier: database
      ports:
        - protocol: TCP
          port: 5432
    # Without this rule the backend could not resolve the database's
    # service name: listing Egress in policyTypes blocks all other
    # outbound traffic, including DNS lookups
    - to: []
      ports:
        - protocol: UDP
          port: 53
```
Cross-Namespace Communication Control
One of the most powerful aspects of network policies is controlling communication between namespaces. This is crucial for multi-tenant clusters or when you want to isolate different environments or teams. Namespace selectors allow you to create policies that span namespace boundaries while maintaining security.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cross-namespace-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-gateway
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              environment: staging
          podSelector:
            matchLabels:
              app: test-client
      ports:
        - protocol: TCP
          port: 443
```
This policy allows test clients in the staging namespace to access the API gateway in production. Note that the `namespaceSelector` and `podSelector` sit in the same `from` entry, so both conditions must match; splitting them into two separate entries would OR them instead, admitting any pod in a staging-labeled namespace. This pattern is particularly useful for integration testing or when you have shared services that need to be accessible across namespace boundaries.
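Keep in mind that the ingress policy above only opens the production side. If the staging namespace enforces its own default-deny egress, the test clients also need a matching egress rule. A sketch, assuming the production namespace carries an `environment: production` label (an assumption, not something the cluster sets for you):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-client-egress
  namespace: staging
spec:
  podSelector:
    matchLabels:
      app: test-client
  policyTypes:
    - Egress
  egress:
    # Allow outbound traffic only to the production API gateway
    - to:
        - namespaceSelector:
            matchLabels:
              environment: production
          podSelector:
            matchLabels:
              app: api-gateway
      ports:
        - protocol: TCP
          port: 443
```

Cross-namespace traffic must be allowed on both ends: the sender's egress rules and the receiver's ingress rules each get a veto.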
Advanced Traffic Control Patterns
Beyond basic allow/deny rules, network policies support sophisticated traffic control patterns. One pattern I frequently use is implementing “canary” network policies for gradual rollouts. By combining network policies with deployment strategies, you can control which versions of your application can communicate with each other.
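As an illustrative sketch of the canary pattern (the `version` and `canary` labels here are hypothetical, chosen to match the tier labels used earlier), the policy below lets only opted-in frontend pods reach the new backend version. Remember that policies are additive-allow, so this only constrains traffic if no broader policy also selects the v2 pods:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-v2-canary
  namespace: production
spec:
  # Governs only the canary backend pods
  podSelector:
    matchLabels:
      tier: backend
      version: v2
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: frontend
              canary: "true"
      ports:
        - protocol: TCP
          port: 8080
```

Widening the rollout then becomes a matter of labeling more frontend pods with `canary: "true"`, without touching the deployments themselves.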
External traffic control is another critical aspect. While network policies primarily govern pod-to-pod communication, you also need to consider how external traffic reaches your cluster. This involves configuring ingress controllers, load balancers, and potentially service mesh components to work together with your network policies.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: external-access-control
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web-frontend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
```

This policy assumes the ingress controller’s namespace carries a `name: ingress-nginx` label. On Kubernetes 1.21 and later, every namespace automatically gets an immutable `kubernetes.io/metadata.name` label you can select on instead, which avoids relying on manually applied labels.
Monitoring and Troubleshooting Network Policies
Network policies can be tricky to debug when things go wrong. The key is to understand that network policies are enforced by your CNI plugin, and different plugins have different capabilities and behaviors—some don’t enforce network policies at all, silently accepting them while allowing all traffic. Tools like `kubectl describe networkpolicy` help you understand which policies are active, but they don’t show you the actual traffic flows.
I recommend implementing comprehensive logging and monitoring for network policy violations. Many CNI plugins can log denied connections, which is invaluable for troubleshooting and security monitoring. When a connection is blocked unexpectedly, these logs help you understand whether it’s due to a missing policy rule or an actual security event.
Testing network policies requires a systematic approach. I typically use simple test pods to verify connectivity between different tiers of an application. Tools like `nc` (netcat) or `curl` run from within pods help verify that your policies are working as expected.
The network security foundation we’ve established here creates the framework for protecting communication between your workloads. In the next part, we’ll focus on pod security and workload protection, diving into security contexts, pod security standards, and runtime protection mechanisms that complement these network-level controls.