Network Policies and Security
Kubernetes clusters are surprisingly permissive by default. Any pod can communicate with any other pod, which means your frontend application can directly access your production database if it wants to. This flat network model makes development easier, but it’s a security nightmare that needs to be addressed before you go to production.
Network policies are Kubernetes’ answer to network segmentation. They’re like firewalls, but instead of working with IP addresses and ports, they work with labels and selectors. The challenge is that they require a different way of thinking about network security—one that embraces the dynamic, label-driven nature of Kubernetes.
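You can see the flat network for yourself with a throwaway pod. The service name below is a placeholder for anything running in your cluster; by default, the request succeeds from anywhere:

# Any pod can reach any service out of the box
kubectl run test-client --image=busybox -it --rm --restart=Never -- \
  wget -qO- -T 2 http://some-service.some-namespace.svc.cluster.local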
Understanding Network Policy Fundamentals
Network policies are deny-by-default, but only once they exist. This is crucial to understand: if no network policy selects a pod, that pod can communicate freely. But as soon as any network policy selects a pod, that pod can only communicate according to the rules in those policies, and only in the directions (ingress, egress, or both) that the policies declare under policyTypes.
This means you can’t just create a single “allow everything” policy and call it secure. You need to think through your application’s communication patterns and create policies that allow necessary traffic while blocking everything else.
# This policy blocks ALL traffic to selected pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}  # Selects all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
This policy selects all pods in the production namespace and blocks all ingress and egress traffic. It’s the foundation of a zero-trust network model.
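To put it into effect, apply the manifest and confirm traffic is actually blocked. The file, deployment, and service names here are placeholders, and this assumes the image provides wget; with the deny-all policy in place, the request should fail:

kubectl apply -f default-deny-all.yaml
# Exec from any pod in the namespace; the connection should now fail
kubectl exec -n production deploy/frontend -- wget -qO- -T 2 http://backend:8080 \
  || echo "blocked, as expected"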
Implementing Micro-Segmentation
The most effective approach I’ve found is to start with a default-deny policy and then explicitly allow the traffic you need. Let’s build a realistic example with a three-tier application:
# Allow frontend to communicate with backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: frontend
      ports:
        - protocol: TCP
          port: 8080
This policy allows pods labeled tier: frontend to connect to pods labeled tier: backend on port 8080. Notice that we’re selecting the destination pods (backend) and defining who can reach them (frontend).
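A quick way to spot-check the rule (pod and service names are illustrative, this assumes wget is available in the image, and the frontend tier needs its own egress allowance under the default-deny policy): a request from a frontend pod should succeed, while a pod without the tier: frontend label should fail.

# Should succeed: the source pod carries tier=frontend
kubectl exec -n production frontend-pod -- wget -qO- -T 2 http://backend-service:8080
# Should fail: the source pod has no tier=frontend label
kubectl run intruder --image=busybox -n production -it --rm --restart=Never -- \
  wget -qO- -T 2 http://backend-service:8080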
For the database tier, we want even stricter controls:
# Only backend can access database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-database
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: backend
      ports:
        - protocol: TCP
          port: 5432
  egress:
    - to: []  # Allow DNS resolution
      ports:
        - protocol: UDP
          port: 53
The database policy is more restrictive: it only allows ingress from backend pods and only allows egress for DNS resolution. (DNS can also run over TCP on port 53, so add a matching TCP rule if lookups fail in your cluster.)
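You can probe this policy with the netshoot image that also appears in the testing section below. The service name is a placeholder, nc is assumed to be present in the probe image (netshoot ships one), and the labeled probe only connects once a matching egress allowance for the backend tier exists (the egress policy in a later section provides one):

# Should connect: the probe is labeled as backend
kubectl run db-probe --image=nicolaka/netshoot -n production --labels="tier=backend" \
  -it --rm --restart=Never -- nc -z -w 2 database-service 5432
# Should fail: this probe has no backend label
kubectl run db-probe-2 --image=nicolaka/netshoot -n production \
  -it --rm --restart=Never -- nc -z -w 2 database-service 5432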
Cross-Namespace Communication Control
In multi-tenant environments, you often need to control communication between namespaces. Network policies support namespace selectors for this:
# Allow access from monitoring namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring-access
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web-server
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring
          podSelector:
            matchLabels:
              app: prometheus
      ports:
        - protocol: TCP
          port: 9090
This allows Prometheus pods in the monitoring namespace to scrape metrics from web servers in the production namespace. Because the namespaceSelector and podSelector appear in the same from entry, both conditions must match; listing them as separate entries under from would instead allow traffic that matches either one.
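One gotcha: namespaceSelector matches labels on the Namespace object itself, and a namespace has no name label unless you add one. Label it explicitly, or on Kubernetes 1.22+ match the automatically added kubernetes.io/metadata.name label instead:

# Give the namespace the label the policy expects
kubectl label namespace monitoring name=monitoring
# Verify it took effect
kubectl get namespace monitoring --show-labels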
Egress Control and External Services
Controlling outbound traffic is just as important as inbound traffic. Many attacks involve compromised pods making outbound connections to download malware or exfiltrate data:
# Restrict egress to specific external services
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-egress-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
    - Egress
  egress:
    - to: []  # DNS resolution
      ports:
        - protocol: UDP
          port: 53
    - to:
        - podSelector:
            matchLabels:
              tier: database
      ports:
        - protocol: TCP
          port: 5432
    - to: []  # HTTPS to external APIs
      ports:
        - protocol: TCP
          port: 443
This policy allows backend pods to resolve DNS, connect to the database, and make HTTPS requests, but blocks everything else. One caveat: the empty to selector on port 443 matches any destination, inside or outside the cluster, so “external APIs” here really means “anything listening on 443.” Add ipBlock entries if you need to pin down specific destinations.
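To confirm the egress rules behave as intended, exec into a backend pod. The pod name and external endpoints are illustrative, and this assumes curl is available in the image:

# Should succeed: HTTPS on 443 is allowed
kubectl exec -n production backend-pod -- curl -sS --max-time 5 https://api.github.com
# Should fail: plain HTTP on port 80 is not in the egress rules
kubectl exec -n production backend-pod -- curl -sS --max-time 5 http://example.com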
Advanced Selector Patterns
Network policies support sophisticated label selectors that let you create flexible rules:
# Allow access based on multiple labels
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: complex-selector-policy
spec:
  podSelector:
    matchLabels:
      app: api-server
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchExpressions:
              - key: tier
                operator: In
                values: ["frontend", "mobile-app"]
              - key: version
                operator: NotIn
                values: ["deprecated"]
      ports:
        - protocol: TCP
          port: 8080
This policy allows access from pods whose tier label is frontend or mobile-app, but only if their version label is not deprecated; the two matchExpressions are ANDed together.
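Because kubectl accepts the same set-based selector syntax, you can preview exactly which pods a matchExpressions block will match before you apply the policy:

# Pods the ingress rule above would admit
kubectl get pods -l 'tier in (frontend,mobile-app),version notin (deprecated)'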
IP Block Policies for Legacy Integration
Sometimes you need to allow traffic from specific IP ranges, especially when integrating with legacy systems:
# Allow traffic from specific IP ranges
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-legacy-systems
spec:
  podSelector:
    matchLabels:
      app: legacy-integration
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 192.168.1.0/24
            except:
              - 192.168.1.5/32  # Exclude compromised host
      ports:
        - protocol: TCP
          port: 8080
This allows traffic from the 192.168.1.0/24 network except for the specific host 192.168.1.5. Keep in mind that ipBlock is intended for cluster-external traffic: pod IPs are ephemeral, and many CNI plugins rewrite source addresses on the way in or out, so don’t rely on ipBlock to match traffic from other pods.
Policy Ordering and Conflicts
Network policies are additive—if multiple policies select the same pod, the union of all their rules applies. This can lead to unexpected behavior:
# Policy 1: Allow frontend access
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: frontend
---
# Policy 2: Allow monitoring access
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-monitoring
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: prometheus
Both policies select pods with app: api, so those pods can receive traffic from both frontend pods and Prometheus pods.
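When you suspect overlapping policies, a quick way to compare every policy’s pod selector side by side is kubectl’s custom-columns output:

# List each policy and the labels it selects on
kubectl get networkpolicy -n production \
  -o custom-columns='NAME:.metadata.name,SELECTOR:.spec.podSelector.matchLabels'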
Testing and Validation
Network policies can be tricky to debug. Here’s my systematic approach to testing them:
# Create a test pod for network debugging
kubectl run netshoot --image=nicolaka/netshoot -it --rm -- /bin/bash
# Test connectivity between specific pods
kubectl exec -it frontend-pod -- curl backend-service:8080
# Check if a policy is selecting the right pods
kubectl describe networkpolicy my-policy
# See which policies apply to a pod
kubectl get networkpolicy -o yaml | grep -A 10 -B 5 "app: my-app"
I also use temporary “debug” policies to surface what’s being blocked (whether denials are actually logged depends on your CNI plugin):
# Temporary policy to see what's being blocked
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: debug-policy
  annotations:
    debug: "true"
spec:
  podSelector:
    matchLabels:
      app: debug-target
  policyTypes:
    - Ingress
    - Egress
  # No rules = deny all, but some CNIs will log denials
CNI Plugin Considerations
Not all CNI plugins support network policies, and those that do may have different capabilities:
- Calico: Full network policy support, including egress rules and IP blocks
- Cilium: Advanced features like L7 policies and DNS-based rules
- Weave: Basic network policy support
- Flannel: No network policy support on its own (commonly paired with Calico, a combination known as Canal)
Check your CNI plugin’s documentation for specific features and limitations.
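If you’re not sure which plugin a cluster runs, the agent pods in kube-system usually give it away. Treat this as a heuristic, since pod names vary by distribution:

# Look for the CNI plugin's agent pods
kubectl get pods -n kube-system -o name | grep -Ei 'calico|cilium|weave|flannel'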
Performance and Scale Considerations
Network policies add overhead to packet processing. In high-throughput environments, consider:
- Minimizing the number of policies per pod
- Using efficient label selectors
- Avoiding overly complex rules
- Monitoring CNI plugin performance metrics
Most CNI implementations evaluate policy only against the first packet of a connection and let connection tracking pass the rest, so steady-state overhead is usually low. The larger cost at scale is rule churn: every pod or policy change forces the plugin to reprogram its dataplane.
Compliance and Audit Requirements
Network policies are often required for compliance frameworks like PCI DSS, SOC 2, or HIPAA. Document your policies clearly:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: pci-compliance-policy
  annotations:
    compliance.framework: "PCI DSS"
    compliance.requirement: "1.2.1"
    description: "Restrict inbound traffic to cardholder data environment"
spec:
  podSelector:
    matchLabels:
      data-classification: cardholder-data
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              authorized-for-chd: "true"
Monitoring and Alerting
Set up monitoring for network policy violations:
# Check for policy denials in CNI logs
kubectl logs -n kube-system -l k8s-app=calico-node | grep -i deny
# Monitor policy changes
kubectl get events --field-selector reason=NetworkPolicyUpdated
Many organizations set up alerts for:
- New network policies being created
- Policies being deleted
- High numbers of denied connections (see the alert sketch below)
- Pods without any network policy coverage
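For the denied-connections alert, a Prometheus rule along these lines is a reasonable sketch, assuming your CNI exports a denial counter. The metric here is Calico’s calico_denied_packets (Felix metrics must be enabled); other plugins expose different names, and the threshold is arbitrary:

# Prometheus alerting rule (sketch; adjust metric and threshold to your environment)
groups:
  - name: network-policy
    rules:
      - alert: HighNetworkPolicyDenialRate
        expr: sum(rate(calico_denied_packets[5m])) > 10
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Network policy denials above expected baseline"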
What’s Next
Network policies provide the foundation for secure networking, but they’re just one piece of the puzzle. In the final part, we’ll explore advanced networking patterns including service mesh integration, multi-cluster networking, and troubleshooting complex networking issues in production environments.
The key to successful network policy implementation is starting simple and iterating. Begin with basic segmentation between tiers, test thoroughly, and gradually add more sophisticated rules as your understanding grows.