Gateway API is what Ingress should have been from day one.

I don’t say that lightly. I’ve spent years wrangling Kubernetes Ingress resources, writing controller-specific annotations, and debugging routing issues that only existed because the Ingress spec was too simple for real-world traffic management. Gateway API fixes nearly every complaint I’ve ever had, and if you’re still running pure Ingress in production, it’s time to start planning your migration.

This isn’t a “maybe someday” technology. Gateway API reached GA with its v1.0 release in late 2023 (it ships as CRDs, not as part of core Kubernetes), major controllers already support it, and the ecosystem is moving fast. I’ve migrated three production clusters over the past year and I’m not looking back.


The Annotation Hell That Broke Me

Here’s my war story. We had a cluster running about forty microservices behind NGINX Ingress Controller. Straightforward enough at first. Then requirements started piling up. Rate limiting on the auth service. Custom timeouts on the upload endpoint. CORS headers for the frontend. Canary deployments for the payments team. Redirect rules for the marketing folks.

Every single one of those requirements meant another annotation. Our Ingress manifests turned into walls of nginx.ingress.kubernetes.io/something-obscure. Here’s a taste of what one of them looked like:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "120"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://app.example.com"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, PUT, DELETE"
    nginx.ingress.kubernetes.io/cors-allow-headers: "Authorization, Content-Type"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
    nginx.ingress.kubernetes.io/limit-rpm: "100"
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "3"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Request-ID: $req_id";
spec:
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 8080

Twelve annotations. And that was one of the simpler services. The payments Ingress had twenty-three. None of this was portable. If we ever wanted to switch from NGINX to Traefik or Envoy Gateway, we’d be rewriting every single manifest. I wrote about some of these controller differences in my Ingress controllers comparison.

The breaking point came when a junior engineer copy-pasted an annotation wrong and silently broke rate limiting on our auth endpoint for two days. No validation error, no warning. Annotations are just strings — Kubernetes doesn’t care if they’re garbage. That’s when I started seriously looking at Gateway API.


What Gateway API Actually Gets Right

Gateway API isn’t just “Ingress v2.” It’s a complete rethink of how Kubernetes handles traffic routing. The core insight is separating concerns into distinct resources with clear ownership boundaries. Instead of cramming everything into one Ingress object with a pile of annotations, you get a layered model that maps to how teams actually work.

The three key resources are:

  • GatewayClass — defines the controller implementation (managed by infrastructure providers)
  • Gateway — configures listeners, ports, TLS termination (managed by cluster operators)
  • HTTPRoute — defines routing rules, matches, filters (managed by application developers)

This separation matters enormously. Your platform team owns the Gateway. Your app teams own their HTTPRoutes. Nobody’s stepping on each other’s toes, and nobody needs cluster-admin privileges just to add a routing rule. If you’ve read my piece on Kubernetes networking fundamentals, you’ll recognise this as the kind of clean abstraction that was always missing from the networking layer.
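That split also maps cleanly onto RBAC. Here's a minimal sketch of the application-team side, with hypothetical role and namespace names:

```yaml
# Hypothetical Role: lets the api-team manage routes in their own
# namespace without any cluster-wide permissions.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: route-editor
  namespace: api-team
rules:
  - apiGroups: ["gateway.networking.k8s.io"]
    resources: ["httproutes"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```

Bind that Role to the team's group and they can ship routing changes all day without ever touching the Gateway itself.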

Here’s what a basic Gateway looks like:

apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: production
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: production-gateway
  namespace: gateway-infra
spec:
  gatewayClassName: production
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          - name: wildcard-cert
            namespace: cert-manager
      allowedRoutes:
        namespaces:
          from: Selector
          selector:
            matchLabels:
              gateway-access: "true"
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: Same

Notice the allowedRoutes section. That’s the multi-tenancy story right there. The platform team decides which namespaces can attach routes to this gateway. Application teams don’t need to know or care about TLS certificates, listener configuration, or any of the infrastructure plumbing. They just create an HTTPRoute in their namespace and it works.
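One wrinkle to be aware of: the certificateRefs above point at a Secret in a different namespace (cert-manager), and Gateway API refuses cross-namespace references unless the target namespace explicitly allows them with a ReferenceGrant. Something along these lines:

```yaml
# ReferenceGrant in the cert-manager namespace permitting Gateways in
# gateway-infra to read TLS Secrets. Without it, the cross-namespace
# certificateRef is rejected.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-gateway-cert-access
  namespace: cert-manager
spec:
  from:
    - group: gateway.networking.k8s.io
      kind: Gateway
      namespace: gateway-infra
  to:
    - group: ""
      kind: Secret
```

It's a little extra ceremony, but it means cross-namespace access is always an explicit, auditable grant rather than an accident.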


HTTPRoute: Where the Magic Happens

HTTPRoute is where you’ll spend most of your time, and it’s genuinely pleasant to work with compared to Ingress annotations. Everything that used to be a controller-specific annotation is now a first-class, validated, portable API field.

Here’s an HTTPRoute that replaces that annotation-heavy Ingress I showed earlier:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
  namespace: api-team
spec:
  parentRefs:
    - name: production-gateway
      namespace: gateway-infra
  hostnames:
    - "api.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /v2/upload
      filters:
        - type: RequestHeaderModifier
          requestHeaderModifier:
            set:
              - name: X-Request-Source
                value: gateway
      backendRefs:
        - name: upload-service
          port: 8080
          weight: 90
        - name: upload-service-canary
          port: 8080
          weight: 10
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: api-service
          port: 8080

Look at that. Traffic splitting is a first-class concept with weight. Header modification is a typed filter, not a string annotation. Path matching has explicit types. And every single field here gets validated by the Kubernetes API server. Typo in a field name? Rejected. Invalid weight value? Rejected. That silent annotation failure I mentioned? Can’t happen here.

The parentRefs field is how an HTTPRoute attaches to a Gateway. The Gateway’s allowedRoutes policy determines whether the attachment is permitted. It’s a two-way handshake — the route requests attachment, the gateway grants it. This is a massive improvement for service-level networking in multi-team clusters.
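You can watch that handshake play out in the route's status, where each parentRef gets its own set of conditions. A trimmed sketch of what a controller reports (exact reasons and messages vary by implementation):

```yaml
# Trimmed HTTPRoute status after a successful attachment. Accepted and
# ResolvedRefs are the standard condition types; the wording is
# implementation-specific.
status:
  parents:
    - parentRef:
        name: production-gateway
        namespace: gateway-infra
      controllerName: gateway.envoyproxy.io/gatewayclass-controller
      conditions:
        - type: Accepted
          status: "True"
          reason: Accepted
        - type: ResolvedRefs
          status: "True"
          reason: ResolvedRefs
```

If the gateway's allowedRoutes policy doesn't admit your namespace, Accepted flips to False with a reason attached, which beats the silent failures of annotation land.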


TLS Done Properly

TLS configuration with Ingress was always a bit awkward. You’d reference a Secret in the Ingress spec, which meant application developers needed access to TLS certificates, or you’d rely on cert-manager annotations and hope everything wired up correctly. Gateway API cleans this up by putting TLS where it belongs — on the Gateway listener, managed by the platform team.

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: production-gateway
  namespace: gateway-infra
spec:
  gatewayClassName: production
  listeners:
    - name: app-https
      hostname: "*.app.example.com"
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          - name: wildcard-app-cert
    - name: api-https
      hostname: "api.example.com"
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          - name: api-cert

Different listeners can have different certificates, different hostnames, and different access policies. You can even do TLS passthrough for services that handle their own termination. Application developers never touch a certificate. They write their HTTPRoute, reference the gateway, and the platform team’s TLS config just works.
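Passthrough, for the record, is its own listener mode paired with a TLSRoute. TLSRoute is still in the experimental channel, so treat this as a sketch (the alpha API version and the names below may change):

```yaml
# Sketch: the gateway matches on SNI and forwards the encrypted stream
# untouched; the backend terminates TLS itself. Names are illustrative.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: passthrough-gateway
  namespace: gateway-infra
spec:
  gatewayClassName: production
  listeners:
    - name: tls-passthrough
      hostname: "vault.example.com"
      protocol: TLS
      port: 443
      tls:
        mode: Passthrough
      allowedRoutes:
        kinds:
          - kind: TLSRoute
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TLSRoute
metadata:
  name: vault-route
  namespace: gateway-infra
spec:
  parentRefs:
    - name: passthrough-gateway
  hostnames:
    - "vault.example.com"
  rules:
    - backendRefs:
        - name: vault
          port: 8200
```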

I’ve found this model pairs beautifully with cert-manager. The platform team sets up Certificate resources in the gateway namespace, cert-manager handles renewal, and app teams are completely insulated from the whole process. It’s the kind of separation of concerns that makes running multi-team clusters actually manageable.
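Concretely, the platform side of that pairing might look like this. The issuer name is a placeholder for whatever your cluster actually runs:

```yaml
# Hypothetical cert-manager Certificate in the gateway namespace. The
# Secret it produces is what the Gateway's certificateRefs point at;
# cert-manager handles issuance and renewal.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-app-cert
  namespace: gateway-infra
spec:
  secretName: wildcard-app-cert
  dnsNames:
    - "*.app.example.com"
  issuerRef:
    name: letsencrypt-prod  # placeholder issuer name
    kind: ClusterIssuer
```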


Traffic Splitting and Canary Deployments

This is where Gateway API really shines compared to Ingress. Traffic splitting is built into the spec as a core concept, not bolted on through annotations.

I run canary deployments on every production service now. Here’s the pattern I use:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: payments-route
  namespace: payments-team
spec:
  parentRefs:
    - name: production-gateway
      namespace: gateway-infra
  hostnames:
    - "api.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /payments
          headers:
            - name: X-Canary
              value: "true"
      backendRefs:
        - name: payments-v2
          port: 8080
    - matches:
        - path:
            type: PathPrefix
            value: /payments
      backendRefs:
        - name: payments-v1
          port: 8080
          weight: 95
        - name: payments-v2
          port: 8080
          weight: 5

Two things happening here. First, any request with the X-Canary: true header goes straight to v2 — that’s for internal testing. Second, general traffic gets a 95/5 split. I gradually shift the weights as confidence builds. No annotation gymnastics, no controller-specific canary configuration. Just weights and matches.

This integrates cleanly with progressive delivery tools like Argo Rollouts and Flagger. They can manipulate HTTPRoute weights programmatically through the Kubernetes API. With Ingress, these tools had to understand controller-specific annotation formats. With Gateway API, there’s one standard interface. If you’re running a service mesh like Istio, Gateway API even provides a path to unify your north-south and east-west traffic management under one API.


Multi-Tenant Gateway Patterns

Running a shared gateway for multiple teams is where the Gateway API’s design really pays off. I’ve settled on a pattern that works well for organisations with distinct product teams sharing a cluster.

The platform team deploys the GatewayClass and Gateway:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: gateway-infra
spec:
  gatewayClassName: production
  listeners:
    - name: team-a
      hostname: "*.team-a.example.com"
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          - name: team-a-wildcard
      allowedRoutes:
        namespaces:
          from: Selector
          selector:
            matchLabels:
              team: "a"
    - name: team-b
      hostname: "*.team-b.example.com"
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          - name: team-b-wildcard
      allowedRoutes:
        namespaces:
          from: Selector
          selector:
            matchLabels:
              team: "b"

Each team gets their own listener with their own hostname pattern and their own namespace selector. Team A can only attach routes to the team-a listener. Team B can only attach to team-b. There’s no way for one team to accidentally (or deliberately) hijack another team’s traffic. Combine this with Kubernetes network policies and you’ve got solid tenant isolation without needing separate clusters.

Each team then manages their own routes independently:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: dashboard-route
  namespace: team-a-apps
  labels:
    team: "a"
spec:
  parentRefs:
    - name: shared-gateway
      namespace: gateway-infra
      sectionName: team-a
  hostnames:
    - "dashboard.team-a.example.com"
  rules:
    - backendRefs:
        - name: dashboard
          port: 3000

The sectionName field targets a specific listener on the gateway. Clean, explicit, and auditable. I can look at any HTTPRoute and immediately understand which gateway listener it’s attached to and which team owns it.


Migrating from Ingress Without Breaking Everything

I won’t pretend migration is trivial, but it’s very doable if you approach it methodically. Here’s the process I’ve followed across three clusters.

First, install your Gateway API CRDs and pick a controller that supports both Ingress and Gateway API. Most major controllers do now — Envoy Gateway, NGINX Gateway Fabric, Traefik, Istio, and Cilium all have solid Gateway API support. I’ve had the best experience with Envoy Gateway, but your mileage may vary depending on your existing stack.

Second, deploy your Gateway alongside your existing Ingress controller. Don’t rip anything out yet. Run them in parallel. I typically point a test subdomain at the new gateway first:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-test-route
  namespace: api-team
spec:
  parentRefs:
    - name: staging-gateway
      namespace: gateway-infra
  hostnames:
    - "api-test.example.com"
  rules:
    - backendRefs:
        - name: api-service
          port: 8080

Point api-test.example.com at the new gateway’s load balancer. Test everything. Compare behaviour with the existing Ingress-backed endpoint. I usually run both in parallel for at least a week per service, checking logs and metrics for any discrepancies.

Third, migrate services one at a time. Start with the simplest ones — services with minimal annotation customisation. Convert each Ingress to an HTTPRoute, verify, then delete the old Ingress. Don’t batch them. If something goes wrong, you want to know exactly which migration caused it.

Fourth, handle the tricky annotations. This is where you’ll spend the most time. Some annotations map directly to Gateway API features (traffic splitting, header modification, redirects). Others might require controller-specific policy resources. For example, rate limiting isn’t in the core Gateway API spec yet, but most controllers offer it through policy attachment:

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: api-rate-limit
  namespace: api-team
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: api-route
  rateLimit:
    type: Global
    global:
      rules:
        - limit:
            requests: 100
            unit: Minute

Yes, this is controller-specific. But it’s a typed, validated resource — not a string annotation. And the Gateway API community is actively working on standardising these policies. The direction is clear even if not everything is GA yet.
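The directly-mappable cases, by contrast, convert without any controller-specific resources at all. An HTTP-to-HTTPS redirect, for instance, is just a core RequestRedirect filter. A sketch, attached to the plain-HTTP listener from earlier:

```yaml
# Sketch: redirect all plain-HTTP traffic to HTTPS with a core,
# validated filter instead of an annotation. Lives in the gateway's
# namespace because that http listener only admits routes from Same.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: https-redirect
  namespace: gateway-infra
spec:
  parentRefs:
    - name: production-gateway
      sectionName: http
  hostnames:
    - "api.example.com"
  rules:
    - filters:
        - type: RequestRedirect
          requestRedirect:
            scheme: https
            statusCode: 301
```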


What’s Coming Next

Gateway API isn’t standing still. The roadmap is packed with features that’ll make it even more compelling:

  • GRPCRoute is already in GA, giving gRPC services first-class routing without HTTP path hacks.
  • TCPRoute and UDPRoute are progressing through the experimental channel for non-HTTP workloads.
  • Policy attachment is being standardised so things like rate limiting, authentication, and retry policies get a consistent API across controllers.
  • Service mesh integration through the GAMMA initiative is bringing Gateway API to east-west traffic, not just north-south. This could eventually replace a lot of the custom CRDs that service meshes currently use.
  • Backend TLS policy for configuring mTLS between the gateway and backend services is maturing rapidly.
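GRPCRoute in particular is worth a look if you run gRPC services today: it matches on service and method names directly rather than on disguised HTTP paths. A minimal sketch, with illustrative names:

```yaml
# Sketch: route one gRPC service through the gateway by its fully
# qualified service name. Hostname and backend are illustrative.
apiVersion: gateway.networking.k8s.io/v1
kind: GRPCRoute
metadata:
  name: echo-route
  namespace: api-team
spec:
  parentRefs:
    - name: production-gateway
      namespace: gateway-infra
  hostnames:
    - "grpc.example.com"
  rules:
    - matches:
        - method:
            service: echo.EchoService
      backendRefs:
        - name: echo-service
          port: 9000
```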

The momentum behind Gateway API is real. Every major controller vendor is investing heavily. The CNCF is behind it. And the community is shipping at a pace that Ingress never managed. I expect that within a year or two, starting a new cluster with plain Ingress will feel as dated as writing raw iptables rules instead of using NetworkPolicy.


Should You Migrate Now?

If you’re starting a new cluster, use Gateway API. Full stop. There’s no reason to begin with Ingress in 2026.

If you’re running existing clusters, the answer depends on your pain level. If you’ve got a handful of simple Ingress resources with minimal annotations, there’s no rush. They’ll keep working. But if you’re drowning in controller-specific annotations, fighting with multi-team access control, or wanting proper traffic splitting without third-party hacks — start migrating now. The tooling is mature enough, the spec is stable, and the migration path is well-documented.

Gateway API is what Ingress should have been from day one. It respects the reality that infrastructure teams and application teams have different responsibilities. It makes traffic management a first-class, validated, portable concept instead of an afterthought stuffed into annotations. And it’s only getting better.

I’ve migrated my production clusters and I’m not going back. You probably shouldn’t wait much longer either.