The Quiet Death of the Ingress Controller as We Know It
If you have been managing production Kubernetes clusters for more than a minute, you likely have a love-hate relationship with the NGINX Ingress Controller. It’s the reliable old workhorse that’s probably sitting at the front of your stack right now. But here’s the reality check: the community-maintained ingress-nginx project is officially headed for retirement on March 31, 2026. The technical debt has piled too high, the maintainers are burnt out, and frankly, the architecture is hitting a performance ceiling that modern high-traffic workloads simply cannot ignore.
We are moving past the era of managing a sprawl of vendor-specific annotations and wrestling with the Linux kernel’s aging networking stack. The future is the Cilium Gateway API. This isn't just a syntax change; it’s a fundamental shift in how packets move from the wire to your pods. By leveraging eBPF, Cilium effectively turns your ingress from a congested toll booth into a high-speed highway.
Why NGINX Hits the Wall: The iptables Tax
Traditional ingress controllers, including NGINX, operate primarily in user space or rely heavily on iptables and conntrack. When a packet hits your node, the Linux kernel has to traverse a long, linear chain of rules to figure out where that packet belongs, and the cost of that traversal grows with every service you add. As your services scale and your rule list grows, you start seeing 'latency jitter'—those annoying micro-spikes in p99 response times that are nearly impossible to debug with standard tools.
Every time NGINX handles a request, the packet undergoes multiple context switches between kernel-space and user-space. In a world where we're aiming for sub-millisecond overhead, this context-switching 'tax' is no longer acceptable. This is where eBPF networking changes the game. Instead of waiting for the packet to climb up the networking stack, Cilium attaches eBPF programs directly to the network interface (NIC). It processes traffic at the lowest level possible, often before the kernel even allocates a socket buffer.
Bypassing the Bottleneck with eBPF and XDP
With the Cilium Gateway API, low-level packet processing is handled by XDP (eXpress Data Path), eBPF's earliest hook at the network driver. In high-throughput environments, this allows Cilium to process over 10 million packets per second. By the time a traditional NGINX controller has even looked at the packet header, Cilium has already routed it. This isn't just 'faster'—it's a different league of efficiency that significantly reduces CPU overhead on your nodes.
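If you want to see what that fast path looks like in practice, XDP acceleration is a Helm-chart toggle rather than a new component. Here's a minimal values.yaml sketch, assuming a recent Cilium chart; the exact keys can shift between versions, and native XDP acceleration requires a NIC driver that supports it:

```yaml
# values.yaml -- a minimal sketch for enabling Cilium's XDP fast path.
# Key names follow recent Cilium Helm charts; verify against your chart version.
kubeProxyReplacement: true   # let eBPF replace kube-proxy's iptables chains
loadBalancer:
  acceleration: native       # attach the load balancer at the XDP driver hook
  mode: dsr                  # direct server return: replies skip the ingress node
```

The `dsr` mode is optional, but it pairs well with XDP since reply traffic avoids the extra hop back through the node that accepted the connection.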
The Gateway API: More Than Just a Better Ingress
The transition from Ingress to the Gateway API is often compared to moving from a single monolithic config to a role-oriented architecture. If you've ever had a developer accidentally break global ingress settings because they messed up an annotation, you’ll appreciate this. The Gateway API splits responsibilities into three distinct layers:
- GatewayClass: Defined by infrastructure providers (e.g., Cilium).
- Gateway: Defined by platform engineers to manage the entry point and TLS certificates.
- HTTPRoute: Defined by developers to handle specific service routing and traffic splitting.
As noted in the Cilium 1.15 release, this implementation has reached full compliance with the Gateway API v1.0 standard. This means you get native support for complex traffic patterns—like canary deployments and header-based routing—without needing to learn a proprietary CRD or a messy list of annotations.
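To make the role split concrete, here's a minimal sketch of a Gateway owned by the platform team and an HTTPRoute owned by a developer, combining header-based routing with a 90/10 canary split. The resource names, hostnames, and ports are placeholders, not anything Cilium prescribes:

```yaml
# A platform engineer owns the Gateway and its listeners.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway
spec:
  gatewayClassName: cilium   # the GatewayClass provided by Cilium
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
# A developer owns the HTTPRoute: header-based routing plus a weighted canary.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echo-route
spec:
  parentRefs:
  - name: web-gateway
  rules:
  - matches:                 # requests carrying the header go straight to v2
    - headers:
      - name: x-canary
        value: "true"
    backendRefs:
    - name: echo-v2
      port: 8080
  - backendRefs:             # everyone else gets a 90/10 split
    - name: echo-v1
      port: 8080
      weight: 90
    - name: echo-v2
      port: 8080
      weight: 10
```

Notice that the traffic split is a first-class `weight` field, not a controller-specific annotation, so the route stays portable across any conformant Gateway API implementation.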
Consolidating Your Stack: Reducing Tool Sprawl
One of the biggest headaches for DevOps architects is 'tool sprawl.' Usually, you have a CNI for networking, an Ingress Controller for North-South traffic, and a Service Mesh for East-West traffic. That’s three different control planes to manage, three different sets of metrics, and three different places where things can break.
By adopting the Cilium Gateway API, you integrate your ingress directly into your CNI. Cilium uses the Linux kernel's TPROXY facility to transparently forward traffic to an Envoy proxy instance without the overhead of standard networking hops. Even better, with the 1.16 release and the GAMMA initiative, Cilium is extending this same logic to internal mesh traffic. You get a unified API for all networking, backed by the same high-performance eBPF engine.
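If Cilium is already your CNI, that consolidation is mostly a Helm-values change rather than a new deployment. A sketch, assuming a recent chart version and that the upstream Gateway API CRDs are already installed in the cluster (Cilium expects them to exist before it will program Gateways):

```yaml
# values.yaml -- sketch for folding ingress into the CNI.
# Prerequisite: the Gateway API CRDs must be applied to the cluster first.
kubeProxyReplacement: true
gatewayAPI:
  enabled: true              # serve Gateway/HTTPRoute through Cilium's Envoy
envoy:
  enabled: true              # run Envoy as its own DaemonSet, separate from the agent
```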
Addressing the Nuances: Is It All Sunshine and Rainbows?
I’d be doing you a disservice if I said this migration was effortless. There are legitimate concerns to navigate. First, don't confuse the community ingress-nginx (which is retiring) with the F5-maintained NGINX Ingress Controller. The latter remains supported, but it lacks the kernel-level integration that makes Cilium so performant.
Second, there is the 'noisy neighbor' problem. Cilium’s default 'shared gateway' model runs an Envoy instance per node. If one service on that node gets hammered with traffic, it could potentially impact the ingress performance of other services. While you can deploy dedicated gateways, it’s a design choice you need to make early on.
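For a feel of that design decision, Cilium's Ingress support exposes the same shared-versus-dedicated trade-off as a per-resource annotation, where 'dedicated' gives the resource its own LoadBalancer Service. A sketch with placeholder names:

```yaml
# Sketch: opting a single Ingress out of the shared model under Cilium.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: payments
  annotations:
    ingress.cilium.io/loadbalancer-mode: dedicated  # own LB Service, not shared
spec:
  ingressClassName: cilium
  rules:
  - host: payments.example.com   # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: payments
            port:
              number: 8080
```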
Lastly, debugging eBPF can be intimidating. If you’re used to running tcpdump to see why a packet is dropping, you might find that the traffic is 'invisible' because it’s being handled in the kernel before it ever reaches the points tcpdump taps. This is why Cilium Hubble observability is non-negotiable. Hubble gives you the deep, flow-level visibility into eBPF-handled traffic that traditional tools simply cannot reach, making it easier to visualize your traffic's golden signals in real time.
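Enabling Hubble is, again, a Helm-values change. A minimal sketch assuming a recent chart; the metrics list here is illustrative, not exhaustive:

```yaml
# values.yaml -- sketch for turning on Hubble flow visibility.
hubble:
  enabled: true
  relay:
    enabled: true          # cluster-wide flow aggregation for the CLI and UI
  ui:
    enabled: true          # service-map visualization in the browser
  metrics:
    enabled:               # export flow-derived metrics to Prometheus
    - dns
    - drop
    - http
```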
The Road to 2026 Starts Now
The archival of the community NGINX ingress project is a wake-up call for the Kubernetes ecosystem. We can no longer rely on legacy wrappers around 20-year-old technology to handle the demands of modern cloud-native applications. Moving to the Cilium Gateway API isn't just about avoiding a forced migration in 2026; it's about reclaiming your cluster's performance today.
If you're tired of fighting with ingress latency and annotation bloat, it’s time to look at the kernel. Start by deploying a test gateway in a non-production environment and compare your p99s. You might be surprised at how much speed you’ve been leaving on the table.
Ready to ditch the bottleneck? Start by reviewing the CNCF’s guide on the NGINX archival and begin mapping your current Ingress resources to HTTPRoutes. Your future self (and your CPU usage graphs) will thank you.
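As a starting point for that mapping exercise, here's how a common ingress-nginx canary pattern, expressed through annotations, translates into first-class HTTPRoute fields. Names and hostnames are placeholders, and `web-gateway` refers back to the Gateway sketched earlier:

```yaml
# Before: an ingress-nginx canary driven by annotations.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: nginx
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: shop-v2
            port:
              number: 8080
---
# After: the same 90/10 split as a native HTTPRoute field, no annotations.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: shop-route
spec:
  parentRefs:
  - name: web-gateway
  hostnames:
  - shop.example.com
  rules:
  - backendRefs:
    - name: shop-v1
      port: 8080
      weight: 90
    - name: shop-v2
      port: 8080
      weight: 10
```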


