Looks like it’s really the end of an era for Kubernetes Ingress. ingress-nginx is moving into maintenance mode from March 2026 — no new features after that and, more importantly, no new security patches. I’ve been using Ingress for so long that it feels weird to see it finally winding down, but I guess it was bound to happen sooner or later. On the bright side, Kubernetes has been pushing the Gateway API for a while now, and honestly, it’s shaping up to be a much cleaner and more powerful way to handle traffic. Most vendors and controllers support it already, so it feels like the right time to start shifting over. If you haven’t explored it yet, it’s definitely worth a look. 🔗 More details: https://lnkd.in/g6bmu5PC Time for upgrades 😅
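For anyone who hasn't looked at the Gateway API yet, here's roughly what the new model looks like — a minimal sketch, assuming a controller is already installed. The names (my-gateway, my-app, the hostname) and the gatewayClassName are placeholders; use whatever your chosen implementation registers:

```yaml
# A Gateway replaces the controller-level half of an Ingress:
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
spec:
  gatewayClassName: nginx   # depends on the implementation you install
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
# An HTTPRoute replaces the Ingress routing rules:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app-route
spec:
  parentRefs:
  - name: my-gateway
  hostnames:
  - "app.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: my-app   # a Service in the same namespace
      port: 80
```

One nice design consequence: the Gateway (infrastructure) and the HTTPRoute (app routing) are separate objects, so platform and app teams can own them independently.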
Ingress-nginx moves to maintenance mode, Gateway API gains traction
-
End of an era for Kubernetes #ingress, and time for massive upgrades :) ingress-nginx is moving into maintenance mode in March 2026. No new features after that and, more importantly, no new security fixes! Good news, though: there's lots of support for the Gateway API, the new and improved way of exposing L7 apps in Kubernetes! Learn more: https://lnkd.in/g6bmu5PC
-
Kubernetes SIG Network and the Security Response Committee have announced the upcoming retirement of Ingress NGINX. The project will be maintained until March 2026. The good news is that existing Ingress NGINX deployments will keep functioning as they do today, and all installation artifacts will remain available. For more info, have a look at these links: https://lnkd.in/gTr99Gzp https://lnkd.in/gbfHJev6 https://lnkd.in/gtGzYj4N #Kubernetes #DevOps
-
🚀 Speed Up Your CI/CD with Varnish Orca

Dependency fetching shouldn’t slow your pipelines. Varnish Orca is a Virtual Registry Manager that caches and accelerates build & runtime artifacts — from Docker images and npm packages to Helm charts, Go modules, and more. Deploy it close to your runners or developers, and watch build times shrink while uptime and consistency improve.

💡 Tech Advantages:
⚡ CI/CD acceleration through intelligent caching
🌐 Supports Docker, NPM, Go, Helm, Maven, PyPI & more
📊 Built-in observability with OpenTelemetry
🛡️ Resilient even during origin downtime
🧩 Programmable and flexible edge deployment

💼 Business Advantages:
💸 Cut registry traffic and infrastructure costs by 75%+
🧠 Reduce developer friction with faster feedback loops
🔓 Avoid vendor lock-in — route across multiple registries
🕵️♀️ Gain visibility and control over your software supply chain

Get started in seconds:
docker pull varnish/orca --platform linux/amd64

Or learn more: 👉 https://lnkd.in/dMm2DkFS

#DevOps #CICD #Caching #VarnishOrca #SoftwareSupplyChain #Infrastructure
-
Great news for EKS users: the Gateway API can replace the need for Ingress controllers like NGINX or Traefik in some scenarios. https://lnkd.in/gEwaS4dj
-
🚨 Ingress-NGINX Is Being Retired — What You Need to Know 🚨

Kubernetes teams worldwide are facing a major change: Ingress-NGINX, one of the most popular ingress controllers, is officially being retired. This marks the end of an era for many clusters relying on NGINX for routing, SSL termination, and traffic management — but it also opens the door to newer, more scalable, cloud-native alternatives.

🧩 What’s Happening
The Kubernetes community has decided to retire the Ingress NGINX controller (maintained under the Kubernetes org) due to:
- Lack of active long-term maintainers
- Difficulty keeping up with modern Gateway API standards
- Security and compatibility maintenance overhead
If you’re still using kubernetes/ingress-nginx, it’s time to plan your migration.

🔁 Top Alternatives (and How to Transition)

1️⃣ Kong Ingress Controller
Why: Built on the powerful Kong Gateway; offers robust authentication, rate limiting, and observability.
How to implement — deploy via Helm:
helm repo add kong https://charts.konghq.com
helm repo update
helm install kong/kong --generate-name --set ingressController.installCRDs=false
Then update your Ingress manifests to use: ingressClassName: kong

2️⃣ Traefik
Why: Lightweight, easy to configure, supports Let’s Encrypt and dynamic routing.
How to implement:
helm repo add traefik https://lnkd.in/gG24fhRh
helm install traefik traefik/traefik
Update your ingress definitions to: ingressClassName: traefik

3️⃣ HAProxy Ingress
Why: Strong performance, native HAProxy integration, detailed metrics.
How to implement:
helm repo add haproxytech https://lnkd.in/gkd2YsAA
helm install haproxy-ingress haproxytech/kubernetes-ingress
Use: ingressClassName: haproxy

4️⃣ Gateway API (Recommended by Kubernetes)
Why: The future of Kubernetes ingress — vendor-neutral, modular, and more expressive than classic Ingress.
How to implement:
Install the Gateway API CRDs: kubectl apply -k "https://lnkd.in/gZtE99AV"
Choose a Gateway implementation (Kong, Traefik, GKE Gateway, etc.)
Migrate your Ingress resources to Gateway resources (Gateway, HTTPRoute).

⚙️ Migration Tips
- Audit existing Ingress resources and annotations — note NGINX-specific configs.
- Choose an ingress controller aligned with your cloud provider or security policies.
- Test your new setup in a staging cluster before switching production traffic.
- Monitor logs, latency, and TLS termination during the transition.

🌐 In Short
The Ingress-NGINX retirement is a reminder that Kubernetes networking is evolving — toward the Gateway API and cloud-native controllers that are more secure, scalable, and flexible. If you’re still on Ingress-NGINX, now’s the perfect time to modernize your ingress layer.
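To make the migration step concrete, here's a hedged before/after sketch of a single routing rule. Resource names, the hostname, and the referenced Gateway ("shared-gateway") are placeholders — adapt them to whichever Gateway implementation you pick:

```yaml
# Before: a classic Ingress rule
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
---
# After: the equivalent HTTPRoute, attached to an existing Gateway
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web
spec:
  parentRefs:
  - name: shared-gateway
  hostnames:
  - "web.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: web
      port: 80
```

Note that NGINX-specific annotations (rewrites, header manipulation, etc.) don't translate one-to-one; that's exactly why the audit step above matters.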
-
An insightful discussion between Oleg Šelajev and Jim Clark reveals the capabilities of Docker’s MCP Toolkit and Gateway. These tools simplify the discovery, management, and operation of MCP servers. I found it interesting that the MCP Toolkit can enhance operational efficiency significantly. What are your thoughts on how tools like these can transform workflows in tech? Read more: https://lnkd.in/dWc5-qVs
-
🚀 Terraform 1.14: Moving Beyond Declarative Infrastructure

HashiCorp has released* Terraform 1.14, and this update marks a major step forward in how we interact with existing infrastructure and extend capabilities beyond the traditional CRUD (Create, Read, Update, Delete) model.

🔍 Querying Existing Infrastructure
One of the most powerful new features is the ability to query and filter existing resources through new *.tfquery.hcl files. The new terraform query command lets you list existing infrastructure and even automatically generate the configuration needed to import it into Terraform. This simplifies integrating “legacy” environments and provides a clearer view of your current infrastructure state.

🧩 More Accurate Imports with a New RPC
The new GenerateResourceConfiguration RPC enables providers to generate more precise configurations during imports, reducing manual work and improving consistency when migrating resources into Terraform.

⚡️ New Top-Level Block: Actions
Terraform 1.14 introduces a new top-level Actions block to support imperative operations that were previously outside Terraform’s declarative model. Providers can now define actions such as aws_lambda_invoke or aws_cloudfront_create_invalidation, letting you trigger specific side effects within a resource’s lifecycle or directly from the CLI using the -invoke flag. In short, Terraform is evolving from defining what infrastructure should exist to also defining how and when certain actions should occur.

🛠️ Enhancements and Fixes
Some notable improvements include:
- terraform test now shows expected diagnostics in verbose mode and ignores the prevent_destroy attribute during cleanup.
- Offline query validation via terraform validate -query.
- Added support for the AWS European Sovereign Cloud.
- Improved variable inheritance handling during terraform import.
- The CLI now summarizes how many actions were executed during terraform apply.

⚙️ Upgrade Notes
Terraform 1.14 may reduce operation parallelism when running inside containers with CPU bandwidth limits. Also, building Terraform 1.14 requires macOS Monterey or later, due to its move to Go 1.25. As always, it’s recommended to test the new version in staging before rolling it out to production.

Terraform continues to evolve toward a more flexible and connected ecosystem, combining the power of declarative infrastructure with direct interaction capabilities.

⚠️ *Note: Terraform 1.14 is currently a pre-release version (RC). Some features or behaviors may change before the final release.

📎 Official source: https://lnkd.in/dK5R5BKp
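To give a feel for the new surface area, here's a rough sketch of the query-file and action syntax as described in the pre-release announcement. The block keywords (list, action) come from the release notes, but the provider arguments and names here are illustrative, and since 1.14 is still an RC the exact shape may change — check the official docs before relying on it:

```hcl
# example.tfquery.hcl — sketch of a query file that lists existing
# EC2 instances so "terraform query" can enumerate them and generate
# import configuration (arguments are illustrative)
list "aws_instance" "all" {
  provider = aws
}
```

```hcl
# Sketch of the new top-level action block, invocable via
# "terraform apply -invoke ..." or from a resource lifecycle
# (function_name and the config shape are assumptions)
action "aws_lambda_invoke" "warm_cache" {
  config {
    function_name = "cache-warmer"
  }
}
```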
-
We've got a new blog out looking at #Kubernetes versions in use in real-world clusters, and it's actually quite good news from a security perspective. With the addition of extended support for the major managed Kubernetes distributions, it looks like most of the clusters we're seeing are running on supported versions. That's quite an improvement over the last couple of years. https://lnkd.in/eczb39UT
-
Just pushed a big update to my package server project — and finally wrote up the details. The short version: - Added proper **authentication** options: none, basic, API-backed JWT, and OIDC. - Introduced clean **/health** and **/ready** endpoints for Kubernetes. - Reworked logging to be Apache-style (with real client IPs). - Improved **mirror chaining** and upstream control for better caching. - Cleaned up CI so Docker images publish automatically. If you’re using it to host your own packages or OCI images, upgrades are simple: switch to `auth: basic`, add an `.htpasswd`, and point your probes at the new endpoints. The full post dives into the details and upgrade notes: Updating the Package Server – Auth, Probes, and a Bit of Cleanup https://lnkd.in/gu9X3RET Code: https://lnkd.in/gQ2wP7hi Image: `docker pull jlcox1970/package-server:` #containers #registry #oci #mirror #kubernetes #devops #supplychain #security
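On the probe side, the wiring in Kubernetes is just the standard liveness/readiness probe config — a minimal sketch, assuming the server listens on port 8080 (adjust the port and add it to your container spec):

```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 8080
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
```

Splitting /health from /ready is the usual pattern: liveness restarts a wedged process, while readiness just pulls the pod out of the Service until it can serve traffic.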
-
Port forwarding in Kubernetes creates a secure tunnel between your local machine and cluster resources through the API server. It's useful for debugging, database access, and testing services without modifying network configurations or setting up load balancers. The kubectl port-forward command establishes a temporary TCP connection that works for pods, services, or deployments. Common scenarios include local development with remote services, accessing internal dashboards, and connecting database clients. If you're developing with Kubernetes you will almost certainly use port forwarding. Flavius Dinu wrote this great guide covering syntax, real-world use cases, and alternatives like LoadBalancers and Ingress controllers. Check it out! https://lnkd.in/edVGb9H3
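The basic invocations look like this — pod, service, and deployment names and the ports are placeholders, and these obviously need kubectl configured against a live cluster:

```shell
# Forward local port 8080 to port 80 on a specific pod
kubectl port-forward pod/my-pod 8080:80

# Forward to a service (kubectl picks a backing pod for you)
kubectl port-forward svc/my-database 5432:5432

# Bind to all interfaces instead of just localhost
kubectl port-forward --address 0.0.0.0 deploy/my-app 3000:3000
```

The tunnel lives only as long as the kubectl process, which is exactly why it suits debugging rather than production exposure.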