🚀 Reverse Proxy – Explained Simply 🚀

A reverse proxy is a component that sits between users (clients) and applications. When a user sends a request, it does not go directly to the application. Instead, the request first goes to the reverse proxy, which forwards it to the correct server or container.

💡 Why do we need a reverse proxy?
If all requests hit a single server or container, it can become overloaded and crash. A reverse proxy helps by distributing traffic across multiple servers or containers.

🔹 Simple examples:

✅ Nginx Reverse Proxy
Routes requests to different backend services.
Example: /api → API service, /app → Frontend service

✅ Docker & Microservices
A reverse proxy (Nginx / Traefik) sends traffic to multiple containers.
Load is shared, so the application stays stable.

✅ Kubernetes Ingress
Acts as a reverse proxy inside Kubernetes.
Routes traffic to services using domain names or paths.

✅ Cloudflare
Works as a reverse proxy in front of applications.
Provides security, SSL, caching, and load balancing.

🔹 Main benefits of a reverse proxy:
- Load balancing
- Better performance
- High availability
- Improved security

📌 In short, a reverse proxy helps applications handle more traffic safely and efficiently.

#DevOps #ReverseProxy #LoadBalancing #Kubernetes #Docker #Nginx #CloudComputing #Microservices #LearningDevOps #TechBasics #DevOpsJourney
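To make the /api and /app example concrete, here is a minimal Nginx sketch of path-based routing. The upstream hostnames and ports (api-service:5000, frontend:3000) are hypothetical placeholders, not from the post:

```nginx
# Minimal path-based reverse proxy (hostnames and ports are placeholders)
server {
    listen 80;

    # Requests under /api/ are forwarded to the API service
    location /api/ {
        proxy_pass http://api-service:5000/;
    }

    # Requests under /app/ are forwarded to the frontend service
    location /app/ {
        proxy_pass http://frontend:3000/;
    }
}
```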
🚀 Day 16: Scaling with Nginx | 100 Days of DevOps with KodeKloud 🚀
On the road to Day 17!

Today was all about moving beyond "it works on my machine" to "it scales for everyone." I dived into Nginx, not just as a web server, but as a high-performance Load Balancer (LBR).

The Mission:
I was tasked with configuring an Nginx LBR to distribute incoming traffic across three different application servers in the Stratos DC. This setup is crucial for ensuring High Availability (HA) and preventing any single server from becoming a bottleneck.

What I did:
- Installation: Set up Nginx on the dedicated LBR server using sudo yum install -y nginx.
- Upstream Configuration: Defined the upstream block in /etc/nginx/nginx.conf to include the IP addresses and specific ports of the application servers.
- Proxying Traffic: Used the proxy_pass directive within the location / block to route all client requests to the defined upstream backend (see the sketch below).
- Verification: Validated the configuration with nginx -t and performed a live test using curl to confirm requests were being successfully distributed across all backend nodes.

Why this matters in DevOps:
- Scalability: We can now horizontally scale by simply adding more servers to the upstream pool.
- Reliability: If one app server fails, the LBR (with health checks) ensures traffic only goes to healthy nodes.
- Efficiency: Nginx's event-driven architecture handles thousands of concurrent connections with a tiny memory footprint.

Huge thanks to KodeKloud for the hands-on lab that made these concepts stick!

#100DaysOfDevOps #Nginx #LoadBalancing #DevOps #CloudComputing #Linux #Scalability #LearningByDoing #KodeKloud
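For anyone following along, a minimal sketch of the upstream and proxy_pass setup described above. The backend IPs and ports are placeholders, not the actual Stratos DC addresses:

```nginx
http {
    # Pool of app servers to balance across (placeholder IPs/ports)
    upstream stratos_backend {
        server 172.16.238.14:8080;
        server 172.16.238.15:8080;
        server 172.16.238.16:8080;
    }

    server {
        listen 80;

        location / {
            # Round-robin across the upstream pool by default
            proxy_pass http://stratos_backend;
        }
    }
}
```

Once nginx -t passes, a few repeated curl calls against the LBR should show responses rotating across the three backends.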
🚨 The 502 That Wasn't a Kubernetes Problem

This week I ran into one of those bugs that makes you question everything.

I had just deployed a staging environment:
• Multiple services
• Ingress configured
• TLS working
• Pods running

Everything "looked" healthy. The backend API was accessible. The services had endpoints. Pods were running fine.

But then… 💥 502 Bad Gateway. Only on the dashboard.

At first, I went through the usual checklist:
Service exists? ✅
Endpoints attached? ✅
Pods running? ✅
Correct port mapping? ✅

Everything looked perfect, which made it worse.

Then I noticed something interesting. Static assets were loading fine. Middleware logs showed the request hitting the pod. So the request path was:
Browser → Ingress → Service → Pod ✅

That ruled out routing. So what was breaking?

Then I looked at the request headers. And there it was.
Multiple __txn_ cookies. A very large first_session cookie. Auth cookies. Tracking cookies. Combined size? Huge.

Modern dashboards love cookies. NGINX… not so much.

The default NGINX Ingress buffer sizes are surprisingly small (designed years ago for simpler apps). When the browser sent all those cookies, the total header size exceeded the buffer limit. NGINX couldn't fully read the request.

Result?
👉 502 Bad Gateway
👉 "Invalid upstream response."

Not because Kubernetes was broken. Not because the pod crashed. But because the memory buffer wasn't big enough.

The fix? Increase the proxy buffer sizes in the Ingress annotations:
🔸 proxy-buffer-size
🔸 proxy-buffers-number
🔸 proxy-busy-buffers-size

After that? Everything worked instantly. No code changes. No pod restarts. No architecture redesign. Just understanding what was actually failing.

If you're building and breaking things in staging, you're doing the work that actually makes you better.

#Kubernetes #DevOps #CloudEngineering #NGINX #Debugging #Backend #Infrastructure
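For anyone hitting the same wall, the fix looks roughly like this on the Ingress resource. The buffer values are illustrative, tune them to your actual header sizes, and the host/service names are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
  annotations:
    # Buffer for the first part of the upstream response (headers live here)
    nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
    # How many buffers to allocate per connection
    nginx.ingress.kubernetes.io/proxy-buffers-number: "8"
    # Cap on buffers that may be busy sending to the client
    nginx.ingress.kubernetes.io/proxy-busy-buffers-size: "32k"
spec:
  ingressClassName: nginx
  rules:
  - host: dashboard.example.com     # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: dashboard-svc     # placeholder Service
            port:
              number: 80
```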
Ingress Nginx is officially frozen. 🛑

The most popular Kubernetes controller is now in maintenance. The ecosystem is shifting toward the Gateway API. Here is exactly what you need to know:

𝗪𝗵𝗮𝘁 𝗶𝘀 𝗵𝗮𝗽𝗽𝗲𝗻𝗶𝗻𝗴:
Ingress Nginx has reached its architectural limit. It will no longer receive any new features. The community is moving to the Gateway API or other ingress controllers.

𝗪𝗵𝗮𝘁 𝘁𝗵𝗲 𝗻𝗲𝘄 𝘀𝗼𝗹𝘂𝘁𝗶𝗼𝗻 𝗯𝗿𝗶𝗻𝗴𝘀:
• Native support for TCP and UDP routing.
• An end to "Annotation Hell" in YAML files.
• Clean separation between platform and developer roles.
• Built-in support for complex canary rollouts.

𝗪𝗵𝗮𝘁 𝘆𝗼𝘂 𝗻𝗲𝗲𝗱 𝘁𝗼 𝗱𝗼:
1. Audit your current Ingress resources immediately.
2. Launch a pilot project using the Gateway API (a starter sketch below).
3. Stop adding complex annotations to legacy controllers.
4. Update your platform team on the migration path.
5. Execute the migration.

The era of the simple Ingress is ending. Modern infrastructure requires a modern API.

Are you already testing the Gateway API, or is your team using another solution? 👇

#Kubernetes #DevOps #CloudNative #GatewayAPI #PlatformEngineering #SRE
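To give the pilot project a starting point, here is a minimal HTTPRoute sketch. The Gateway name, hostname, and Service are placeholders:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
  namespace: apps
spec:
  # Attach this route to a Gateway the platform team owns (placeholder name)
  parentRefs:
  - name: shared-gateway
  hostnames:
  - "app.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: api-svc    # placeholder Service
      port: 8080
```

Note how the routing logic lives in a typed spec instead of controller-specific annotations, which is exactly the "Annotation Hell" fix described above.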
𝗛𝗧𝗧𝗣 𝗦𝘁𝗮𝘁𝘂𝘀 𝗖𝗼𝗱𝗲𝘀 𝗘𝘅𝗽𝗹𝗮𝗶𝗻𝗲𝗱 𝗦𝗶𝗺𝗽𝗹𝘆

In DevOps, HTTP status codes are not just numbers. They are signals from your system. If you understand them, you troubleshoot faster. Here's the simple breakdown:

𝟭𝘅𝘅 – 𝗜𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻𝗮𝗹
𝟭𝟬𝟬 → I got your request, keep going.
👉 Rare in daily troubleshooting, but part of how HTTP works internally.

𝟮𝘅𝘅 – 𝗦𝘂𝗰𝗰𝗲𝘀𝘀
𝟮𝟬𝟬 → Everything is working. Example: website loads, API responds correctly.
𝟮𝟬𝟭 → Resource created. Example: new user created via API.
👉 Used in: load balancer health checks, API validation.

𝟯𝘅𝘅 – 𝗥𝗲𝗱𝗶𝗿𝗲𝗰𝘁𝗶𝗼𝗻
𝟯𝟬𝟭 → Permanent redirect (HTTP → HTTPS).
𝟯𝟬𝟮 → Temporary redirect (Login → Dashboard).
👉 Used in: Nginx, Ingress, routing rules.

𝟰𝘅𝘅 – 𝗖𝗹𝗶𝗲𝗻𝘁 𝗘𝗿𝗿𝗼𝗿𝘀
𝟰𝟬𝟬 → Bad request (invalid JSON).
𝟰𝟬𝟭 → Unauthorized (token expired).
𝟰𝟬𝟯 → Forbidden (permission denied).
𝟰𝟬𝟰 → Not found (wrong route / missing deployment).
👉 Often IAM, RBAC, or API request issues.

𝟱𝘅𝘅 – 𝗦𝗲𝗿𝘃𝗲𝗿 𝗘𝗿𝗿𝗼𝗿𝘀 (𝗣𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝗼𝗻 𝗔𝗹𝗲𝗿𝘁𝘀)
𝟱𝟬𝟬 → App crashed.
𝟱𝟬𝟮 → Backend unreachable.
𝟱𝟬𝟯 → Service overloaded / not ready.
𝟱𝟬𝟰 → Timeout (DB or API slow).
👉 Used in: Kubernetes, load balancers, cloud apps.

𝗪𝗵𝘆 𝗧𝗵𝗶𝘀 𝗠𝗮𝘁𝘁𝗲𝗿𝘀 𝗶𝗻 𝗗𝗲𝘃𝗢𝗽𝘀
When production breaks:
4xx → Check request & permissions.
5xx → Check servers, containers, logs.

If you understand these codes, you debug faster. And in DevOps, faster debugging = better reliability.

#DevOps #Cloud #Kubernetes #SRE #APIs
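A quick way to see these codes in practice is curl; the URL below is a placeholder:

```bash
# Print only the status code for a request (placeholder URL)
curl -s -o /dev/null -w "%{http_code}\n" https://app.example.com/api/health

# Add -L to follow 3xx redirects and report the final status
curl -s -o /dev/null -w "%{http_code}\n" -L http://app.example.com
```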
🚀 Microservices Resilience Series
Rate Limiting: Protecting Systems from Overload

No system has infinite capacity. If too many requests hit your service at once, even a healthy system can collapse. Rate limiting acts like a traffic controller.

🔹 What is Rate Limiting?
It restricts how many requests a client can make in a given time window.
Example: 100 requests per minute per user.
Beyond that limit, requests are delayed or rejected. This protects your system from overload.

🔹 Why it matters
Without rate limiting: traffic spikes → resource exhaustion → outage.
With rate limiting:
• Traffic stays controlled
• System remains stable
• Fair usage is enforced

🔹 Real-world scenario
During a sale event, thousands of users hit the checkout API.
Without rate limiting: the database crashes.
With rate limiting:
• Requests are queued/throttled
• The system survives peak load
Users may see slowdowns, but the platform stays online.

🔹 Benefits
✔ Prevents overload
✔ Ensures fair resource usage
✔ Improves stability
✔ Protects backend services

Tools like API Gateway, NGINX, and Resilience4j support rate limiting in production systems (a small NGINX sketch below).

💡 Industry principle: protect the system first, optimize performance second.

#Microservices #SystemDesign #SpringBoot #Java #Scalability #BackendEngineering
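Since NGINX is one of the tools named above, here is a minimal sketch of its limit_req module enforcing the "100 requests per minute per user" example; the upstream address and path are placeholders:

```nginx
http {
    # Shared-memory zone keyed by client IP: 100 requests per minute
    limit_req_zone $binary_remote_addr zone=per_client:10m rate=100r/m;

    upstream checkout_backend {
        server 127.0.0.1:8081;   # placeholder backend
    }

    server {
        listen 80;

        location /checkout {
            # Absorb short bursts of up to 20 requests, reject the rest
            limit_req zone=per_client burst=20 nodelay;
            limit_req_status 429;
            proxy_pass http://checkout_backend;
        }
    }
}
```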
Successfully deployed 3 production-ready applications on a single server with full CI/CD, domain mapping, and SSL at AJ CodeToCure Technologies.

Recently worked on setting up a multi-project production environment on a single AWS Lightsail Ubuntu instance, fully containerized using Docker and automated through CI/CD pipelines.

Here's what was implemented:

✅ Deployed 3 independent Node.js applications on one server instance, each running in an isolated Docker container with a dedicated port.

✅ Configured multiple subdomains via Hostinger DNS (A Records), mapped to a static IP for stable and reliable routing.

✅ Set up Nginx as a reverse proxy to route domain-based traffic internally (sketch below):
app1.domain.com → Port 3000
app2.domain.com → Port 3001
app3.domain.com → Port 3002
This ensured clean traffic handling without exposing internal application ports.

✅ Generated and installed HTTPS certificates for all domains using Certbot to enable secure encrypted connections.

✅ Integrated GitHub Actions with SSH-based workflows to:
• Automatically pull code on push
• Build Docker containers
• Restart services without downtime
No manual deployment required after setup.

✅ Provisioned MySQL on the server and migrated existing databases securely into the production environment.

This setup now supports efficient resource utilization, automated deployments, secure HTTPS access, and scalable traffic management for production applications.

#AWS #Docker #NodeJS #Nginx #CICD #DevOps #BackendDeveloper #CloudComputing #GitHubActions
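The subdomain mapping above comes down to one Nginx server block per app. A minimal sketch of the first one, before Certbot rewrites it for HTTPS (domain and port mirror the post; the proxy_set_header lines are a common addition, not confirmed from the original setup):

```nginx
server {
    listen 80;
    server_name app1.domain.com;

    location / {
        # Forward to the container published on port 3000
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

# Repeat with server_name app2.domain.com → 3001 and app3.domain.com → 3002
```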
Nginx is showing a 502 Bad Gateway error, even though the Nginx services are running.

In many cases, this error is not caused by Nginx itself. It usually means Nginx is unable to reach the upstream backend service.

Before restarting anything, it's helpful to check:
• Whether the backend is listening on the expected port
• The proxy_pass host and port configuration
• Docker networking (if containers are involved)
• /var/log/nginx/error.log for messages such as:
connect() failed (111: Connection refused)
(113: No route to host)

I've written a step-by-step troubleshooting guide covering both reverse proxy and Docker setups, including tested commands and safe reload practices. The full guide is shared in the comments below.

#nginx #devops #docker #sre #backend #webserver
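A handful of commands that cover those checks; port 3000 is a placeholder for whatever your proxy_pass actually targets:

```bash
# Is the backend listening on the expected port? (3000 is a placeholder)
sudo ss -tlnp | grep ':3000'

# Can the Nginx host reach the backend directly?
curl -I http://127.0.0.1:3000/

# If containers are involved, check that the container is up and ports are mapped
docker ps

# Look for connect() failed / no route to host messages
sudo tail -n 50 /var/log/nginx/error.log

# Safe reload: only reload if the config validates
sudo nginx -t && sudo systemctl reload nginx
```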
Kubernetes Ingress acts as the gateway to your cluster, managing how external traffic reaches internal services. Instead of exposing every service individually, Ingress provides a single entry point with smart routing rules.

✨ Why Ingress matters:
🔹 Host-based & path-based routing
🔹 SSL/TLS termination at the edge
🔹 Centralized traffic management
🔹 Better security & scalability

With an Ingress Controller (like NGINX or Traefik), you can efficiently control traffic flow to multiple services running inside your Kubernetes cluster: clean, secure, and production-ready.

📌 If you're building cloud-native applications, mastering Ingress is a must!

#Kubernetes #DevOps #CloudNative #Ingress #K8s #Containerization #SoftwareEngineering
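A minimal Ingress sketch showing host-based routing plus TLS termination at the edge; the hostname, secret, and Service names are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx        # or traefik, per your controller
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls          # placeholder TLS secret
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-svc        # placeholder Service
            port:
              number: 8080
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-svc   # placeholder Service
            port:
              number: 80
```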
🚨 Stop. NGINX Is NOT Retiring in Kubernetes.

Let's get the facts straight. ingress-nginx is retiring. NGINX is not. Two different controllers. Different ownership. Different lifecycle.

❌ ingress-nginx
• Community maintained
• Annotation-heavy
• Retiring due to maintainer bandwidth

✅ NGINX Ingress Controller (F5/NGINX)
• Actively developed
• Production supported
• Gateway API aligned

What this actually means: Kubernetes networking is evolving.
Ingress → Gateway API
Annotations → Declarative Routes
Controller-specific logic → Role-oriented architecture

HTTPRoute. GRPCRoute. TLSRoute. Policy attachment. Cross-namespace delegation. Separation of infra and app responsibilities.

This isn't a shutdown. It's an architectural shift.

If you're still running ingress-nginx in production, start planning, not panicking. Architect forward.

#Kubernetes #GatewayAPI #PlatformEngineering #NGINX #CloudNative #DevOps
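Cross-namespace delegation is worth a closer look, since plain Ingress has no equivalent: a route in one namespace may only target a Service in another namespace if the owning team publishes a ReferenceGrant. A minimal sketch, with placeholder namespaces and names:

```yaml
# Lives in the backend team's namespace; permits HTTPRoutes from "apps"
# to reference Services defined here
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-apps-routes
  namespace: team-b
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: apps
  to:
  - group: ""        # core API group (Services)
    kind: Service
```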