AWS Project: Highly Available Web Application

Recently completed an AWS project on building a highly available web application. Over the past few days, I worked on designing and deploying a production-style architecture on AWS, focusing on scalability, security, and real-world implementation.

What I implemented:
• Custom VPC with public and private subnets
• EC2 instances deployed in private subnets
• Bastion host for secure SSH access
• Application Load Balancer with path-based routing
• Separate target groups for text and image services
• Auto Scaling Group based on CPU utilization
• Multi-AZ setup for high availability

Traffic flow: User → ALB → Target Groups → EC2 Instances

I also faced and resolved a few practical issues, such as health check failures, 403 errors, and routing misconfigurations, which really helped me understand how things work in real scenarios.

GitHub repository: https://lnkd.in/dvk5q3FH
The repo includes an architecture diagram (created in draw.io) and working screenshots.

This project gave me a solid understanding of how scalable and secure architectures are built on AWS. Would appreciate any feedback or suggestions!

#AWS #CloudComputing #DevOps #Linux #AutoScaling #LoadBalancing #Project
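The path-based routing described above could be sketched, for example, as a Terraform listener rule like the one below. All resource names here are hypothetical; the repo's actual configuration may look different.

```hcl
# Hypothetical sketch: requests matching /images/* go to the image-service
# target group; the listener's default action would handle the text service.
resource "aws_lb_listener_rule" "images" {
  listener_arn = aws_lb_listener.http.arn
  priority     = 10

  condition {
    path_pattern {
      values = ["/images/*"]
    }
  }

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.image_service.arn
  }
}
```

Rules are evaluated in priority order, so more specific paths should get lower priority numbers than catch-all rules.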
AWS Highly Available Web Application Architecture
More Relevant Posts
🚀 Deployed a Node.js app on AWS ECS (EC2 self-managed), without a NAT Gateway. Here's what I built for BrewHaven Coffee:

✅ ECS EC2 cluster (c7i-flex.large) with bridge mode networking
✅ 11 VPC Interface + Gateway Endpoints replacing a NAT Gateway entirely
✅ Multi-stage Docker build → ECR (linux/amd64, ~46 MB image)
✅ ALB with dynamic port mapping (32768-65535) for bridge mode
✅ Route 53 alias A record → custom domain https://lnkd.in/gB9wkikp
✅ CloudWatch container logging + SSM Session Manager (no bastion host)

🔐 Security highlights:
→ ECS tasks in private subnets, zero public IPs
→ VPC Endpoints locked down to HTTPS/443 from the app SG only
→ 3-tier security group model (ELB → APP → Endpoint)

💡 Key lessons learned along the way:
→ EC2 bridge mode needs the dynamic port range (32768-65535) open on the app SG; I missed this initially
→ An ARM64 vs x86_64 mismatch between my local Mac (Apple Silicon) and the ECS EC2 instance caused silent failures; fixed with an explicit linux/amd64 build target
→ ECR pulls require BOTH the ecr.dkr AND ecr.api endpoints; one alone isn't enough
→ The S3 Gateway Endpoint is free and essential for ECR image layer pulls
→ Use an alias A record (not a CNAME) for the ALB: no charge, no TTL delay, and it works at the zone apex

This project forced me to really understand VPC networking in depth: what each endpoint does, why security group chaining matters, and how ECS agent registration works behind the scenes.

Full deployment guide documented with architecture, configs, and a 14-point verification checklist. Happy to share if useful.

#AWS #DevOps #ECS #Docker #Kubernetes #CloudArchitecture #VPC #NodeJS #CloudComputing #SRE
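A multi-stage Node.js build of the kind described might look roughly like this. The file names (`server.js`) and base image are assumptions for illustration, not the author's actual Dockerfile.

```dockerfile
# Stage 1: install production dependencies only
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

# Stage 2: slim runtime image that carries no build tooling
FROM node:20-alpine
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Per the ARM64/x86_64 lesson in the post, building from an Apple Silicon Mac for an x86 ECS instance would need an explicit platform target, e.g. `docker buildx build --platform linux/amd64 -t <ecr-repo>:latest .`.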
I am excited to share my latest project, where I explored Blue-Green Deployment to achieve zero-downtime updates for web applications!

In this project, I simulated a real-world scenario where a company needs to switch between two application versions (a "Klassy Cafe" site and a "Villa" rental site) seamlessly, without affecting the user experience.

Tech stack & services used:
- Compute: 4 EC2 instances (2 Blue, 2 Green) running Ubuntu
- Networking: Application Load Balancer (ALB) for intelligent traffic routing
- Traffic management: Target groups to organize server fleets
- DNS: Route 53 for domain management (sidcloudaws.online) with Hostinger integration
- Security: AWS Certificate Manager (ACM) for transitioning to HTTPS

Key takeaways:
- Zero downtime: Learned how to switch traffic from the "Green" environment (Cafe app) to the "Blue" environment (Villa app) instantly by modifying Load Balancer listener rules.
- Scalability: Configured target groups to handle multiple instances, ensuring the application remains highly available.

This project was a great way to dive deep into AWS infrastructure and understand how modern DevOps teams manage application lifecycles and deployments.

Check out the video below to see the seamless transition in action! 👇

#AWS #CloudComputing #DevOps #BlueGreenDeployment #EC2 #Route53 #LearningEveryday #TechProject
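The instant cutover via listener rules could be sketched in Terraform as a listener whose default action forwards to whichever fleet is live. This is a hedged sketch with hypothetical resource names, not the author's actual setup.

```hcl
variable "live_environment" {
  description = "Which fleet receives production traffic"
  default     = "green" # flip to "blue" to cut over instantly
}

resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.app.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-TLS13-1-2-2021-06"
  certificate_arn   = aws_acm_certificate.site.arn

  # Changing the variable and re-applying redirects all new traffic at once,
  # while existing in-flight requests drain from the old target group.
  default_action {
    type             = "forward"
    target_group_arn = var.live_environment == "green" ? aws_lb_target_group.green.arn : aws_lb_target_group.blue.arn
  }
}
```

The same switch can be done in the console by editing the listener's forward action, which is what the post describes.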
🚀 Scaling your app? Don't let local storage break your load balancer.

You have a growing website where users upload files (PDFs, images, profile pictures). To handle the traffic, you add a load balancer and a second web server. Everything seems great, until users start complaining: "I just uploaded my document, and now it's gone!"

What happened? The "local storage trap."

🔴 The problem:
1. A user connects → the load balancer sends them to Server A.
2. They upload report.pdf. It's saved locally on Server A.
3. A minute later, they request the file → the load balancer sends them to Server B.
4. Server B looks at its own disk... 404 Not Found. The file is stuck on Server A.

🟢 The solution: centralized storage. When scaling horizontally, your servers must share a "source of truth" for user files. There are two standard approaches:

1️⃣ Network file systems (e.g., AWS EFS), the "easy" way: mount a shared drive (like EFS) at the /uploads folder on all servers.
Pros: No code changes required; the servers think it's a local disk.
Cons: Higher latency; can be more expensive at scale.

2️⃣ Object storage (e.g., AWS S3), the "best practice" way: modify your application code to upload files directly to an S3 bucket instead of a local directory.
Pros: Highly reliable, virtually unlimited storage, lower cost, and built-in features like Content Delivery Network (CDN) integration.
Cons: Requires code modification to implement the SDK.

If you have a load balancer, local storage is your enemy. Move your uploads to shared storage (EFS or S3) so your users (and servers) stay sane.

#WebDevelopment #CloudArchitecture #AWS #DevOps #Scaling #SystemDesign
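The trap is easy to reproduce locally. In this toy simulation, two directories stand in for each server's local disk and a third stands in for shared storage (an EFS mount or S3 bucket); it is purely illustrative.

```shell
#!/usr/bin/env bash
set -eu
# Two "servers", each with its own local disk, plus one shared store
mkdir -p serverA/uploads serverB/uploads shared/uploads

# Request 1 lands on server A: the upload is written to A's local disk
echo "quarterly report" > serverA/uploads/report.pdf

# Request 2 lands on server B: the file is nowhere to be found
if [ ! -f serverB/uploads/report.pdf ]; then
  echo "404 Not Found on server B"
fi

# The fix: both servers read and write the shared location instead
echo "quarterly report" > shared/uploads/report.pdf
[ -f shared/uploads/report.pdf ] && echo "found via shared storage"
```

Swap `shared/` for an EFS mount point (no code changes) or an S3 `PutObject` call (code changes, but the more scalable option) and you have the two approaches from the post.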
A live demo is now available for my AWS Architecture Builder & Validator portfolio project.

Built with React, TypeScript, Vite, and React Flow, the app lets users visually build AWS architectures, validate common service relationships, and work through guided practice exercises. My main focus so far has been on functionality, logic, and user flow; further refinements will follow.

Live demo: https://lnkd.in/dNt8M3rG

Feedback is very welcome, especially around usability, architecture logic, and the overall learning experience.

#React #TypeScript #AWS #Frontend #CloudComputing
🚀 **AWS Hands-on: High Availability Setup using Load Balancer & NGINX**

Built a simple yet powerful **high-availability web architecture** on AWS to understand real-world traffic distribution.

🔧 **Architecture Overview:**
* 2 × EC2 instances (Amazon Linux)
* NGINX web server on both instances
* Application Load Balancer (ALB)
* Custom HTML responses from each server

⚙️ **Configuration Steps:**
* Installed and started NGINX on both instances
* Deployed a unique `index.html` on each server to identify responses
* Created a target group and attached both EC2 instances
* Configured the ALB listener on **HTTP (port 80)**

💻 **Sample Setup Commands:**

```bash
sudo -i                  # switch to root
yum update -y            # refresh packages
yum install nginx -y     # install NGINX
systemctl start nginx    # start the web server
echo "Server 1 Response" > /usr/share/nginx/html/index.html
```

📊 **Result (Round-Robin Traffic):**
Accessing the load balancer DNS shows alternating responses from both servers, confirming proper **traffic distribution**.

🧠 **Key Technical Learnings:**
* The load balancer distributes traffic using the **round-robin algorithm**
* Improves **fault tolerance & availability**
* Decouples client traffic from backend servers
* A basic foundation for **scalable microservices architecture**

📌 This is a fundamental building block for designing **resilient cloud systems**.

#AWS #DevOps #CloudArchitecture #NGINX #LoadBalancer #EC2 #HighAvailability #CloudEngineering #CloudDevOpsHub
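The alternating-response behavior can be pictured with a toy loop; this is a pure simulation of round-robin order, with no AWS calls involved.

```shell
#!/usr/bin/env bash
# Toy model of round-robin: with two registered targets, consecutive
# requests alternate between them in strict order.
responses=""
for request in 1 2 3 4; do
  if [ $(( request % 2 )) -eq 1 ]; then
    reply="Server 1 Response"   # odd-numbered requests hit server 1
  else
    reply="Server 2 Response"   # even-numbered requests hit server 2
  fi
  echo "request $request -> $reply"
  responses="$responses$reply;"
done
```

In practice the same check is done by running `curl` against the ALB DNS name a few times and watching the custom `index.html` responses alternate.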
Hands-on: End-to-End Container Lifecycle Management on AWS EC2 🐳☁️

I just completed a technical lab focused on deploying and managing containerized applications using #Docker and #AWS. Rather than using managed services, I took a "from the ground up" approach to understand the underlying architecture of the Docker engine and its interaction with Linux environments.

Technical highlights:
- Environment provisioning: Configured an AWS EC2 (Ubuntu) instance, ran shell scripts to install the Docker engine, and managed user group permissions to enable non-root execution.
- Image architecture: Used Dockerfile manifests to build custom images (`docker build -t`), managing layers and optimizing disk usage.
- Networking & port mapping: Implemented port forwarding (mapping host port 80 to container port 8080) to expose Apache Tomcat services to the public internet via IPv4.
- Resource orchestration: Managed simultaneous container lifecycles, using the Docker CLI for process monitoring (`docker ps -a`), signal handling (`docker stop`), and resource pruning (`docker rm`/`docker rmi`) to ensure zero environment drift.

Moving from manual CLI management toward automated CI/CD workflows is the next step. Excited to keep building! 🏗️

#DevOps #CloudEngineering #Docker #AWS #LinuxAdministration #Containerization #SRE
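A minimal Dockerfile for the Tomcat scenario might look like the following; the base image tag and WAR file name are hypothetical stand-ins, not the lab's exact manifest.

```dockerfile
# Tomcat listens on 8080 inside the container
FROM tomcat:9.0
COPY app.war /usr/local/tomcat/webapps/ROOT.war
EXPOSE 8080
```

Built with `docker build -t myapp .` and started with `docker run -d -p 80:8080 myapp`, which maps host port 80 to container port 8080 as the post describes; `docker ps -a`, `docker stop`, and `docker rm`/`docker rmi` then cover the rest of the lifecycle.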
Interview Questions (Important)

Q1: What is EKS?
👉 A managed Kubernetes service by AWS.

Q2: Who manages the control plane?
👉 AWS.

Q3: What are worker nodes?
👉 EC2 instances or Fargate capacity running pods.

Q4: What is the difference between a Pod and a Container?
👉 Pod = wrapper, Container = actual app.

Q5: How do you expose an application?
👉 Using a Service (LoadBalancer / Ingress).

🚀 Real Scenario (Interview Level)
👉 "Your pod is not accessible." Check:
- Pod status (`kubectl get pods`)
- Service type (ClusterIP vs LoadBalancer)
- Security group rules
- Ingress configuration
- Logs (`kubectl logs`)

🧩 Simple Analogy
- Kubernetes = operating system
- EKS = managed OS by AWS
- Pod = application
- Node = server
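For Q5, exposing an app with a LoadBalancer Service looks roughly like this (the app name and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer   # on EKS this provisions an AWS load balancer
  selector:
    app: my-app        # routes to pods carrying this label
  ports:
    - port: 80         # port the Service exposes
      targetPort: 8080 # port the container listens on
```

A ClusterIP Service (the default type) would only be reachable inside the cluster, which is exactly the distinction the "pod is not accessible" scenario asks you to check.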
🔴 Not a tutorial-based project: built from scratch.

Over the last 2 days, I worked on automating a full-stack application using AWS (EC2, S3, ECS Fargate, VPC, IAM, Security Groups), Docker, Terraform, GitHub Actions, and Nginx. While building the project, I faced several issues; none were very complex, but they were enough to reinforce how real-world debugging actually works.

🧩 Day 1: Networking Reality Check

🔴 Problem:
- The application was not accessible via browser (timeout)
- But `curl localhost` worked inside EC2

❌ Misconception: "If the app runs on the server, it should be accessible publicly."

✅ Solution:
- Fixed the VPC, subnet, Internet Gateway, and route table
- Opened port 80 in the security group
- Ensured the EC2 instance was in a public subnet

✔ Result: The application became publicly accessible.

🧩 Day 2: Deployment & Execution Issues

🔴 Problem:
- Docker container running but not accessible from the browser
- Inconsistent behavior with port access

❌ Misconceptions:
- `terraform apply` = everything works automatically
- `docker run` = the app is publicly accessible

👉 Actual issues:
- EC2 was not fully initialized
- Incorrect port exposure / Nginx confusion
- Mixing old and new infrastructure resources

✅ Solution:
- Waited for EC2 health checks (2/2)
- Corrected the Docker port mapping (`-p 80:80`)
- Did a clean reset using Terraform
- Used HTTP explicitly

✔ Result: The frontend was successfully served via the public IP.

🎯 Key Takeaways:
- A running app ≠ public accessibility
- AWS networking (VPC, routing, SGs) is critical
- Terraform provisions infrastructure, not the application runtime
- Proper container port exposure is essential
- Timing and deployment sequence matter

Building from scratch (instead of following tutorials blindly) really shows where things break and, more importantly, how to fix them.

#DevOps #AWS #Terraform #Docker #ECS #CloudComputing #FullStack #Debugging #LearningInPublic #SoftwareEngineering #fixbug #productionlevel
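The Day 1 networking fix corresponds to a few Terraform resources along these lines. This is a hedged sketch with hypothetical resource names, not the project's actual code.

```hcl
# Route the public subnet's outbound traffic through the Internet Gateway
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }
}

# Open HTTP so the app is reachable from a browser
resource "aws_security_group_rule" "http_in" {
  type              = "ingress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.app.id
}
```

The subnet also needs an `aws_route_table_association` linking it to this route table, and the instance needs a public IP; miss any of those pieces and you get exactly the "curl localhost works, browser times out" symptom from Day 1.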
🚨 Kubernetes Cluster Issue on AWS EC2 – Need Expert Suggestions 🚨

I recently built a Kubernetes cluster on AWS using two EC2 instances:
✅ 1 master node
✅ 1 worker node

I completed all the required configuration on both servers, installed all necessary components, and successfully connected the worker node to the master node using the join token. After that, everything was working perfectly:
✔ Nodes were created
✔ Pods were running
✔ Deployments succeeded
✔ An NGINX application was deployed successfully

I even tested the application using the worker node's public IP with the exposed port, and the NGINX page opened successfully.

However, after some time, the entire cluster stopped responding:
❌ Pods were no longer visible
❌ Nodes were not showing
❌ Deployments disappeared from the output

When I checked again, I got this error:

The connection to the server 172.31.x.x:6443 was refused

My question to the community: what could be the possible reasons for this issue? Could it be:
🔹 kube-apiserver stopped?
🔹 kubelet service failure?
🔹 A security group / firewall issue?
🔹 EC2 restart or resource exhaustion?
🔹 etcd failure?
🔹 Wrong networking configuration?
🔹 Control plane components crashing?

And how would you troubleshoot and permanently fix this kind of issue? Please comment!

#Kubernetes #AWS #EC2 #DevOps #CloudComputing #K8s #Docker #NGINX #Linux #Infrastructure #SRE #PlatformEngineering #Kubeadm #CloudEngineer #Troubleshooting #ClusterManagement #Containers #LearningInPublic #TechCommunity #Automation #SystemAdmin #AWSCloud #DevSecOps #CloudNative #KubernetesCluster #AWSDevOps #AmazonWebServices #Containerization #Microservices #Helm #Kubectl #Kubelet #KubeAPI #CloudEngineerLife #SiteReliabilityEngineering #Monitoring #Logging #CI_CD #Jenkins #GitHubActions #Terraform #Ansible #Prometheus #Grafana #Ingress #LoadBalancer #Networking #LinuxAdmin #ServerManagement #Scalability #HighAvailability #DisasterRecovery #TechJobs #OpenToWork #CareerGrowth #ITInfrastructure #CloudSecurity #ZeroDowntime #TechLearning #FutureOfTech #EngineerLife #ProblemSolving #TechSupport #Innovation #CloudOps #KubernetesSecurity #ContainerOrchestration
🚀 Deployed an Application Using an AWS CI/CD Pipeline (S3 as Source)

Successfully built and deployed a web application using a fully automated CI/CD pipeline on AWS, without relying on GitHub.

🔧 Architecture Used:
- Source: Amazon S3
- Pipeline: AWS CodePipeline
- Deployment: AWS CodeDeploy / EC2
- Compute: Amazon EC2 (Apache web server)

📦 Workflow:
1. Uploaded the application bundle (HTML, CSS, JS + appspec.yml) to an S3 bucket
2. Configured CodePipeline to fetch the source directly from S3
3. The pipeline triggered automatically on object update
4. CodeDeploy deployed the application to the EC2 instance
5. The application is served via Apache on the Linux instance

⚙️ Key Highlights:
- Eliminated the dependency on GitHub
- Fast and simple deployment using S3 as the artifact source
- Automated end-to-end pipeline
- Scalable and production-ready setup

💡 Use Case: ideal for scenarios where:
- Source code is stored in S3
- No version control integration is required
- Quick deployments or internal projects are needed

📊 Pipeline Flow: S3 Bucket → CodePipeline → CodeDeploy → EC2 → Live Application

✅ Deployment completed successfully with zero manual intervention!

#AWS #DevOps #CI_CD #CodePipeline #CodeDeploy #CloudComputing #S3 #Automation #ElasticBeanstalk
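The appspec.yml in the bundle would look something like the sketch below for an Apache deployment; the hook script name is hypothetical, not the author's actual file.

```yaml
version: 0.0
os: linux
files:
  - source: /                 # everything in the bundle root
    destination: /var/www/html # Apache's default document root
hooks:
  AfterInstall:
    - location: scripts/restart_apache.sh
      timeout: 60
      runas: root
```

CodeDeploy reads this file from the root of the revision, copies the listed files into place, then runs each lifecycle hook in order.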