🚀 Just built and deployed my own AWS EC2-like cloud platform — NimbusCloud!

Over the past few weeks, I worked on understanding how cloud services actually work under the hood — and ended up building a mini version of EC2 from scratch.

💡 What it does:
• Launch compute instances (powered by Docker containers)
• Start, stop, and delete instances (full lifecycle management)
• Connect to instances via a browser-based Linux terminal
• Execute real Linux commands using xterm.js + WebSockets
• Fully deployed on AWS EC2 and accessible publicly

🌐 Live Demo: 👉 http://3.17.204.2:5000 (anyone can try launching and connecting to instances)

⚙️ Tech Stack:
• Flask (backend / API layer)
• Docker (compute layer — simulating EC2 instances)
• xterm.js + WebSockets (real-time terminal)
• HTML, CSS, JS (frontend)
• AWS EC2 (deployment)

🔥 What I learned:
• How EC2-like services manage compute resources
• How to connect frontend ↔ backend ↔ infrastructure
• Real-time communication using WebSockets
• Debugging real-world issues (routing, Docker behavior, deployment)
• The importance of proper backend serving instead of static file handling

📌 Key Highlight: Building a web-based terminal where users can run commands directly inside their instances — similar to AWS EC2 Instance Connect — was the most exciting part.

🚧 Next Improvements:
• Add authentication & security layers
• Instance monitoring dashboard
• Persistent shell sessions

This project helped me move beyond tutorials and think like a DevOps + backend engineer. Would love your feedback 🙌

#DevOps #CloudComputing #AWS #Docker #Flask #WebSockets #FullStack #Projects #LearningInPublic
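The instance lifecycle described above (launch, start, stop, delete, each backed by a Docker container) can be sketched roughly like this. This is an illustrative Python sketch, not the actual NimbusCloud code; the function names, the base image, and the `sleep infinity` trick are my assumptions:

```python
import subprocess


def nimbus_cmd(action: str, instance_id: str, image: str = "ubuntu:22.04") -> list[str]:
    """Map an instance-lifecycle action onto the Docker CLI call that backs it."""
    commands = {
        # A long-running no-op keeps the "instance" alive until a user connects.
        "launch": ["docker", "run", "-d", "--name", instance_id, image, "sleep", "infinity"],
        "start": ["docker", "start", instance_id],
        "stop": ["docker", "stop", instance_id],
        "delete": ["docker", "rm", "-f", instance_id],
    }
    if action not in commands:
        raise ValueError(f"unknown action: {action}")
    return commands[action]


def run_action(action: str, instance_id: str) -> None:
    """Shell out to Docker; in the real app this would sit behind a Flask route."""
    subprocess.run(nimbus_cmd(action, instance_id), check=True)
```

Splitting the command builder from the executor keeps the mapping testable without Docker installed.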
Mohammad Amaan’s Post
More Relevant Posts
One of those days where I just had the urge to build something personal. I had an idea for an app, started building it, and once it was done I thought: why not use this as a real AWS deployment project? So that's what I did.

🔧 Infrastructure:
• VPC with public & private subnets + custom route tables
• Internet Gateway for public access, NAT routing for private resources

⚙️ Stack:
• Frontend → AWS Amplify (with built-in CI/CD)
• Backend → Node.js on EC2 (public subnet, PM2 for process management)
• Database → RDS in a private subnet (zero direct internet exposure)
• Auth → Amazon Cognito
• File storage → S3 (public assets)

Keeping the DB isolated in a private subnet while giving it controlled internet access via route table config was a key focus: security without sacrificing functionality. PM2 keeps the backend resilient, and Cognito removes the overhead of building auth from scratch.

📌 What's next:
• Move the backend behind an Application Load Balancer + private subnet
• Add CloudFront CDN in front of S3 and Amplify
• Introduce auto scaling for the EC2 layer
• Set up CloudWatch monitoring & alerts
• Migrate toward containerization with ECS, or explore serverless with Lambda
• Terraform to automate and codify the entire infrastructure

Every project teaches you something new about trade-offs. This one was about balancing simplicity with production-readiness.

You can check it out here: https://lnkd.in/epsw3xH7

PS: I'll be deleting the link soon, once I destroy the resources, because of cost.

#AWS #CloudArchitecture #FullStack #DevOps #NodeJS #RDS #Amplify #Cognito
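The "controlled internet access via route table config" idea can be sketched with boto3: a default route through a NAT gateway gives the private subnet outbound access while keeping it unreachable from outside. This is a hedged illustration, not the author's actual setup (the post doesn't say how the infra was provisioned); all resource IDs and helper names are placeholders:

```python
def private_route_spec(nat_gateway_id: str) -> dict:
    """Default route sending outbound traffic through a NAT gateway:
    the subnet can reach the internet, but nothing can reach it directly."""
    return {"DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": nat_gateway_id}


def attach_private_route(vpc_id: str, subnet_id: str, nat_gateway_id: str) -> None:
    """Create a route table, add the NAT default route, bind it to the subnet."""
    import boto3  # imported here so the pure helper above needs no AWS deps

    ec2 = boto3.client("ec2")
    rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.create_route(RouteTableId=rt_id, **private_route_spec(nat_gateway_id))
    ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)
```

The public subnet gets the same shape of route, just pointed at the Internet Gateway instead of the NAT gateway.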
Built a small personal project this weekend to connect a few things I work with daily.

The idea was simple: provision an EC2 instance on AWS using Terraform, set up a Kubernetes cluster on it using Minikube, and deploy a containerized React app from Docker Hub.

A clean end-to-end flow: Terraform handles the infra (VPC, subnet, security groups, EC2), kubectl applies the manifest, and the app runs as a pod exposed via a NodePort service. The whole thing is on GitHub with a proper .gitignore, so no credentials or state files are committed.

Sometimes the best way to understand how pieces fit together is to just wire them up yourself.

GitHub: https://lnkd.in/gQk9TQn7

#Terraform #Kubernetes #AWS #DevOps #SRE
🔴 Not a tutorial-based project — built from scratch

Over the last 2 days, I worked on automating a full-stack application using AWS (EC2, S3, ECS Fargate, VPC, IAM, Security Groups), Docker, Terraform, GitHub Actions, and Nginx. While building the project, I faced several issues — none very complex, but enough to reinforce how real-world debugging actually works.

🧩 Day 1: Networking Reality Check

🔴 Problem:
- Application was not accessible via browser (timeout)
- But curl localhost worked inside EC2

❌ Misconception: "If the app runs on the server, it should be accessible publicly."

✅ Solution:
- Fixed the VPC, subnet, Internet Gateway, and route table
- Opened port 80 in the Security Group
- Ensured EC2 was in a public subnet

✔ Result: Application became publicly accessible

🧩 Day 2: Deployment & Execution Issues

🔴 Problem:
- Docker container running but not accessible from the browser
- Inconsistent behavior with port access

❌ Misconceptions:
- terraform apply = everything works automatically
- docker run = app is publicly accessible

👉 Actual issues:
- EC2 was not fully initialized
- Incorrect port exposure / Nginx confusion
- Mixing old and new infrastructure resources

✅ Solution:
- Waited for EC2 health checks (2/2)
- Correct Docker port mapping (-p 80:80)
- Clean reset using Terraform
- Explicit HTTP usage

✔ Result: Frontend successfully served via the public IP

🎯 Key Takeaways:
- Running app ≠ public accessibility
- AWS networking (VPC, routing, security groups) is critical
- Terraform provisions infrastructure, not application runtime
- Proper container port exposure is essential
- Timing and deployment sequence matter

Building from scratch (instead of following tutorials blindly) really shows where things break and, more importantly, how to fix them.

#DevOps #AWS #Terraform #Docker #ECS #CloudComputing #FullStack #Debugging #LearningInPublic #SoftwareEngineering #fixbug #productionlevel
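The Day 1 fix ("opened port 80 in the Security Group") maps to a single EC2 API call. A minimal boto3 sketch, assuming the rule is HTTP-from-anywhere; the helper names are mine, and the post's project used Terraform rather than boto3 for this:

```python
def http_ingress_rule(cidr: str = "0.0.0.0/0") -> dict:
    """Ingress permission opening TCP port 80, i.e. the fix for
    'curl localhost works on the box but the browser times out'."""
    return {
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "IpRanges": [{"CidrIp": cidr, "Description": "HTTP from anywhere"}],
    }


def open_http(security_group_id: str) -> None:
    """Apply the rule to an existing security group."""
    import boto3  # imported here so the rule builder stays importable without AWS deps

    boto3.client("ec2").authorize_security_group_ingress(
        GroupId=security_group_id, IpPermissions=[http_ingress_rule()]
    )
```

The security group only matters once routing works: the instance also needs a public subnet whose route table points 0.0.0.0/0 at the Internet Gateway, which is the rest of the Day 1 fix.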
I just deployed my first Dockerized app to AWS EC2 with a fully automated CI/CD pipeline. 🚀

Here's what I built:
🐳 Containerized a Next.js app using a multi-stage Dockerfile
⚙️ Set up GitHub Actions to automatically deploy on every git push
🖥️ Hosted on AWS EC2 with proper security group configuration
📜 Wrote a custom deploy.sh script — zero manual work

The best part? I push code → it's live in minutes. No SSH. No manual commands. Nothing.

Problems I ran into (and fixed):
❌ npm dependency conflicts → fixed with --legacy-peer-deps
❌ Docker storage exhausted on EC2 → fixed with docker system prune
❌ Port conflicts → debugged with docker ps and cleaned up
❌ EC2 RAM exhausted during build → added 2GB of swap memory
❌ Disk full → expanded the EBS volume from 6GB to 16GB

Every error taught me something new.

I'm a CSE student working as a Key Account Manager by day — but DevOps is where I'm heading. Building one project at a time. 🔧

🔗 GitHub: https://lnkd.in/gaP8yv55

#DevOps #Docker #AWS #EC2 #GitHubActions #CICD #LearningInPublic #CloudComputing #CSE
Companies are bleeding money on AWS bills. Not because the cloud is expensive — because the infrastructure wasn't built with cost in mind from the start.

I wanted to understand that problem from the inside, so I built CloudCost, a multi-tier web app where FinOps, security, and resilience were the requirements, not the afterthoughts.

Here's what I focused on:
→ The whole thing runs at roughly $1/day idle. Every single infrastructure decision has a cost reason behind it.
→ Auto Scaling Group that scales out at 70% CPU and scales back in at 30%. No idle capacity sitting around burning money.
→ Two layers of self-healing: Docker restarts a crashed container in seconds; the ASG replaces a failed instance in minutes. Zero manual intervention either way.
→ The RDS password lives only in Secrets Manager. EC2 fetches it at boot through an IAM role scoped to that single secret ARN. Nothing in code, nothing in env vars, nothing in Terraform files.
→ Full network isolation. RDS has no public IP, EC2 is unreachable from the internet directly, and everything goes through the ALB.
→ CloudWatch alarms wired directly to scaling policies, 7-day log retention, basic monitoring only. Detailed monitoring costs extra, and 5-minute intervals are enough.
→ Jenkins running locally in Docker. No extra EC2 spend for the build server.

FinOps, security, and resilience are not things you bolt on later. This project was built around that belief.

Code + full documentation: https://lnkd.in/daBxh_9z

#AWS #DevOps #CloudComputing #FinOps #Terraform #Jenkins #Python
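The Secrets Manager pattern above (fetch at boot via an IAM role, nothing hardcoded anywhere) looks roughly like this in Python. A sketch under assumptions: the secret is a JSON blob with username/password keys, and the helper names are illustrative rather than taken from the CloudCost repo:

```python
import json


def parse_db_secret(secret_string: str) -> tuple[str, str]:
    """RDS secrets in Secrets Manager are JSON blobs; pull out user and password."""
    blob = json.loads(secret_string)
    return blob["username"], blob["password"]


def fetch_db_credentials(secret_id: str) -> tuple[str, str]:
    """Fetch at boot. The EC2 instance's IAM role supplies the AWS credentials,
    so nothing lives in code, env vars, or Terraform files."""
    import boto3  # imported here so the parser stays importable without AWS deps

    resp = boto3.client("secretsmanager").get_secret_value(SecretId=secret_id)
    return parse_db_secret(resp["SecretString"])
```

Scoping the IAM policy to the single secret ARN means a compromised instance can read exactly one secret, nothing else.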
Built an end-to-end EC2 Governance Engine on AWS using Python, Terraform, Lambda, EventBridge, SNS, Slack, and S3.

This project scans EC2 instances across all states, applies governance rules, snapshots stopped instances, optionally terminates approved ones, generates CSV reports, stores them in S3, and sends notifications by email and Slack.

What I liked most about this build was combining infrastructure automation with real operational governance, not just deployment. It was a great hands-on way to work with Lambda packaging, Terraform workflows, reporting, notifications, and cloud cost/control practices.

Tech used: AWS Lambda, EC2, EBS, EventBridge, SNS, S3, Terraform, Python, GitHub Actions, Slack API

Always growing through building real-world cloud projects.

GitHub: https://lnkd.in/grx7mGHQ

#AWS #Terraform #Python #DevOps #CloudComputing #Lambda #InfrastructureAsCode #GitHubActions #EC2 #CloudEngineering
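The core governance pass described above (scan all states, snapshot stopped instances, terminate only approved ones) can be sketched as a pure decision function plus a boto3 scan. Illustrative only; the real engine's rules and function names may differ:

```python
def governance_actions(instances: list[dict], approved: set[str]) -> list[tuple[str, str]]:
    """Decide what to do with each instance: snapshot stopped ones,
    terminate only the explicitly approved ones, leave the rest alone.
    `instances` uses the describe_instances response shape."""
    actions = []
    for inst in instances:
        iid, state = inst["InstanceId"], inst["State"]["Name"]
        if state == "stopped":
            actions.append((iid, "snapshot"))
        if iid in approved:
            actions.append((iid, "terminate"))
    return actions


def scan_all_instances() -> list[dict]:
    """Page through every instance in the region, in all states."""
    import boto3  # imported here so the rule logic stays testable without AWS deps

    pages = boto3.client("ec2").get_paginator("describe_instances").paginate()
    return [i for page in pages for r in page["Reservations"] for i in r["Instances"]]
```

Keeping the rules as a pure function over the API's response shape makes the governance logic unit-testable, which matters once it can terminate instances.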
🚀 Day 23 of 100 Days of DevOps

🚨 I was managing…
- 50 EC2 configs
- Nginx setup
- Auto Scaling
- Load Balancer

Every deploy felt like a nightmare. 💀 One mistake = downtime.

Then I discovered something surprising… 👉 You don't always need to "manage infrastructure."

⚡ Enter: AWS Elastic Beanstalk ⚡

What changed instantly:
• Deployment → git push → live app
• Scaling → automatic (ASG built in)
• Load balancing → handled for you
• Monitoring → CloudWatch integrated

🧠 The shift that changed everything: stop managing servers, start deploying applications.

⚙️ What Beanstalk actually does: you upload code, and it provisions EC2, a Load Balancer, Auto Scaling, and monitoring — automatically.

🔥 The part most people miss: Elastic Beanstalk is PaaS → you control the app, AWS controls the infrastructure.

💡 Best for: web apps, APIs, quick deployments

⚔️ But here's the REAL decision — when should you NOT use Beanstalk?
→ Need full control? → EC2
→ Need containers at scale? → ECS/EKS
→ Event-driven apps? → Lambda

💡 Choosing the wrong tool = bad architecture 💣

Reality check: most beginners try to learn EVERYTHING at once. Top engineers ask: 👉 What is the simplest tool that solves this problem?

📌 My takeaway (Day 23): if you're over-engineering your deployment, you're slowing yourself down.

📚 I turned this into a visual comic (super easy to understand). Comment "BEANSTALK" and check it out below 🔥

Let's grow together 🚀

#Day23 #100DaysOfDevOps #AWS #ElasticBeanstalk #DevOps #CloudComputing #SystemDesign #LearnInPublic
🚀 Successfully deployed my first Node.js backend to AWS EC2!

After years of running apps on localhost, I finally took the leap into cloud deployment. Here's what the journey taught me:

𝗪𝗵𝗮𝘁 𝗜 𝗕𝘂𝗶𝗹𝘁:
Full-stack fitness tracking platform with AI-powered workout coaching
- Backend API: Node.js + Express
- Database: MongoDB Atlas
- AI: Google Gemini with RAG for personalized recommendations

𝗧𝗵𝗲 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁 𝗣𝗿𝗼𝗰𝗲𝘀𝘀:
✅ Launched an EC2 instance (Ubuntu t3.micro — free tier!)
✅ Configured Security Groups for network access
✅ Set up SSH key-based authentication
✅ Installed Node.js and dependencies
✅ Implemented PM2 for process management
✅ Configured auto-restart on server reboot

𝗞𝗲𝘆 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴𝘀:
• SSH keys > passwords
• Use environment variables for secrets (never commit .env)
💡 Process management is critical: without PM2, the app stops when SSH disconnects; with PM2, it keeps running in the background
💡 Cloud basics matter: understanding ports, networking, and the instance lifecycle is key

𝗖𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲𝘀 𝗙𝗮𝗰𝗲𝗱:
❌ First attempt: forgot to open port 5000 in the Security Groups. Lesson: network access needs proper configuration.
❌ PM2 stopped working after a reboot. Lesson: always run pm2 startup and pm2 save.

𝗡𝗲𝘅𝘁 𝗨𝗽:
📦 Deploying the Angular frontend
🔄 Setting up a CI/CD pipeline
🔒 Adding HTTPS for secure connections

𝗙𝗼𝗿 𝗙𝗲𝗹𝗹𝗼𝘄 𝗗𝗲𝘃𝗲𝗹𝗼𝗽𝗲𝗿𝘀: If you've been thinking about trying cloud deployment, just start. Launch an instance, experiment, break things, and learn along the way.

Also curious — for those who've worked with different platforms, which do you prefer for beginners: AWS, Azure, or something else?

#CloudComputing #WebDevelopment #FullStackDeveloper #DevOps #NodeJS #LearningInPublic #SoftwareEngineering #BuildInPublic
When I started learning AWS, I was completely lost. 😅

200+ services. No idea which one to use. No idea where each fits. Should I use EC2 or Lambda? RDS or DynamoDB? What even is CloudFront?

I struggled for weeks — and I know I'm not alone in this. So I decided to build something that I WISH existed when I was starting out. 👇

🚀 An interactive HTML guide that covers:
✅ Which AWS service does what — in plain English
✅ Where to host your frontend (S3 + CloudFront)
✅ Where to host your backend (EC2 / Beanstalk / Lambda)
✅ Where to store your database (RDS / DynamoDB)
✅ How to INTEGRATE all three together (with real code!)
✅ Step-by-step deployment process — phase by phase

No jargon. No confusion. Just simple explanations + real use cases. You can search any service, click to expand, and see exactly when and where to use it.

🔗 Download the free interactive file → https://lnkd.in/dS5B5bMi

If this helps even ONE person avoid the confusion I went through — that's enough for me. 🙌

Save this post for your AWS learning journey! 📌

#AWS #CloudComputing #AWSCertification #LearnInPublic #DevOps #CloudArchitecture #WebDevelopment #TechCommunity
Excited to share CloudNotes — my latest full-stack serverless project on AWS!

As I deepen my journey into Cloud Engineering and DevOps, I aimed to create something genuinely production-ready — not just a tutorial clone.

The Architecture:
- React + TypeScript SPA
- CloudFront CDN
- API Gateway
- Lambda (Node.js)
- DynamoDB

Engineering Challenges I Solved:
- OOM build crash on t2.micro — fixed with Linux swap and NODE_OPTIONS heap tuning
- CORS preflight failures — resolved in API Gateway and the Lambda response headers
- SPA 403 on refresh — addressed with CloudFront Custom Error Response rules

Every problem reinforced one key insight: cloud engineering is about understanding how data flows securely and reliably between services.

GitHub: https://lnkd.in/gDFSvH5e

#AWS #CloudEngineering #DevOps #Serverless #React #TypeScript #Lambda #DynamoDB #LearningJourney
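The CORS fix described above (preflight handled via the Lambda response headers) can be sketched as follows. The post's Lambda runs Node.js; this is the same idea in Python purely for illustration, and the allowed origin is a placeholder for the real CloudFront domain:

```python
import json

CORS_HEADERS = {
    "Access-Control-Allow-Origin": "https://dXXXXXXXX.cloudfront.net",  # placeholder domain
    "Access-Control-Allow-Methods": "GET,POST,PUT,DELETE,OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type,Authorization",
}


def cors_response(status: int, body: dict) -> dict:
    """Lambda proxy-integration response carrying the headers the browser expects."""
    return {"statusCode": status, "headers": dict(CORS_HEADERS), "body": json.dumps(body)}


def handler(event, context):
    """Answer the OPTIONS preflight directly; real requests get data."""
    if event.get("httpMethod") == "OPTIONS":
        return cors_response(204, {})
    return cors_response(200, {"notes": []})
```

The key is that every response path, including errors, must carry the CORS headers; a 500 without them shows up in the browser as an opaque CORS failure rather than the real error.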