Companies are bleeding money on AWS bills. Not because the cloud is expensive, but because the infrastructure wasn't built with cost in mind from the start. I wanted to understand that problem from the inside, so I built CloudCost, a multi-tier web app where FinOps, security, and resilience were the requirements, not the afterthoughts.

Here's what I focused on:
→ The whole thing runs at roughly $1/day idle. Every infrastructure decision has a cost reason behind it.
→ Auto Scaling Group that scales out at 70% CPU and scales back in at 30%. No idle capacity sitting around burning money.
→ Two layers of self-healing: Docker restarts a crashed container in seconds; the ASG replaces a failed instance in minutes. Zero manual intervention either way.
→ The RDS password lives only in Secrets Manager. EC2 fetches it at boot through an IAM role scoped to that single secret ARN. Nothing in code, nothing in env vars, nothing in Terraform files.
→ Full network isolation. RDS has no public IP, EC2 is unreachable directly from the internet, and everything goes through the ALB.
→ CloudWatch alarms wired directly to scaling policies, 7-day log retention, basic monitoring only. Detailed monitoring costs extra, and 5-minute intervals are enough.
→ Jenkins running locally in Docker. No extra EC2 spend for a build server.

FinOps, security, and resilience are not things you bolt on later. This project was built around that belief.

Code + full documentation: https://lnkd.in/daBxh_9z

#AWS #DevOps #CloudComputing #FinOps #Terraform #Jenkins #Python
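The 70%/30% thresholds above act as simple hysteresis: two CloudWatch alarms, two scaling policies, and a dead band in between. A minimal Python sketch of that decision logic (the function name and capacity bounds are illustrative, not CloudCost's actual code):

```python
def scaling_decision(cpu_percent, current_capacity, min_size=1, max_size=4):
    """Mimic the two alarm-driven ASG policies: scale out above 70% CPU,
    scale in below 30%, and hold steady in the band between them."""
    if cpu_percent > 70 and current_capacity < max_size:
        return current_capacity + 1   # scale out: add one instance
    if cpu_percent < 30 and current_capacity > min_size:
        return current_capacity - 1   # scale in: remove one instance
    return current_capacity           # 30-70% band: do nothing (hysteresis)

print(scaling_decision(85, 2))  # 3
print(scaling_decision(50, 2))  # 2
print(scaling_decision(12, 2))  # 1
```

The dead band is what keeps the group from flapping: a load that hovers around 50% never triggers either policy.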
Optimizing AWS Costs with CloudCost: A FinOps Approach
Most deployment problems aren't code issues; they're environment issues. I'm continuing my AWS cloud program with Week 3, focused on fixing those environment issues with Docker.

What changed: Instead of configuring EC2 instances manually, I packaged the application into a container and deployed it consistently across environments.

What I built:
• Dockerized a Python Flask app
• Deployed it on AWS EC2 behind an ALB
• Standardized the runtime from local → cloud

Architecture: User → ALB → EC2 (Docker container) → Application

Key insight: Containers shift deployment from "setting up machines" to "running defined workloads." Same app. Same environment. Fewer surprises.

Progress hasn't been strictly linear. I've spent the last few weeks working on an exciting new project (a man has to survive). More on that soon. The focus remains: building systems that can be deployed, scaled, and operated, not just run.

Next: CI/CD to remove manual deployment entirely.

Project repo: https://lnkd.in/e2AVntC3 Check it out and let me know your thoughts.

#Docker #DevOps #AWS #CloudComputing #MachineLearning #LearningInPublic
I just deployed my first Dockerized app to AWS EC2 with a fully automated CI/CD pipeline. 🚀

Here's what I built:
🐳 Containerized a Next.js app using a multi-stage Dockerfile
⚙️ Set up GitHub Actions to deploy automatically on every git push
🖥️ Hosted on AWS EC2 with proper security group configuration
📜 Wrote a custom deploy.sh script: zero manual work

The best part? I push code → it's live in minutes. No SSH. No manual commands. Nothing.

Problems I ran into (and fixed):
❌ npm dependency conflicts → fixed with --legacy-peer-deps
❌ Docker storage exhausted on EC2 → fixed with docker system prune
❌ Port conflicts → debugged with docker ps and cleaned up
❌ EC2 RAM exhausted during build → added 2 GB of swap memory
❌ Disk full → expanded the EBS volume from 6 GB to 16 GB

Every error taught me something new. I'm a CSE student working as a Key Account Manager by day, but DevOps is where I'm heading. Building one project at a time. 🔧

🔗 GitHub: https://lnkd.in/gaP8yv55

#DevOps #Docker #AWS #EC2 #GitHubActions #CICD #LearningInPublic #CloudComputing #CSE
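The disk-exhaustion fix above can be pre-empted with a small preflight check before each build instead of waiting for the next failure. A hedged Python sketch (the threshold and the prune command choice are my own assumptions, not from this repo):

```python
import shutil
import subprocess

def preflight_disk_check(path="/", min_free_gb=2.0, dry_run=True):
    """Check free disk space before a Docker build; if it is below the
    threshold, reclaim space with `docker system prune` (the same fix
    used above when EC2 storage filled up)."""
    free_gb = shutil.disk_usage(path).free / 1024**3
    if free_gb >= min_free_gb:
        return f"ok: {free_gb:.1f} GB free"
    if dry_run:
        return f"low: {free_gb:.1f} GB free, would run docker system prune"
    # -f skips the interactive confirmation, which a CI step needs
    subprocess.run(["docker", "system", "prune", "-f"], check=True)
    return f"low: {free_gb:.1f} GB free, pruned"
```

Called at the top of deploy.sh (via `python3 -c ...` or a helper script), this turns a mid-build crash into a clean pre-build warning.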
Day #6 of the 6 Day DevOps Challenge DONE! 🔥

I used a suite of AWS services to build a CI/CD pipeline that automatically builds, tests, and deploys my web application whenever I push code changes to GitHub:
1️⃣ AWS CodePipeline to orchestrate the entire CI/CD process
2️⃣ AWS CloudFormation to launch the deployment infrastructure
3️⃣ AWS CodeBuild to compile and package my application
4️⃣ AWS CodeDeploy to deploy my app to an EC2 instance
5️⃣ Amazon S3 to store the build artifacts
6️⃣ Amazon EC2 to host my web application
7️⃣ AWS IAM to manage permissions
8️⃣ GitHub to store my code

Check out my documentation to see how I built this! 🙏 Huge thanks to NextWork for their resources and guidance. https://lnkd.in/dC_Xhgp5

#DevOps #AWS #CodePipeline #CodeBuild #CodeDeploy #6DayDevOpsChallenge #NextWork
🚀 Just built and deployed my own AWS EC2-like cloud platform: NimbusCloud!

Over the past few weeks, I worked on understanding how cloud services actually work under the hood, and ended up building a mini version of EC2 from scratch.

💡 What it does:
• Launch compute instances (powered by Docker containers)
• Start, stop, and delete instances (full lifecycle management)
• Connect to instances via a browser-based Linux terminal
• Execute real Linux commands using xterm.js + WebSockets
• Fully deployed on AWS EC2 and publicly accessible

🌐 Live demo: 👉 http://3.17.204.2:5000 (anyone can try launching and connecting to instances)

⚙️ Tech stack:
• Flask (backend / API layer)
• Docker (compute layer, simulating EC2 instances)
• xterm.js + WebSockets (real-time terminal)
• HTML, CSS, JS (frontend)
• AWS EC2 (deployment)

🔥 What I learned:
• How EC2-like services manage compute resources
• How to connect frontend ↔ backend ↔ infrastructure
• Real-time communication using WebSockets
• Debugging real-world issues (routing, Docker behavior, deployment)
• The importance of proper backend serving instead of static file handling

📌 Key highlight: Building a web-based terminal where users can run commands directly inside their instances, similar to AWS EC2 Instance Connect, was the most exciting part.

🚧 Next improvements:
• Authentication & security layers
• Instance monitoring dashboard
• Persistent shell sessions

This project helped me move beyond tutorials and think like a DevOps + backend engineer. Would love your feedback 🙌

#DevOps #CloudComputing #AWS #Docker #Flask #WebSockets #FullStack #Projects #LearningInPublic
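The start/stop/delete lifecycle described above is essentially a small state machine, and validating transitions before calling into Docker avoids a whole class of bugs (starting a deleted instance, double-stopping, etc.). A hedged Python sketch of that idea (state names and rules are illustrative assumptions, not NimbusCloud's actual code):

```python
# Allowed (state, action) pairs and the state each transition leads to.
TRANSITIONS = {
    ("stopped", "start"): "running",
    ("running", "stop"): "stopped",
    ("stopped", "delete"): "deleted",
    ("running", "delete"): "deleted",   # force-remove, like docker rm -f
}

def apply_action(state, action):
    """Return the next instance state, or raise if the action is
    invalid from the current state (e.g. starting a deleted instance)."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"cannot {action!r} an instance in state {state!r}")

print(apply_action("stopped", "start"))  # running
```

The API layer then only calls the Docker SDK after `apply_action` succeeds, so the UI can never drive a container into an inconsistent state.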
🚀 I stopped just studying AWS and built a production-style multi-AZ infrastructure from scratch. Here's what I put together using Terraform and GitLab CI/CD.

I wanted to understand how real cloud infrastructure works end to end, so I built it myself rather than just reading about it. Here's what's running:

🌐 Network: custom VPC with public & private subnets across 2 Availability Zones, Internet Gateway, regional NAT Gateway
⚖️ Compute: two Auto Scaling Groups (web tier + app tier) spanning both AZs, scaling on CPU thresholds
🔀 Load balancer: Application Load Balancer with HTTPS, SSL via ACM, and HTTP → HTTPS redirect
🗄️ Database: RDS MySQL with Multi-AZ enabled for failover
📊 Monitoring: 6 CloudWatch alarms with SNS email alerts and automatic scaling triggers
🔁 CI/CD: GitLab pipeline (validate → plan → apply → destroy) with manual gates on apply and destroy; state stored in S3

💡 Key learnings:
• Multi-AZ failover works, but introduces ~60s of downtime, so retry logic matters
• Scaling is not instant; cooldowns and thresholds need tuning
• Infrastructure is easy to deploy, but resilience is harder to design

Everything is defined as code. It's not perfect and there's plenty I'd do differently next time, but it was one of the best ways I've learned how these pieces actually fit together.

🧭 Architecture: I designed a full architecture diagram to clearly map networking, traffic flow, and failover behavior across AZs (available in the repository).

Full code on GitLab → https://lnkd.in/dddezUug

What would you improve or add?
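The ~60s failover window above is exactly where client-side retry logic earns its keep. A minimal sketch of retry with exponential backoff around a database call (function names, timings, and the simulated outage are illustrative, not from this repo):

```python
import time

def with_retry(operation, attempts=5, base_delay=1.0, sleep=time.sleep):
    """Retry `operation` with exponential backoff (1s, 2s, 4s, ...),
    enough total budget to ride out a Multi-AZ failover of ~a minute."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # failover took longer than our retry budget
            sleep(base_delay * 2 ** attempt)

# Simulate a DB that is down for the first 3 calls (the failover window).
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] <= 3:
        raise ConnectionError("primary unavailable")
    return "row"

print(with_retry(flaky_query, sleep=lambda s: None))  # row
```

Injecting `sleep` keeps the sketch testable; in production you would also add jitter so a fleet of clients does not retry in lockstep.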
🚀 AWS Terraform Platform: production-style IaC, end to end.

Over the past few days, I designed and deployed a modular Infrastructure as Code (IaC) platform on AWS: not just provisioning resources, but building a reusable system that can host real applications.

🔧 What I implemented:
• VPC with public subnets, routing, and an internet gateway
• Application Load Balancer (ALB) for traffic distribution
• Auto Scaling Group (ASG) with Launch Templates
• Fully automated EC2 provisioning using user_data
• Remote state management using S3 + DynamoDB (state locking)
• Modular Terraform architecture (reusable across environments)

🌐 Result: a working system where:
👉 Traffic flows from the ALB → EC2 instances
👉 Instances auto-scale and self-configure
👉 The application is deployed automatically on boot
👉 Everything is accessible via the public ALB DNS

⚠️ Key challenges I solved:
• Fixed missing route table associations (no-internet-access issue)
• Debugged EC2 bootstrap failures via cloud-init logs
• Resolved ALB health check failures
• Understood ASG replacement behavior & target draining

🧠 Key learning: this project helped me understand how infrastructure, networking, and application layers interact in real-world deployments.

📌 Next steps:
• Move to a private subnet architecture (NAT Gateway)
• Deploy a containerized application (Docker)
• Add HTTPS (ACM + ALB)
• Integrate an RDS backend

🔗 GitHub project: https://lnkd.in/gP92uvWS

#DevOps #Terraform #AWS #Cloud #InfrastructureAsCode #Kubernetes #CICD
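The ALB health check failures mentioned above ultimately come down to consecutive-success and consecutive-failure counting against configurable thresholds. A small Python sketch of how a target's state flips (the thresholds here are illustrative; actual defaults vary per target group configuration):

```python
def track_health(results, healthy_threshold=3, unhealthy_threshold=2):
    """Fold a sequence of per-check results (True = response in time)
    into a final target state, the way a target group counts
    consecutive successes and failures before flipping state."""
    state = "unhealthy"          # new targets start out-of-service
    successes = failures = 0
    for ok in results:
        if ok:
            successes, failures = successes + 1, 0
            if state == "unhealthy" and successes >= healthy_threshold:
                state = "healthy"
        else:
            failures, successes = failures + 1, 0
            if state == "healthy" and failures >= unhealthy_threshold:
                state = "unhealthy"
    return state

print(track_health([True, True, True]))  # healthy
```

This is why a slow-booting instance "fails" its first checks: it simply has not accumulated enough consecutive successes yet, which is also where a longer health check grace period on the ASG helps.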
“It’s running” doesn’t mean “it’s working.”

A recent deployment I undertook reinforced something I often emphasize: DevOps isn’t about running commands; it’s about understanding the architecture behind them.

I containerized a static website using Nginx, pushed the image to Docker Hub, and deployed it on an AWS EC2 instance. Straightforward on paper, but the value is always in the details:

🔹 Docker groups and permissions matter, especially on remote hosts.
🔹 Container ports and cloud security groups must align.
🔹 PAT-based authentication is now the standard for secure registry access.
🔹 A container can be healthy while the network path is not.
🔹 curl remains one of the most reliable debugging tools we have.

Once the image was published and the networking path corrected, the container served the site exactly as expected: a clean, reproducible deployment from image build to browser.

The takeaway: modern deployments reward engineers who understand the entire pipeline, not just the tools within it.

Feel free to explore the repo here: 🔗 https://lnkd.in/eT7wcTiv

#Docker #AWS #DevOps #CloudEngineering #Containers #Nginx
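The "healthy container, broken network path" case above is worth encoding as a quick triage table: each of the usual checks (docker ps, the port mapping, the security group) rules out one failure mode. A hedged Python sketch of that decision logic (the categories and messages are my own framing, not from the repo):

```python
def diagnose(container_running, port_published, sg_allows_port):
    """Map the three most common checks to a likely cause when curl
    gets no response from a freshly deployed container."""
    if not container_running:
        return "container crashed: check docker logs"
    if not port_published:
        return "port not mapped: check -p host:container in docker run"
    if not sg_allows_port:
        return "security group blocks the port: open it for your source IP"
    return "path looks fine: re-test with curl from inside and outside the host"

print(diagnose(True, True, False))
```

Running curl from the instance itself versus from your laptop is what separates the last two cases: success locally but not remotely points squarely at the security group.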
Manual infrastructure works. Until you need to recreate it.

Week 5: moving to Infrastructure as Code with Terraform.

What changed: Instead of building AWS resources through the console, I defined the entire environment as code and deployed it in a repeatable way.

What I built:
• VPC with public and private subnets across AZs
• Internet Gateway and routing
• Security groups with controlled access
• EC2 instance running nginx
• Full lifecycle management (apply → destroy)

Key insight: Infrastructure isn’t just about deployment; it’s about reproducibility and control. If you can’t recreate it reliably, you don’t really own it.

Design tradeoff: I intentionally omitted a NAT Gateway to control cost. This simplifies the setup, but means private subnets have no outbound internet access. In production, this would be replaced with a NAT Gateway per AZ for availability and isolation.

This step completes the shift from manual setup → defined infrastructure → repeatable systems.

Next: connecting this with CI/CD to fully automate infrastructure and application deployment.

Check out my project repo: https://lnkd.in/ewApBTqD Happy for any comments on my efforts so far.

#Terraform #DevOps #AWS #InfrastructureAsCode #CloudEngineering #LearningInPublic
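Carving a VPC CIDR into public and private subnets per AZ, as described above, can be sanity-checked with Python's stdlib before writing any Terraform. A sketch (the 10.0.0.0/16 block and /24 sizing are illustrative assumptions, not this project's values):

```python
import ipaddress

def plan_subnets(vpc_cidr="10.0.0.0/16", azs=("a", "b"), new_prefix=24):
    """Split a VPC CIDR into one public and one private /24 per AZ,
    the same non-overlapping layout the Terraform config would create."""
    blocks = ipaddress.ip_network(vpc_cidr).subnets(new_prefix=new_prefix)
    plan = {}
    for az in azs:
        plan[f"public-{az}"] = str(next(blocks))   # route to the IGW
        plan[f"private-{az}"] = str(next(blocks))  # no 0.0.0.0/0 route (no NAT)
    return plan

print(plan_subnets())
# {'public-a': '10.0.0.0/24', 'private-a': '10.0.1.0/24',
#  'public-b': '10.0.2.0/24', 'private-b': '10.0.3.0/24'}
```

Because `subnets()` enumerates non-overlapping blocks in order, this guarantees the CIDRs you paste into Terraform can never collide.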
One of those days where I just had the urge to build something personal. I had an idea for an app, started building it, and once it was done I thought: why not use this as a real AWS deployment project? So that’s what I did.

🔧 Infrastructure:
• VPC with public & private subnets + custom route tables
• Internet Gateway for public access, NAT routing for private resources

⚙️ Stack:
• Frontend → AWS Amplify (with built-in CI/CD)
• Backend → Node.js on EC2 (public subnet, PM2 for process management)
• Database → RDS in a private subnet (zero direct internet exposure)
• Auth → Amazon Cognito
• File storage → S3 (public assets)

Keeping the DB isolated in a private subnet while giving it controlled internet access via route table config was a key focus: security without sacrificing functionality. PM2 keeps the backend resilient, and Cognito removes the overhead of building auth from scratch.

📌 What's next:
• Move the backend behind an Application Load Balancer in a private subnet
• Add a CloudFront CDN in front of S3 and Amplify
• Introduce auto-scaling for the EC2 layer
• Set up CloudWatch monitoring & alerts
• Migrate toward containerization with ECS, or explore serverless with Lambda
• Terraform to automate and codify the entire infrastructure

Every project teaches you something new about trade-offs. This one was about balancing simplicity with production-readiness.

You can check it out here: https://lnkd.in/epsw3xH7 PS: I'll be deleting the link soon once I destroy the resources, because of cost.

#AWS #CloudArchitecture #FullStack #DevOps #NodeJS #RDS #Amplify #Cognito
Built and deployed a 3-tier AWS architecture using Terraform that runs a containerized application in a production-style setup.

The real challenge wasn’t writing Terraform. It was understanding how all the pieces fit together in a real system. So I designed an architecture where the infrastructure doesn’t just exist: it actually runs and serves an application.

🔧 What’s included:
- Frontend & backend on Auto Scaling Groups (private subnets)
- External and internal load balancing
- High-availability database layer
- EC2 instances pulling Docker images from ECR at boot

No manual deployment. Every new instance comes up, pulls the container, and starts serving traffic automatically.

Biggest takeaway: understanding how components connect > just knowing how to create them.

Next step: adding CI/CD to make deployments fully automated.

GitHub: https://lnkd.in/g9wnQPVB

#terraform #aws #devops #cloud #infrastructureascode
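The "pull from ECR at boot" step above typically lives in a user_data script that Terraform renders into the launch template. A hedged Python sketch that generates such a bootstrap script (the account ID, region, and repo name are placeholders, and the exact commands are a common pattern rather than this project's code):

```python
def render_user_data(account_id, region, repo, tag="latest", port=80):
    """Build an EC2 user_data bootstrap: log in to ECR, pull the image,
    and run it with a restart policy so Docker restarts it on crash.
    The instance role must allow the ECR auth/pull actions."""
    registry = f"{account_id}.dkr.ecr.{region}.amazonaws.com"
    return "\n".join([
        "#!/bin/bash",
        "set -euo pipefail",
        f"aws ecr get-login-password --region {region} "
        f"| docker login --username AWS --password-stdin {registry}",
        f"docker run -d --restart unless-stopped -p {port}:{port} "
        f"{registry}/{repo}:{tag}",
    ])

script = render_user_data("123456789012", "us-east-1", "app")
print(script.splitlines()[0])  # #!/bin/bash
```

The `--restart unless-stopped` flag is what makes every replacement instance self-healing at the container level, while the ASG handles the instance level.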