🚀 Built an Automated CI/CD Pipeline with AWS & GitHub Actions

I recently completed a hands-on project where I designed and implemented a complete CI/CD pipeline to deploy a Node.js application on AWS EC2.

🔧 What I built:
* A Node.js application running on port 4000
* An AWS EC2 (Ubuntu) server with properly scoped security groups
* Automated deployment using GitHub Actions
* A secure SSH-based connection using GitHub Secrets
* Process management with PM2

⚙️ How it works: every time I push code to the main branch, GitHub Actions automatically:
1. Connects to my EC2 instance via SSH
2. Pulls the latest code
3. Installs dependencies
4. Restarts the application

💡 Challenge I faced: I initially ran into a “Permission denied (publickey)” error during deployment. After debugging, I resolved it by adding the CI/CD SSH key to the EC2 server's authorized_keys and setting the correct file permissions. This was great real-world troubleshooting experience.

📂 GitHub Repository: https://lnkd.in/g8qMusNS

🎯 Key Takeaways:
* Hands-on experience with CI/CD pipelines
* Understanding secure SSH-based automation
* Debugging real production-like issues
* Automating deployments with zero manual intervention

This project strengthened my DevOps and SRE skills through real-world deployment scenarios.

#DevOps #AWS #GitHubActions #CI_CD #SRE #CloudComputing #NodeJS #Automation
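A pipeline like the one described above can be sketched in a few lines of GitHub Actions YAML. This is a hedged sketch, not the repo's actual workflow file: the community action (appleboy/ssh-action), the secret names, the app path, and the PM2 process name are all assumptions.

```yaml
name: Deploy to EC2
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy over SSH
        uses: appleboy/ssh-action@v1     # popular community action; version assumed
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ubuntu
          key: ${{ secrets.EC2_SSH_KEY }}   # private key stored in GitHub Secrets
          script: |
            cd ~/app                     # hypothetical app path
            git pull origin main
            npm ci
            pm2 restart app              # hypothetical PM2 process name
```

Keeping the private key in GitHub Secrets means it never appears in the repo or in logs; the matching public key goes into the server's ~/.ssh/authorized_keys, which is exactly where the "Permission denied (publickey)" fix lives.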
Following up on my last post, here’s how the system actually works.

In my previous post, I shared the challenges I faced while building a DevOps pipeline and how I almost gave up before finally getting it to work. Now I want to break down what I actually built and how the system works end to end.

What the setup does:
• Provisions an EC2 instance on AWS using Terraform
• Configures security groups (SSH + HTTP access)
• Installs Docker automatically via user data
• Deploys an Nginx container on port 80
• Uses GitHub Actions to automate the entire workflow

What happens on every push:
• Code is pushed to GitHub
• The GitHub Actions pipeline is triggered
• Terraform initializes and plans the infrastructure
• The EC2 instance is created or updated
• Docker is installed automatically
• The Nginx container is deployed and exposed on port 80

Architecture flow: GitHub → GitHub Actions → Terraform → AWS EC2 → Docker → Nginx

What this helped me understand: after fixing all the failures I mentioned earlier, seeing this flow work end to end made things clearer:
• How CI/CD connects directly to infrastructure
• How Terraform manages cloud resources
• How automation removes manual setup
• How small misconfigurations can break the entire pipeline

I’m continuing to improve this setup and exploring better ways to structure and scale it. Feedback and suggestions are welcome.

#DevOps #AWS #Terraform #Docker #CICD #GitHubActions #InfrastructureAsCode #CloudComputing
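The "installs Docker automatically using user data" step usually lives inside the Terraform instance resource itself. A sketch under assumptions: the AMI variable, instance size, security group reference, and resource names here are mine, not the actual repo's.

```hcl
# Sketch of the instance + user-data piece (fragment; the referenced
# security group would be defined elsewhere in the configuration).
resource "aws_instance" "web" {
  ami                    = var.ami_id
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.web.id]

  # Runs once at first boot: install Docker, then start Nginx on port 80.
  user_data = <<-EOF
    #!/bin/bash
    apt-get update -y
    apt-get install -y docker.io
    systemctl enable --now docker
    docker run -d --name web -p 80:80 nginx:latest
  EOF

  tags = { Name = "nginx-demo" }
}
```

Because the bootstrap is declared in code rather than typed over SSH, a destroyed and recreated instance comes back with Docker and the container already running.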
🚀 CI/CD Pipeline for Automated Portfolio Deployment (GitHub Actions + AWS S3)

I built a CI/CD pipeline to automate the deployment of my personal portfolio website. With every push to my GitHub repository, a GitHub Actions workflow automatically builds and deploys the latest changes to the AWS S3 bucket hosting my static site.

⚙️ Tech Stack: GitHub Actions | AWS S3 | IAM Roles | GitHub Secrets | HTML/CSS/JavaScript

📌 What I Demonstrated:
• CI/CD pipeline automation using GitHub Actions
• AWS S3 static website hosting & deployment
• Secure credential management using IAM & GitHub Secrets
• An end-to-end DevOps workflow

🎯 Impact: this project gave me hands-on experience building real-world deployment pipelines and strengthened my understanding of DevOps automation on AWS. I am continuously improving my skills in AWS, DevOps, and infrastructure automation.

🔗 Project Repository: https://lnkd.in/gMdFK-mq
👨💻 GitHub Profile: https://lnkd.in/gqG_G7Me

⭐ Feel free to follow my GitHub for more DevOps & cloud projects — more coming soon!

#DevOps #AWS #GitHubActions #CICD #CloudComputing #AWSDevOps #Automation
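A static-site-to-S3 workflow of this shape typically fits in one job. This is a hedged sketch, not the repo's actual file: the bucket name, region, and secret names are assumptions.

```yaml
name: Deploy portfolio to S3
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1                # region assumed
      - name: Sync site to S3
        run: aws s3 sync . s3://my-portfolio-bucket --delete --exclude ".git/*"
```

`--delete` keeps the bucket an exact mirror of the repo; the IAM user behind those secrets only needs s3:PutObject/DeleteObject/ListBucket on this one bucket, which is the least-privilege piece of the credential-management point above.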
I just deployed my first Dockerized app to AWS EC2 with a fully automated CI/CD pipeline. 🚀

Here's what I built:
🐳 Containerized a Next.js app using a multi-stage Dockerfile
⚙️ Set up GitHub Actions to deploy automatically on every git push
🖥️ Hosted on AWS EC2 with proper security group configuration
📜 Wrote a custom deploy.sh script — zero manual work

The best part? I push code → it's live in minutes. No SSH. No manual commands. Nothing.

Problems I ran into (and fixed):
❌ npm dependency conflicts → fixed with --legacy-peer-deps
❌ Docker storage exhausted on EC2 → fixed with docker system prune
❌ Port conflicts → debugged with docker ps and cleaned up stale containers
❌ EC2 RAM exhausted during build → added a 2 GB swap file
❌ Disk full → expanded the EBS volume from 6 GB to 16 GB

Every error taught me something new. I'm a CSE student working as a Key Account Manager by day — but DevOps is where I'm heading. Building one project at a time. 🔧

🔗 GitHub: https://lnkd.in/gaP8yv55

#DevOps #Docker #AWS #EC2 #GitHubActions #CICD #LearningInPublic #CloudComputing #CSE
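A multi-stage Dockerfile for a Next.js app along these lines might look like the sketch below. The base image, port, and copied paths are assumptions (the exact files to copy depend on the project's Next.js configuration); the --legacy-peer-deps flag is the dependency-conflict fix mentioned above.

```dockerfile
# Stage 1: install dependencies and build.
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --legacy-peer-deps      # the dependency-conflict fix from the post
COPY . .
RUN npm run build

# Stage 2: ship only what the runtime needs, keeping the final image small.
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/package*.json ./
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/public ./public
EXPOSE 3000
CMD ["npm", "start"]
```

The two-stage split is also related to the disk problems above: build-only artifacts stay in the discarded builder stage, so the image pushed to the EC2 host is much smaller.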
I built a "simple" Task Manager API. The app took a day. The DevOps around it nearly broke me.

I deployed a full CI/CD pipeline on AWS from scratch — GitHub Actions, Docker, ECR, EC2, Nginx, Prometheus, Grafana, Slack alerts, and auto-rollback.

The failures nobody warns you about:
❌ My first 6 deploys failed because I forgot package-lock.json. CI doesn't forgive what localhost ignores.
❌ I accidentally committed my Slack webhook to GitHub. Had to rotate everything and build proper secret injection.
❌ 502 Bad Gateway after every deploy — spent hours debugging. The fix? One line: docker-compose restart nginx. Nginx caches old container IPs.
❌ Config drift between local and EC2. Fixed by automating config sync in the pipeline.

What actually stuck:
✔ Monitoring isn't optional — it's how you sleep at night
✔ CI/CD isn't "set and forget" — mine took 19 commits to get right
✔ The app was easy. The infrastructure was the real education.

🔗 https://lnkd.in/gfFhBESA

If you're learning DevOps — deploy something, break it, fix it at midnight. That's the real learning.

#DevOps #AWS #Docker #CICD #LearningInPublic
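For the stale-IP 502s, an alternative to restarting Nginx on every deploy is to make it re-resolve the app container's name at request time via Docker's embedded DNS. This is a sketch of that well-known technique, not the repo's actual config: the upstream name "app" and port are assumptions.

```nginx
server {
    listen 80;
    resolver 127.0.0.11 valid=10s;        # Docker's embedded DNS server

    location / {
        set $upstream http://app:3000;    # variable forces per-request resolution
        proxy_pass $upstream;             # instead of caching the IP at startup
    }
}
```

With this in place, a recreated container with a new IP is picked up within the `valid=10s` window, and the `docker-compose restart nginx` step is no longer needed after each deploy.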
🚀 Week 7 of My DevOps Journey

This week was a big milestone — I deployed a Node.js application on AWS EC2 and integrated a real-world payment flow using Stripe (test mode).

📅 Week 7 – What I Learned & Built

🔹 AWS EC2 Deployment
• Launched an EC2 instance (Ubuntu, t3.micro)
• Configured SSH access using key pair authentication
• Installed Node.js, npm, and Git on the server
• Cloned and ran my Node.js project in the cloud

🔹 Application Setup & Configuration
• Managed environment variables using .env
• Configured the application port and static directory
• Assigned an Elastic IP for consistent access

🔹 Stripe Integration
• Integrated the Stripe API using test keys
• Understood publishable vs. secret keys
• Built a basic payment flow for testing

🔹 Networking & Security
• Configured security groups (inbound rules)
• Allowed traffic on the custom application port
• Learned how public IPs and ports work together

💡 Key Takeaways
• End-to-end deployment of a backend application
• Real-world cloud setup and debugging
• Secure handling of API keys and environment configs
• Understanding infrastructure + application integration

🌐 This week gave me hands-on experience taking an app from local → cloud → live environment 🚀

Next goal: automate deployment using Jenkins CI/CD + Docker.

#DevOps #AWS #EC2 #NodeJS #Stripe #Cloud #CI_CD #LearningInPublic
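The ".env" approach above is how the Stripe test keys stay out of the repo. A minimal, self-contained sketch of how a shell session (or a start script) can load such a file; the file contents and variable names here are purely illustrative:

```shell
# Create an illustrative .env file (in a real project this is gitignored
# and would hold values like STRIPE_SECRET_KEY).
cat > .env <<'EOF'
PORT=4000
STATIC_DIR=public
EOF

set -a        # auto-export every variable assigned from here on
. ./.env      # source the file: each KEY=value line becomes an env var
set +a        # stop auto-exporting

echo "app will listen on port $PORT, serving ./$STATIC_DIR"
```

Note this simple sourcing only works for plain KEY=value lines; values containing spaces or shell metacharacters need quoting, and Node.js apps often use a loader library instead of the shell.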
I architected and deployed a fully automated CI/CD pipeline on AWS in my DevOps Build Lab today. The pipeline is built for scalability and reliability, ensuring that every code push flows into deployment with zero manual intervention.

TECH STACK:
- Node.js + MongoDB application (containerized with Docker)
- Jenkins for build, test, and image creation
- AWS Elastic Container Registry (ECR) for image storage
- Deployment to Elastic Container Service (ECS) on Fargate (serverless containers), with optional EKS support
- Infrastructure fully provisioned with Terraform
- Secure networking with a VPC, private subnets, and a NAT Gateway
- Remote Terraform state (S3, with DynamoDB locking)
- HTTPS via AWS Certificate Manager and an Application Load Balancer
- ECS service auto scaling based on demand
- Secrets managed with AWS Secrets Manager
- Monitoring and observability with Prometheus and Grafana

PIPELINE FLOW (IN SUMMARY):
- Every code push triggers Jenkins
- The application artifact is built and tested
- Docker builds an application image
- The image is pushed to AWS ECR
- The image is deployed from ECR to ECS (or EKS) and run as a container
- The running app and environment are monitored with Prometheus and Grafana

Link to repo: https://lnkd.in/eFHBFis7

#Cloud #DevOps #SRE #MyBuildLab
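The remote-state item in the stack above is typically a small backend block in the Terraform configuration. This is a sketch under assumptions: the bucket, key, table, and region names are placeholders, not the lab's actual values.

```hcl
terraform {
  backend "s3" {
    bucket         = "mybuildlab-tfstate"
    key            = "cicd-lab/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"  # lock table: blocks concurrent applies
    encrypt        = true               # server-side encryption of the state file
  }
}
```

The DynamoDB lock is what makes this safe when both Jenkins and a human might run `terraform apply` against the same state: the second run waits (or fails fast) instead of corrupting the state file.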
Manual AWS Console vs. Terraform. Here is why I used Terraform for every resource in my Kubernetes project.

Manual console:
✗ No record of what was created
✗ Hard to reproduce exactly
✗ Error-prone when repeated
✗ Cannot be shared with a team
✗ No audit trail

Terraform:
✓ Every resource defined in code
✓ terraform apply — an identical setup every time
✓ Full git history of every infrastructure change
✓ Clone the repo and apply — anyone can reproduce it
✓ Far less room for human error

My terraform-sockshop repo provisions the complete AWS infrastructure for a 2-node Kubernetes cluster. 3 resources. 1 command. Reproducible in any AWS account.

This is what Infrastructure as Code means in practice.

GitHub → https://lnkd.in/gvUd5Rcb

#Terraform #IaC #AWS #DevOps #CloudEngineering
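To make the "every resource defined in code" point concrete, here is a plausible shape for a small cluster configuration like this. Every name, variable, and instance size below is an assumption for illustration, not taken from the actual terraform-sockshop repo.

```hcl
resource "aws_security_group" "k8s" {
  name = "k8s-nodes"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]   # SSH; tighten to your own IP in practice
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "node" {
  count                  = 2      # the 2-node cluster
  ami                    = var.ami_id
  instance_type          = "t3.medium"
  vpc_security_group_ids = [aws_security_group.k8s.id]

  tags = { Name = "k8s-node-${count.index}" }
}
```

From a clean checkout, `terraform init && terraform apply` recreates this identically in any account, which is exactly the reproducibility claim above; the git log of this file is the audit trail the console never gives you.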
🚨 Why Your Git Push from AWS EC2 Fails (and How to Fix It)

I was pushing code from my AWS EC2 + Jenkins setup to GitHub… and suddenly hit this 👇
❌ rejected (non-fast-forward)
❌ divergent branches
❌ fatal: Need to specify how to reconcile divergent branches

At first glance it looks scary, but here’s the simple truth 👇

🔍 What’s really happening? Your local repo (EC2/Jenkins) and the GitHub repo both have commits, but on different timelines. 👉 Git refuses to overwrite history, to protect your code.

✅ How I fixed it, step by step:

✔️ Step 1: Pull the remote changes
git pull origin master --allow-unrelated-histories --no-rebase
(--allow-unrelated-histories is only needed when the two repos share no common history at all; for ordinary divergence, plain git pull --no-rebase is enough.)

✔️ Step 2: Resolve merge conflicts (if any)
git add .
git commit -m "merge resolved"

✔️ Step 3: Push again
git push origin master

⚠️ Biggest mistake I made: using sudo git push. 🚫 This caused permission and ownership issues.
✅ Fix:
sudo chown -R ubuntu:ubuntu ~/git

💡 Key DevOps learnings:
- Git errors are not random — they’re protecting your code
- Always pull before you push in shared environments
- Avoid sudo with Git (especially on EC2)
- Understand your branch strategy (master vs. develop)

🚀 Real-world takeaway: when working with Jenkins pipelines + AWS EC2 + GitHub, your pipeline is only as strong as your Git workflow.

🔥 If you're learning DevOps / AWS / Git, you WILL face this error. The difference? Now you know how to fix it in minutes.

💬 Have you faced this error before? Comment “GIT” and I’ll share a clean workflow for Jenkins 🚀

#DevOps #AWS #Jenkins #Git #GitHub #CloudComputing #LearningInPublic #100DaysOfCode
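The rejection and the fix above can be reproduced entirely on your own machine with two clones of a local bare repository; no EC2 or Jenkins required. Everything here (temp dirs, identities, branch name) is illustrative:

```shell
set -e
dir=$(mktemp -d); cd "$dir"

# A local stand-in for GitHub.
git init -q --bare remote.git
git --git-dir=remote.git symbolic-ref HEAD refs/heads/master

# Clone "a" plays the EC2/Jenkins box; it pushes the first commit.
git clone -q remote.git a 2>/dev/null
(
  cd a
  git config user.email a@example.com; git config user.name A
  git symbolic-ref HEAD refs/heads/master
  echo one > file.txt; git add .; git commit -qm "commit from A"
  git push -q origin master
)

# Clone "b" plays a teammate who pushes while "a" isn't looking.
git clone -q remote.git b 2>/dev/null
(
  cd b
  git config user.email b@example.com; git config user.name B
  echo two > other.txt; git add .; git commit -qm "commit from B"
  git push -q origin master
)

# Back on "a": a new local commit now diverges from the remote.
cd a
echo three >> file.txt; git add .; git commit -qm "second commit from A"
git push origin master 2>&1 | grep -qi "rejected" \
  && echo "push rejected (non-fast-forward)"

# The fix: merge the remote history instead of overwriting it, then push.
git -c pull.rebase=false pull -q --no-edit origin master
git push -q origin master
git log --oneline | grep -q "commit from B" && echo "histories reconciled"
```

This is exactly why "always pull before push" holds in shared environments: the remote had a commit the local clone didn't, and Git refused to discard it.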
🚀 Built a Production-Ready Terraform Project on AWS (Real DevOps Implementation)

As part of strengthening my DevOps expertise, I designed and deployed a modular Terraform project to provision AWS infrastructure, following real-world practices used in organizations. Instead of writing everything in a single file, I implemented a scalable, reusable architecture using Terraform modules 👇

🏗️ What I built:
✔️ VPC with a public subnet
✔️ Internet Gateway & route table configuration
✔️ Security group (SSH & HTTP access)
✔️ EC2 instance deployment

📁 Project Approach (Industry-Level):
🔹 Separate modules for VPC and EC2
🔹 Environment-based structure (dev)
🔹 Clean, maintainable code design

💡 Real Challenges I Solved (Hands-on Debugging):
🔸 Fixed an invalid AMI issue (AMI IDs are region-specific)
🔸 Resolved an instance type restriction (Free Tier eligibility)
🔸 Handled a Git large-file error by cleaning .terraform and adding it to .gitignore

👉 These are the exact issues you face in real production environments.

📌 Key Learnings:
✔ Modular Terraform = scalable infrastructure
✔ Proper Git practices are critical in DevOps
✔ Debugging skills matter more than just writing code

🔗 GitHub Project Link: https://lnkd.in/d4JKWgGE

#DevOps #Terraform #AWS #InfrastructureAsCode #CloudEngineering #SRE #GitHub #LearningInPublic
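The module-based layout described above usually means the environment's root configuration is just a pair of module calls. A sketch of the shape only; the module paths, variable names, and outputs are assumptions, not the repo's actual code.

```hcl
# environments/dev/main.tf (hypothetical layout)
module "vpc" {
  source   = "../../modules/vpc"
  vpc_cidr = "10.0.0.0/16"
}

module "ec2" {
  source        = "../../modules/ec2"
  subnet_id     = module.vpc.public_subnet_id   # assumed module output
  instance_type = "t2.micro"                    # Free Tier eligible, per the fix above
}
```

A staging or prod environment then becomes another small directory calling the same modules with different variables, which is what makes the structure scale beyond dev.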