Built and Deployed My First End-to-End DevOps Project

I just completed a hands-on DevOps project where I built, containerized, and deployed a Flask application with a complete CI/CD pipeline.

🔧 Tech Stack:
• Python (Flask)
• Docker
• Git & GitHub
• GitHub Actions (CI/CD)
• AWS EC2

💡 What I built:
A Flask web app that dynamically displays the current time for:
🇺🇸 USA
🇨🇳 China
🇮🇳 India

⚙️ What makes this project special:
Instead of just running locally, I implemented a full deployment pipeline:
✔️ Code pushed to GitHub
✔️ GitHub Actions triggers automatically
✔️ Secure SSH connection to EC2
✔️ Docker container rebuilds and redeploys
✔️ Application updates live without manual intervention

🚧 Challenges I faced:
• Docker container conflicts (port & naming issues)
• GitHub authentication & SSH setup
• CI/CD pipeline failures and debugging logs
• YAML configuration errors

💥 Key Learnings:
• Real DevOps is about debugging, not just building
• CI/CD pipelines are the backbone of modern deployment
• Docker + automation is a powerful combination
• Small mistakes in YAML or ports can break entire systems

📈 What’s next:
Planning to level this up with:
• Nginx reverse proxy
• Custom domain + HTTPS
• Kubernetes deployment

#DevOps #Docker #AWS #GitHubActions #Flask #CI_CD #CloudComputing #LearningInPublic
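The time-zone logic at the core of such an app can be sketched with Python's standard `zoneinfo` module. This is a minimal sketch of the display logic only; the zone choices and the Flask route are illustrative assumptions, not the project's actual code.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical zone mapping; the project's actual choices may differ.
ZONES = {
    "USA": "America/New_York",
    "China": "Asia/Shanghai",
    "India": "Asia/Kolkata",
}

def current_times() -> dict[str, str]:
    """Return the current wall-clock time for each configured country."""
    return {
        country: datetime.now(ZoneInfo(tz)).strftime("%Y-%m-%d %H:%M:%S %Z")
        for country, tz in ZONES.items()
    }

# In the Flask app this would back a route, e.g.:
# @app.route("/")
# def index():
#     return render_template("index.html", times=current_times())
```

Using the standard library for the time math keeps the container image small and avoids an extra dependency like `pytz`.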
More Relevant Posts
🐳✨ Docker Journey (Level 1 → 8) — Quick Snapshot 🚀

Just wrapped up my Docker learning from basics to DevOps 💪🔥

🔵 Basics
📦 Docker = Container platform
🧱 Container = Isolated environment
💡 Fixes “works on my machine”

🟢 Images & Build
📄 Dockerfile = Recipe
🧩 Image = Blueprint
🚀 Container = Running app

🟡 Core Concepts
🌐 Networking → Containers talk
💾 Volumes → Data stays safe
🧰 Compose → Multi-container apps

🟠 Real Project Learnings
💻 Full stack (frontend + backend + MySQL)
⚠️ depends_on ≠ service ready
✅ Fix: Retry logic + restart policy

🔴 Advanced
⚡ Multi-stage builds (smaller images)
🔐 Security best practices
❤️ Health checks
📦 Private registry

🟣 DevOps Integration
🔁 CI/CD → Jenkins & GitHub Actions
☁️ Deploy to the cloud (use the public IP)
📡 Open ports in the firewall
☸️ Kubernetes → Scaling apps

🔥 Complete flow: Code ➝ Dockerfile ➝ Image ➝ Container ➝ Compose ➝ Deploy ➝ Scale

#Docker 🐳 #DevOps 🚀 #Kubernetes ☸️ #CICD 🔁 #Backend 💻 #LearningJourney 🌱
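The "depends_on ≠ service ready" lesson is usually fixed with a health check plus a condition on the dependency, exactly as the post suggests. A minimal Compose sketch, where the service names and the credential are illustrative placeholders:

```yaml
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example   # illustrative only; use secrets in practice
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 10
  backend:
    build: ./backend
    restart: on-failure              # retry if the app still races the DB
    depends_on:
      db:
        condition: service_healthy   # wait for the health check, not just "started"
```

Plain `depends_on: [db]` only waits for the container to start, not for MySQL to accept connections; `condition: service_healthy` closes that gap.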
🚀 𝗙𝘂𝗹𝗹 𝗖𝗜/𝗖𝗗 𝗣𝗶𝗽𝗲𝗹𝗶𝗻𝗲 𝘄𝗶𝘁𝗵 𝗝𝗲𝗻𝗸𝗶𝗻𝘀 + 𝗚𝗶𝘁𝗛𝘂𝗯 𝗼𝗻 𝗔𝗪𝗦 𝗘𝗖𝟮 (𝗗𝗷𝗮𝗻𝗴𝗼 𝗔𝗽𝗽)

If you're learning DevOps or building real-world deployment pipelines, this is exactly the kind of hands-on setup you need. I just published a complete, step-by-step guide where I implemented a declarative CI/CD pipeline using:

⚙️ Jenkins (Pipeline as Code using Groovy)
☁️ AWS EC2 (Ubuntu server setup)
🐙 GitHub (auto-trigger on push to main/dev)
🐍 Django (production-ready deployment with Gunicorn)

💡 𝗪𝗵𝗮𝘁 𝘆𝗼𝘂’𝗹𝗹 𝗹𝗲𝗮𝗿𝗻:
✅ Setting up Jenkins securely on EC2
✅ Integrating GitHub with Personal Access Tokens
✅ Writing a complete Jenkinsfile (Declarative Pipeline)
✅ Automating build + dependency installation
✅ Running the Django app with Gunicorn inside the pipeline
✅ Triggering CI/CD on every code push

🔥 𝗪𝗵𝘆 𝘁𝗵𝗶𝘀 𝗺𝗮𝘁𝘁𝗲𝗿𝘀:
This is not just theory — it's a production-style pipeline that reflects how modern engineering teams automate deployments. You’ll understand:
• Pipeline stages (Clone → Build → Deploy)
• Infrastructure + application integration
• A real DevOps workflow from scratch

📖 𝗙𝘂𝗹𝗹 𝗕𝗹𝗼𝗴: https://lnkd.in/ds2cnU86

💬 If you're working on DevOps, MLOps, or backend systems — this will be super useful.

#DevOps #Jenkins #AWS #GitHub #CICD #Django #CloudComputing #SoftwareEngineering #Automation #Python
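A declarative Jenkinsfile for this kind of Clone → Build → Deploy setup typically looks like the following sketch. The repo URL placeholder, stage contents, and the Gunicorn invocation are illustrative assumptions, not the guide's exact file:

```groovy
pipeline {
    agent any
    stages {
        stage('Clone') {
            steps {
                git branch: 'main', url: 'https://github.com/<user>/<repo>.git'
            }
        }
        stage('Build') {
            steps {
                sh '''
                    python3 -m venv venv
                    . venv/bin/activate
                    pip install -r requirements.txt
                '''
            }
        }
        stage('Deploy') {
            steps {
                // Restart Gunicorn with the new code; in production a process
                // manager such as systemd is the more robust choice.
                sh '''
                    . venv/bin/activate
                    pkill gunicorn || true
                    gunicorn myproject.wsgi:application --bind 0.0.0.0:8000 --daemon
                '''
            }
        }
    }
}
```

Keeping the pipeline declarative (rather than scripted Groovy) makes the stage structure visible in the Jenkins UI and easier to review in pull requests.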
🚀 Excited to share my recent DevOps project!

I successfully built and deployed a Django application using a complete CI/CD pipeline with a Jenkins multi-agent architecture.

🔧 Technologies used:
• Jenkins (master–agent setup)
• Docker & Docker Compose
• Nginx (reverse proxy)
• MySQL
• AWS EC2
• GitHub Webhooks

📌 Workflow:
GitHub → Webhook → Jenkins → Build on Jenkins agent → Docker build → Docker Compose → Deploy on AWS

I implemented a Jenkins multi-agent setup, where the Jenkins master manages the pipeline while the agent node executes the build and deployment tasks. This improves scalability and distributes workloads efficiently.

Every time new code is pushed to GitHub, Jenkins automatically triggers the pipeline, builds Docker containers, and deploys the application.

This project gave me hands-on experience with CI/CD automation, containerization, distributed builds, and real-world DevOps workflows.

Git repo: https://lnkd.in/dpj_dk-3

Always learning and exploring more in DevOps & Cloud 🚀

#DevOps #Jenkins #Docker #AWS #Django #CICD #CloudComputing #Learning
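Pinning the build to an agent node in a declarative pipeline is essentially a one-line change. A sketch assuming a node registered under a hypothetical `linux-agent` label and the Jenkins GitHub plugin for the webhook trigger:

```groovy
pipeline {
    // Run every stage on the labeled agent node, not the controller.
    agent { label 'linux-agent' }

    triggers {
        // Fire on the GitHub webhook push event (GitHub plugin).
        githubPush()
    }

    stages {
        stage('Build & Deploy') {
            steps {
                sh 'docker compose up -d --build'
            }
        }
    }
}
```

With this split, the controller only orchestrates; heavy Docker builds run on the agent, which is what makes the setup scale to more jobs.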
Just leveled up my CI/CD pipeline with SonarCloud integration — and it completely changed how I think about code quality.

🔍 Why SonarCloud matters (Quality Gate mindset)

Before this, my pipeline only checked if the code runs. Now it checks if the code is actually production-ready. With SonarCloud:
- ❌ Bugs are caught before deployment
- 🔐 Security vulnerabilities are flagged early
- 📊 Code coverage is enforced
- 🚫 Bad code gets blocked automatically using Quality Gates

👉 It’s not just CI/CD anymore — it’s CI/CD with standards.

---

⚙️ How I integrated it into my pipeline

I built a complete DevOps flow for my Flask app:
1. Push code to GitHub
2. Pipeline triggers automatically (GitHub Actions)
3. Install dependencies + run tests with coverage
4. SonarCloud performs:
   - Code analysis
   - Security scan
   - Quality Gate validation
5. If ✅ PASS →
   - Build with Docker
   - Deploy using Kubernetes
   - Serve via NGINX on AWS EC2
6. If ❌ FAIL → deployment is blocked until issues are fixed

---

📈 What improved after integration

Before:
- Code deployed even with hidden bugs
- No visibility into security issues
- No test coverage tracking

Now:
- 🔥 The Quality Gate ensures only clean code reaches production
- 🛡️ Security issues are caught early (shift-left security)
- 📊 Test coverage is measurable and enforced
- ⚡ The CI/CD pipeline is more reliable and production-grade

---

💡 Biggest realization:
> “A working pipeline is not enough. A quality-enforcing pipeline is what makes you a real DevOps engineer.”

---

This project helped me move from just deploying apps to building industry-level CI/CD pipelines.

#DevOps #SonarCloud #CICD #Docker #Kubernetes #AWS #NGINX #Python #Flask #CloudEngineering
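In GitHub Actions, gating deployment on the scan comes down to a `needs:` dependency between jobs. A sketch assuming SonarSource's published `sonarcloud-github-action`; the test command, coverage target, and secret names are placeholders:

```yaml
jobs:
  quality-gate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0        # full history improves Sonar's analysis
      - name: Run tests with coverage
        run: |
          pip install -r requirements.txt pytest pytest-cov
          pytest --cov=app --cov-report=xml
      - name: SonarCloud scan
        uses: SonarSource/sonarcloud-github-action@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}

  deploy:
    needs: quality-gate         # runs only if the gate job succeeds
    runs-on: ubuntu-latest
    steps:
      - run: echo "build image, push, deploy..."   # placeholder deploy steps
```

Because `deploy` `needs: quality-gate`, a failed analysis fails the first job and the deploy job never starts, which is the "blocked until issues are fixed" behavior described above.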
As DevOps engineers, we spend more time creating and debugging CI/CD pipelines than building actual systems. So I built an AI agent that does it for me.

You point it at any GitHub repository. It reads the actual code — not a template, not a guess — and generates a complete production-grade CI/CD pipeline tailored to that specific stack.

It validates every pipeline against 20+ security rules before touching your repo. Then it opens a PR and waits for your approval. It never commits anything without a human saying yes.

When a pipeline fails, you give it the run ID. It downloads the full logs, pulls out the exact failure — CVEs with package names and fix versions, compile errors with file and line number, missing secrets, Docker auth failures — and tells you precisely what broke and how to fix it.

The stack I built to make this work:
→ LangGraph with two separate graphs: one for creating pipelines, one for diagnosing failures
→ Gemini 2.5 Flash with ChromaDB RAG, retrieving pipeline standards and security rules semantically at generation time
→ A custom GitHub MCP server built on FastAPI and deployed on Cloud Run, handling every GitHub operation the agent needs
→ A deterministic enforcer layer that post-processes every LLM output, because you cannot trust an AI to never skip a security gate
→ A human approval gate backed by GCS, so state survives across stateless Cloud Run instances
→ Workload Identity Federation throughout: no service account keys stored anywhere

Works across Java, Kotlin, Node.js, React, Python, Go, and .NET. Detects Helm charts, Terraform, and E2E tests, and generates the right pipeline for each automatically.

#GenerativeAI #DevOps #PlatformEngineering #RAG #MCPServer #LangGraph
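The "deterministic enforcer" idea (never trusting the LLM to include every security gate) can be sketched as plain post-processing over the generated pipeline text. The rule names and patterns below are invented for illustration; the agent's actual 20+ rules are not public:

```python
import re

# Hypothetical rules: each REQUIRED pattern must appear in the generated
# pipeline text, and each FORBIDDEN pattern must not.
REQUIRED = {
    "vulnerability-scan": r"trivy|grype|snyk",
    "pinned-checkout": r"actions/checkout@v\d",
}
FORBIDDEN = {
    "plaintext-secret": r"password\s*[:=]\s*['\"][^$]",
}

def enforce(pipeline_yaml: str) -> list[str]:
    """Return the names of violated rules; an empty list means the pipeline passes."""
    violations = []
    for rule, pattern in REQUIRED.items():
        if not re.search(pattern, pipeline_yaml, re.IGNORECASE):
            violations.append(rule)
    for rule, pattern in FORBIDDEN.items():
        if re.search(pattern, pipeline_yaml, re.IGNORECASE):
            violations.append(rule)
    return violations
```

The point of the design is that this layer is deterministic: the same generated pipeline always produces the same verdict, regardless of what the LLM claims about its own output.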
Beyond the Code: Architecting a Hybrid-Cloud DevSecOps Pipeline

I’m thrilled to share that I have successfully deployed my latest project — a professional Python microservice — live on an AWS EC2 instance using a custom, hybrid CI/CD architecture!

Most projects stop at "it works on my machine." I wanted to build something that reflects real-world enterprise standards. This project wasn't just about writing Python; it was about orchestrating a secure, automated path from the first line of code to a live production server.

The Technical Core
• Application: A high-performance FastAPI microservice with a modern, responsive dashboard styled with Tailwind CSS.
• The CI layer (GitHub): Automated unit testing and linting using GitHub Actions to ensure every pull request is production-ready.
• The "enterprise" layer (GitLab): I configured a self-hosted GitLab Runner on an AWS EC2 instance to handle deep security analysis and Docker builds.
• Security & quality: Integrated SonarQube as a mandatory Quality Gate, ensuring zero vulnerabilities and high code coverage before deployment.

The AWS Deployment
The final stage of the pipeline uses automated SSH-based deployment to manage a containerized environment on AWS. By using Docker-in-Docker (DinD) and secure secret management, the application is seamlessly updated without manual intervention.

Key Lessons Learned:
• Self-hosted infrastructure: Configuring my own GitLab Runner on EC2 provided deep insights into Linux administration, Docker executors, and cloud networking.
• DevSecOps integration: Security isn't a final step; it’s a constant. SonarQube taught me how to catch technical debt before it becomes a problem.
• Hybrid orchestration: Learning to bridge GitHub and GitLab showed me how to design flexible, tool-agnostic workflows.

A huge thank you to the community for the guidance during this build!

Check out the live code and the full architecture on GitHub: https://lnkd.in/eGYU99bq

#DevOps #CloudEngineering #AWS #Python #FastAPI #GitLab #GitHubActions #SonarQube #Docker #SoftwareEngineering #TechNigeria #DevSecOps #CloudComputing2026 #PythonDevelopment #DevOpsProject
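On the GitLab side, the runner's stages for this kind of flow would be declared in a `.gitlab-ci.yml` along these lines. This is a sketch under assumptions: the SonarQube variables, image tag, and deploy script are placeholders, not the project's actual file:

```yaml
stages:
  - quality
  - build
  - deploy

sonarqube-check:
  stage: quality
  image: sonarsource/sonar-scanner-cli:latest
  script:
    - sonar-scanner -Dsonar.projectKey=$SONAR_PROJECT_KEY -Dsonar.host.url=$SONAR_HOST_URL -Dsonar.token=$SONAR_TOKEN

docker-build:
  stage: build
  image: docker:latest
  services:
    - docker:dind          # Docker-in-Docker for building images on the runner
  script:
    - docker build -t myservice:$CI_COMMIT_SHORT_SHA .

deploy:
  stage: deploy
  script:
    - ssh $DEPLOY_USER@$DEPLOY_HOST "cd app && ./redeploy.sh $CI_COMMIT_SHORT_SHA"
  environment: production
```

Each stage only starts after the previous one succeeds, so the SonarQube gate sits in front of both the build and the SSH deploy.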
Built and published my SimpleTimeService – End-to-End DevOps Challenge 🚀

This project is more than a simple web app — it’s a complete DevOps workflow built to simulate production-style delivery.

What’s included:
🔹 Minimal Python web application (FastAPI)
🔹 Secure Docker containerization (non-root, read-only filesystem)
🔹 Kubernetes deployment with probes, limits, and service exposure
🔹 Infrastructure provisioning with Terraform (AWS VPC + EKS)
🔹 CI/CD automation using GitHub Actions
🔹 Security scanning using Trivy
🔹 Automated Kubernetes manifest updates with immutable image tags

Tech stack used:
🐳 Docker
☸️ Kubernetes (EKS)
🏗 Terraform
⚙️ GitHub Actions
☁️ AWS
🐍 FastAPI

Pipeline flow:
Code Push → Lint → Build → Test → Security Scan → Push Image → Update Manifest → Deploy

Production practices implemented:
✅ Non-root container execution
✅ Read-only filesystem
✅ Resource requests & limits
✅ Liveness & readiness probes
✅ Vulnerability scanning
✅ Immutable image versioning
✅ Infrastructure as Code

GitHub repo live now 💻 https://lnkd.in/gANtZR8V

#DevOps #AWS #Kubernetes #Terraform #Docker #GitHubActions #EKS #CloudNative #PlatformEngineering #DevSecOps
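The hardening practices listed above map to a handful of fields in the Deployment's pod spec. A sketch in which the image name, port, and probe path are illustrative, not the repo's actual manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simpletimeservice
spec:
  replicas: 2
  selector:
    matchLabels:
      app: simpletimeservice
  template:
    metadata:
      labels:
        app: simpletimeservice
    spec:
      containers:
        - name: app
          image: registry.example.com/simpletimeservice:1.0.3   # immutable tag, never :latest
          ports:
            - containerPort: 8000
          securityContext:
            runAsNonRoot: true
            readOnlyRootFilesystem: true
          resources:
            requests: { cpu: 100m, memory: 128Mi }
            limits: { cpu: 500m, memory: 256Mi }
          livenessProbe:
            httpGet: { path: /health, port: 8000 }
            initialDelaySeconds: 5
          readinessProbe:
            httpGet: { path: /health, port: 8000 }
            periodSeconds: 5
```

Immutable tags matter here: because each CI run writes a new unique tag into the manifest, the cluster always knows exactly which build is running and rollbacks are a one-line change.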
🚨 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 — 𝗠𝗨𝗦𝗧 𝗞𝗡𝗢𝗪 𝗖𝗼𝗻𝗰𝗲𝗽𝘁𝘀

💥 Still confused about Kubernetes? Let me simplify it 👇

🧠 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 = Runs + scales + manages containers automatically

⚡ 𝗧𝗼𝗽 𝗖𝗼𝗻𝗰𝗲𝗽𝘁𝘀:
1️⃣ 𝗣𝗼𝗱 → Smallest deployable unit (contains containers)
2️⃣ 𝗡𝗼𝗱𝗲 & 𝗖𝗹𝘂𝘀𝘁𝗲𝗿 → Node = machine; Cluster = group of machines
3️⃣ 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁 🔥 → Manages Pods: scaling + updates + rollbacks
4️⃣ 𝗦𝗲𝗿𝘃𝗶𝗰𝗲 → Connects users to Pods: ClusterIP | NodePort | LoadBalancer
5️⃣ 𝗦𝗰𝗮𝗹𝗶𝗻𝗴 → Manual or automatic (HPA)
6️⃣ 𝗦𝗲𝗹𝗳-𝗛𝗲𝗮𝗹𝗶𝗻𝗴 🤯 → Auto-restarts and recreates failed Pods
7️⃣ 𝗖𝗼𝗻𝗳𝗶𝗴𝗠𝗮𝗽 & 𝗦𝗲𝗰𝗿𝗲𝘁 → External configs + sensitive data
8️⃣ 𝗜𝗻𝗴𝗿𝗲𝘀𝘀 → Exposes apps to the internet: routing + TLS
9️⃣ 𝗗𝗼𝗰𝗸𝗲𝗿 𝘃𝘀 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 → Docker runs containers; Kubernetes manages them at scale

🧩 𝗢𝗻𝗲-𝗟𝗶𝗻𝗲 𝗙𝗹𝗼𝘄 (𝗠𝗲𝗺𝗼𝗿𝗶𝘇𝗲 𝗧𝗵𝗶𝘀 👇)
👉 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁 → 𝗣𝗼𝗱𝘀 → 𝗦𝗲𝗿𝘃𝗶𝗰𝗲 → 𝗨𝘀𝗲𝗿𝘀

💡 𝗥𝗲𝗮𝗹𝗶𝘁𝘆: If you know Kubernetes, you are already ahead of most developers 🚀

📢 Want step-by-step guidance? 💬 Comment “𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀”

👉 Follow: Narendra Sahoo
📺 Subscribe & stay tuned (YouTube coming 🔥 https://lnkd.in/gJkDK2tK)

#Kubernetes #DevOps #Docker #Java #Microservices #Cloud #SoftwareEngineering 🚀
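The one-line flow above is literally two manifests: a Deployment that creates the Pods, and a Service that routes users to them. Names, image, and ports here are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # Scaling: Kubernetes keeps 3 Pods running
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }     # the label the Service will match on
    spec:
      containers:
        - name: web
          image: nginx:1.27
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort               # reachable from outside the cluster
  selector:
    app: web                   # routes traffic to the Deployment's Pods
  ports:
    - port: 80
      targetPort: 80
```

Self-healing falls out of the same objects: if a Pod dies, the Deployment's controller notices the replica count dropped below 3 and creates a replacement, and the Service picks it up automatically via the label selector.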
🚀 From Code to Production — A Real-World DevOps Story

Ever wondered what actually happens after a developer pushes code? Here’s a simple story from my daily work 👇

👨‍💻 A developer pushes code to GitHub
⬇️
⚙️ GitHub Actions kicks off automatically
• Maven builds the application
• Tests run (quality checks ✅)
• A Docker image gets created
⬇️
📦 The image is pushed to AWS ECR (our secure registry)
⬇️
☸️ Deployment begins in EKS (Kubernetes)
• Kubernetes detects the new image version
• The scheduler decides where to run the pods
• EC2 worker nodes pull the image from ECR
• The kubelet starts the containers
⬇️
🔄 A rolling update happens
• New pods come up
• Old pods are gradually removed
• Zero downtime 🚀
⬇️
🌐 Traffic shifts to the new version seamlessly

💡 The beauty of this flow?
• No manual intervention
• Fully automated
• Scalable & resilient
• Production-ready deployments in minutes

This is what modern backend + DevOps looks like — not just writing code, but owning the full lifecycle.

#DevOps #Java #SpringBoot #Kubernetes #AWS #EKS #Docker #GitHubActions #Microservices
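The zero-downtime rollout in that story is governed by the Deployment's update strategy inside the manifest. A sketch of the two knobs involved; the values shown are typical choices, not this team's actual settings:

```yaml
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra pod during the rollout
      maxUnavailable: 0    # never drop below the desired replica count
  # With maxUnavailable: 0, Kubernetes starts a new pod, waits for its
  # readiness probe to pass, shifts Service traffic to it, and only then
  # removes an old pod -- repeating until every pod runs the new image.
```

This is why readiness probes matter so much in the flow above: without one, traffic would shift to a pod that has started but is not yet ready to serve.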
🚀 Built & Deployed an AI Bank Application on Kubernetes (Kind) — Full DevOps Hands-on

Over the past few days, I worked on deploying a real-world AI-powered bank application on Kubernetes using a local Kind cluster — and this turned out to be more about debugging and learning than just writing YAML files.

🔧 What I implemented:
• Kubernetes cluster setup using Kind (multi-node)
• Namespace-based isolation (bankapp)
• MySQL deployment with ConfigMaps & Secrets
• Persistent storage using PV & PVC
• Backend application deployment (Spring Boot)
• Service configuration for internal communication

💥 Challenges I faced (and solved):
❌ Pods crashing randomly → Root cause: the application failing due to DB connection timing & auth issues
❌ MySQL “Access denied” error → Learned that environment variables don’t update credentials after first initialization when using persistent volumes
❌ Persistent Volume confusion across nodes → Understood ReadWriteOnce behavior and why storage binds to a single node

🧠 Key learnings:
✔ Kubernetes is NOT just about YAML — debugging is everything
✔ Logs (kubectl logs) are the most powerful tool
✔ Stateful apps behave very differently with persistent storage
✔ Small mistakes (like base64 encoding or labels) can break entire deployments
✔ Real DevOps = understanding system behavior, not just commands

📂 Project highlights:
• Multi-pod deployment (app + MySQL)
• Persistent storage integration
• Real-world debugging scenarios
• Clean Kubernetes architecture

📖 I’ve also written a detailed step-by-step blog covering the entire journey, commands, errors, and fixes:
👉 https://lnkd.in/dp_7jPVX

🔗 GitHub repo: https://lnkd.in/dz2Stnwg

#Kubernetes #DevOps #Docker #SpringBoot #CloudComputing #LearningInPublic #100DaysOfDevOps #BackendDevelopment #OpenSource
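The base64 mistake mentioned above usually comes from encoding a value with a trailing newline (for example `echo` without `-n`): the Secret then holds `password\n` and MySQL rejects the login. The difference is easy to see in Python; the password value is of course a placeholder:

```python
import base64

password = "S3cure!"  # placeholder credential for illustration

# Correct: encode exactly the credential bytes.
good = base64.b64encode(password.encode()).decode()

# The classic mistake: `echo 'S3cure!' | base64` pipes a trailing
# newline into the encoder, producing a different (wrong) value.
bad = base64.b64encode((password + "\n").encode()).decode()

print(good)          # the value to paste into the Secret's data field
print(good != bad)   # True -- the two encodings differ
```

The shell-side fix is `echo -n 'S3cure!' | base64` (or `kubectl create secret`, which encodes for you).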
As a next step, try implementing caching for your Docker builds, so that when you run the same pipeline again, unchanged image layers come from the cache instead of being rebuilt from scratch. Also add an SCA and image-scanning tool, which will help you learn more advanced DevOps tooling.
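In GitHub Actions, that suggestion maps to two well-known building blocks: Buildx layer caching and a Trivy scan step. A sketch assuming `docker/build-push-action` and `aquasecurity/trivy-action`; the image name is a placeholder:

```yaml
      - uses: docker/setup-buildx-action@v3
      - name: Build with layer cache
        uses: docker/build-push-action@v6
        with:
          tags: myapp:latest
          load: true                    # make the image available to later steps
          cache-from: type=gha          # reuse layers from previous runs
          cache-to: type=gha,mode=max   # store all layers back to the cache
      - name: Scan image for vulnerabilities
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:latest
          severity: CRITICAL,HIGH
```

On a re-run with no code changes, every layer resolves from the `gha` cache and the build completes in seconds, which is exactly the behavior the comment asks for.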