Automated CI/CD Pipeline Built with FastAPI and GitHub Actions

Just built a fully automated CI/CD pipeline from scratch: no clicks, no manual deploys. 🚀

Every push to main now:
✅ Runs pytest automatically
✅ Builds a Docker image
✅ Pushes to Docker Hub (tagged with the commit SHA for traceability)
✅ Deploys to the cloud via webhook

Broken code never reaches production: the deploy job is gated behind the test job, so if tests fail, nothing ships.

Stack: FastAPI · Docker · GitHub Actions · Docker Hub · Render

The part that surprised me most was how much there is to configure across multiple platforms (GitHub secrets, Docker access tokens, Render webhooks, CORS) before it all clicks into place and just works.

Live endpoint: https://lnkd.in/egqPR-it
GitHub: https://lnkd.in/eq-bTeKr

#Python #Docker #DevOps #GitHub #GitHubActions #CI #CD #SoftwareEngineering #100DaysOfCode
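A pipeline like this can be sketched as one workflow with a test job gating the deploy job via `needs:`. This is a generic sketch, not the author's actual file: the image name, secret names, and the Render deploy-hook URL are placeholders you would substitute with your own.

```yaml
# .github/workflows/deploy.yml (sketch; names and secrets are placeholders)
name: ci-cd
on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest

  deploy:
    needs: test   # the gate: this job is skipped entirely if tests fail
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: youruser/yourapp:${{ github.sha }}   # commit SHA as the tag
      - name: Trigger Render deploy
        run: curl -fsS "${{ secrets.RENDER_DEPLOY_HOOK_URL }}"
```

Tagging with `github.sha` is what gives the traceability mentioned above: every image in the registry maps back to exactly one commit.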
More Relevant Posts
🚀 Excited to share my latest project: Automated AWS ECS Deployment using Python & CI/CD!

I built a Python-based automation script using Boto3 to trigger and manage deployments on AWS ECS, fully integrated within a Jenkins CI/CD pipeline.

🔧 Key highlights:
• Automated ECS service deployment using Python (Boto3)
• Integrated the deployment step within the Jenkins pipeline
• Used Docker & AWS ECR for container management
• Debugged real-world issues like environment setup, path errors, and credential handling
• Improved deployment reliability by removing manual steps

💡 This project helped me understand how real-world CI/CD systems handle deployment automation and infrastructure interaction.

📂 GitHub repo: https://lnkd.in/g5sCeaQk

I'm currently exploring GitLab CI/CD and Terraform to further strengthen my DevOps skills 🚀

#DevOps #AWS #Jenkins #Docker #Python #Boto3 #CICD #CloudComputing #Automation
My pipeline encountered a failure before it even began. The code was correct and the YAML was configured properly, but I had overlooked something entirely different.

I developed a CI/CD quality gate for LevelUp Bank that automatically blocks any pull request to the main branch if the README.md or .gitignore files are missing. Each merge generates a structured JSON audit log sent directly to AWS CloudWatch, organized into beta and prod log groups. Unit tests are executed first, ensuring that nothing is logged until the tool itself is verified.

However, when I triggered the beta workflow for the first time, it failed immediately with a single-line error: the beta environment did not exist in the repository settings. It wasn't broken code or a misconfigured secret; it was simply a settings page I had never opened. After navigating to Settings → Environments, I created the beta and prod environments, re-ran the workflow, and it passed in seconds.

This experience taught me an important lesson: even the best automation fails without the environment it depends on. Build the code, then verify everything the code requires in order to run. Those are two distinct checklists, and I had only completed one.

The full code and setup guide is available on GitHub; the link is in the first comment.

What is your best "it wasn't even the code" moment? Share below.

#DevOps #GitHubActions #AWS #CloudWatch #PlatformEngineering #CICD #Python #LearningInPublic #CloudEngineering #SoftwareEngineering #LevelUpInTech #TechCommunity
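The gate itself reduces to a file-presence check plus a structured log record. The sketch below is illustrative, not the actual LevelUp Bank tool: the JSON field names and the `REQUIRED_FILES` list are assumptions.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Files the quality gate requires (assumed list, per the post)
REQUIRED_FILES = ["README.md", ".gitignore"]


def check_required_files(repo_root: str) -> list[str]:
    """Return the required files missing from the repo (empty list = pass)."""
    root = Path(repo_root)
    return [f for f in REQUIRED_FILES if not (root / f).exists()]


def audit_record(missing: list[str], env: str) -> str:
    """Build the JSON audit line that would be shipped to CloudWatch.

    `env` selects the log group ("beta" or "prod"). Field names are
    hypothetical, not the author's actual schema.
    """
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "environment": env,
        "passed": not missing,
        "missing_files": missing,
    })


print(audit_record(check_required_files("."), "beta"))
```

In CI, a non-empty `missing_files` list would translate into a non-zero exit code, which is what actually blocks the pull request.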
Just completed a hands-on CI/CD project where I built a Python log analyzer and automated the full deployment workflow using GitHub Actions, Docker, Docker Hub, and AWS EC2.

What stood out to me most about CI/CD is how it shifts teams from reactive debugging to proactive quality control. Before this project, I understood CI/CD conceptually. After building it, I saw how valuable it is when:
✅ Linting catches formatting issues early
✅ Unit tests prevent broken logic from being deployed
✅ Docker ensures consistency across environments
✅ Automated deployment removes repetitive manual work

I also learned that real DevOps work is often debugging the small issues that break pipelines:
• a missing requirements.txt
• import errors
• GitHub workflow issues
• Docker deployment problems

CI/CD isn't just automation: it builds confidence that your code can move safely from development to production.

Big thanks to @CoderCo for helping make these concepts practical through hands-on learning.

#DevOps #CICD #GitHubActions #Docker #AWS #Python #CloudComputing #Automation
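A log analyzer of this kind usually boils down to pattern counting. This is a minimal sketch of the idea, not the author's analyzer: the "LEVEL message" line format and the sample data are assumptions made for illustration.

```python
from collections import Counter


def analyze(lines):
    """Count log lines by level, assuming lines start with 'LEVEL message'."""
    counts = Counter()
    for line in lines:
        # The first whitespace-separated token is treated as the level
        level = line.split(maxsplit=1)[0] if line.strip() else "EMPTY"
        counts[level] += 1
    return counts


sample = [
    "INFO service started",
    "ERROR db connection refused",
    "INFO request handled",
    "ERROR db connection refused",
]
print(analyze(sample))  # Counter({'INFO': 2, 'ERROR': 2})
```

The value of putting even a small tool like this behind CI is exactly the list above: the linter catches the formatting, the unit tests catch the counting logic, and Docker pins the Python version it runs under.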
The playbook said "FAILED." That was it. No explanation. No error message. Just "FAILED."

🔧 Happy Ansible Tuesday!

I had a playbook that ran a custom module against a fleet of servers. On most hosts it worked fine. On three hosts it just said "FAILED" with no useful output. The msg field was empty. The stderr field was empty. The task just died and moved on.

I spent way too long staring at the playbook logic, checking inventory variables, and comparing the failing hosts to the working ones. Everything looked identical. Same OS. Same Python version. Same module code. No reason for three hosts to fail while the rest succeeded.

Then I added three letters:

ansible-playbook site.yml -vvv

The raw module output appeared. On the failing hosts, the module was throwing a Python traceback that Ansible's default output was swallowing. A missing Python dependency on those three hosts was causing an ImportError. The module crashed before it could format a proper error response, so Ansible had nothing to display except "FAILED."

Three v's. That's all it took. The default output hid the problem. The verbose output showed me the exact exception, the exact line, and the exact missing package. Five minutes to fix after that.

The Danger Zone (when default output hides the problem):
🔹 Ansible's default verbosity shows you what happened (pass/fail) but not always why. If a module crashes before it can return structured output, you get "FAILED" with no context.
🔹 -v adds task results. -vv adds input parameters. -vvv adds connection details and raw module output. -vvvv adds the full SSH/connection debug. Start with -vvv for most debugging.
🔹 If three hosts fail and 200 succeed with the same playbook, the problem is almost never the playbook. It's the host environment. -vvv shows you what the host gave back, not just what Ansible tried to do.

❓ Question of the Day: Which flag increases the verbosity of Ansible output to help debug connection issues?
Ⓐ -d Ⓑ --debug Ⓒ --trace Ⓓ -vvv 👇 Answer and breakdown in the comments! #Ansible #NetworkAutomation #DevOps #DamnitRay #QOTD
Just wrapped up a full end-to-end DevOps project, built to mirror how modern production systems actually run.

Here's what went into the project:

I containerized Go, Python, and Java microservices with Docker, then deployed them both locally and on AWS EKS, with the entire infrastructure provisioned from scratch using modular Terraform. Inside Kubernetes, I set up an Ingress Controller to handle external traffic intelligently, enabling clean host-based and path-based routing across services.

For the pipeline:
⚙️ GitHub Actions handles builds and unit tests on every commit
🔄 Argo CD drives GitOps-based deployments, with Git as the single source of truth

Every change flows automatically: code → tested image → deployed on EKS. No manual steps. No drift between environments. Just consistent, repeatable delivery.

This project pushed me deeper into:
#Docker #Kubernetes #EKS #Terraform #GitHubActions #ArgoCD #Microservices #GitOps
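Host- and path-based routing through an Ingress Controller can be sketched with a single Ingress resource like the one below. The hostname, service names, and ports are placeholders, and the annotation assumes the NGINX Ingress Controller; the project above may have been configured differently.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress                  # hypothetical name
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com           # host-based routing
      http:
        paths:
          - path: /api                # path-based routing to one service
            pathType: Prefix
            backend:
              service:
                name: go-service
                port:
                  number: 8080
          - path: /                   # everything else to another service
            pathType: Prefix
            backend:
              service:
                name: python-service
                port:
                  number: 8000
```

With GitOps, a manifest like this lives in the repo, and Argo CD applies it; editing the file in Git is the only way the cluster changes.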
Excited to share my latest hands-on project in my DevOps journey! I recently completed a Docker assignment where I containerized a real application built with Python and Flask.

🔹 What I did in this project:
• Built a simple web application using Flask
• Created a Dockerfile to containerize the app
• Built and ran Docker containers locally
• Exposed the application using port mapping
• Pushed the Docker image to Docker Hub
• Managed images and containers using the Docker CLI

🔹 Key concepts I learned:
• How Docker images and containers work
• Writing efficient Dockerfiles
• Docker networking and port mapping
• The importance of containerization in modern DevOps

Why this matters: containerization lets developers package applications with all their dependencies, ensuring consistency across development, testing, and production environments.

🔗 GitHub repository: https://lnkd.in/dPH5SCme
🔗 Docker Hub image: https://lnkd.in/dC6-_uGY

This is just the beginning: more DevOps projects coming soon 🚀

#Docker #DevOps #Learning #Cloud #GitHub #BeginnerToPro
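A Dockerfile for a small Flask app of this shape might look like the following. The filenames (`requirements.txt`, `app.py`) and port 5000 (Flask's default) are assumptions, not taken from the linked repo.

```dockerfile
FROM python:3.12-slim
WORKDIR /app

# Copy and install dependencies first so this layer is cached
# across rebuilds when only application code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
```

The port mapping mentioned above happens at run time, not in the Dockerfile: `docker run -p 5000:5000 youruser/flask-demo` binds the container's port 5000 to the host.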
CI/CD sounds fancy. It's actually just a robot that checks your code before it ships. Let me break it down:

CI = Continuous Integration
Every time you push code → automated tests run → you know instantly if something broke.

CD = Continuous Delivery/Deployment
If tests pass → code automatically deploys to your server. No manual uploads, no FTP nightmares.

A real example from my projects:
1. I push a branch
2. GitHub Actions runs my tests (2 mins)
3. If green → it merges and deploys
4. If red → I fix before it ever touches production

Why does this matter for CS students?
→ Companies use CI/CD everywhere
→ Having it on your GitHub projects signals seniority
→ It saves you from embarrassing bugs in demos or interviews

You can set up a basic GitHub Actions pipeline in under 30 minutes. I'll share mine next week.

Are you using any CI/CD tools in your personal projects right now?

#CICD #DevOps #GitHubActions #CSStudents #SoftwareEngineering
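The CI half (step 2 above, tests on every push) really does fit in one small file. This is a generic starter for a Python project, not the author's own workflow; swap in your test runner of choice.

```yaml
# .github/workflows/ci.yml (minimal sketch for a Python project)
name: tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest
```

Committing this file to `.github/workflows/` is the whole setup; GitHub runs it automatically on the next push.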
🚀 How I Reduced My Docker Image Size (and Why It Changed My Workflow)

When I first started working with Docker, my only goal was simple: 👉 "Make the application run successfully." I didn't really think about image size… until I started noticing real problems:
❌ Slow build times
❌ Large images (sometimes 800MB+)
❌ Delays in pushing images and deployments

That's when I realized: Docker image size is not just a number, it impacts everything. So I started exploring and improving step by step 👇

🔹 Switched to minimal base images like python:3.11-slim and openjdk:17-jdk-slim
🔹 Learned and applied multi-stage builds (game changer 🔥)
🔹 Removed unnecessary dependencies
🔹 Used .dockerignore to shrink the build context
🔹 Skipped pip's download cache with --no-cache-dir
🔹 Cleaned up temp files and package caches

💡 What made it more interesting? I didn't just apply this in one stack:
👉 I optimized Python-based applications using a lightweight base image plus no pip cache
👉 I also built and optimized a Spring Boot Docker image, using a multi-stage build to keep only the final JAR file in the production image

That experience really helped me understand how different stacks can be optimized using the same DevOps principles.

🎯 The result?
✔ Faster builds
✔ Faster deployments
✔ Significantly reduced image size
✔ A cleaner, more production-ready setup

💡 This might look like a small optimization, but in real-world systems it makes a big difference in performance, cost, and scalability.

I'm currently exploring more in DevOps and system design, and I'm excited to keep learning, improving, and sharing my journey with you all 🚀

#DevOps #Docker #SpringBoot #Python #AWS #Cloud #LearningInPublic #SoftwareEngineering #SoumyajitParamanick
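The multi-stage idea for the Spring Boot image can be sketched like this: the heavy JDK-plus-Maven toolchain lives only in the build stage, and the final image keeps just a JRE and the JAR. The project layout and JAR name are hypothetical.

```dockerfile
# Stage 1: build with the full JDK and Maven (discarded from the final image)
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /src
COPY pom.xml .
RUN mvn -q dependency:go-offline     # cache dependencies as their own layer
COPY src ./src
RUN mvn -q package -DskipTests

# Stage 2: runtime image keeps only the JAR on a slim JRE base
FROM eclipse-temurin:17-jre-jammy
WORKDIR /app
COPY --from=build /src/target/app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

The same pattern applies to the Python side: a builder stage installs wheels, and the `python:3.11-slim` runtime stage copies only the installed packages and the app.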
🚀 Just built a full end-to-end DevOps pipeline

I took the classic Docker voting app (Python, .NET, Node.js, Redis, Postgres) and wired it up with a production-grade CI/CD workflow using Azure DevOps + GitOps principles.

🔧 CI (Azure Pipelines):
• Migrated source code from GitHub → Azure Repos
• Set up self-hosted Windows agents locally
• Automated container image builds & pushes to Azure Container Registry (ACR)

🔁 CD (GitOps with Argo CD):
• Deployed to Azure Kubernetes Service (AKS)
• The CI pipeline updates Kubernetes manifests via a shell script
• Argo CD detects drift and reconciles cluster state automatically

☸️ Infrastructure:
• Provisioned an AKS cluster from scratch
• Configured image pull secrets for private ACR access
• Installed and configured Argo CD end-to-end

💻 Repo: https://lnkd.in/g6qbTc_3

#DevOps #Kubernetes #AzureDevOps #GitOps #ArgoCd #CI_CD #Docker #CloudEngineering #AKS
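The "CI pipeline updates Kubernetes manifests via shell script" step is typically a tag substitution followed by a git commit that Argo CD then picks up. The sketch below creates a stand-in manifest so it is self-contained; the registry path and tag are placeholders, and the git push at the end is shown only as a comment.

```shell
#!/bin/sh
set -eu

# Stand-in manifest so the sketch runs on its own (the real one lives in git)
cat > deployment.yaml <<'EOF'
        image: myregistry.azurecr.io/vote:old-tag
EOF

# Azure Pipelines exposes Build.BuildId as BUILD_BUILDID; default for local runs
NEW_TAG="${BUILD_BUILDID:-12345}"

# Swap whatever tag follows the image name for the new build id
sed -i "s|\(image: myregistry.azurecr.io/vote:\).*|\1${NEW_TAG}|" deployment.yaml

cat deployment.yaml
# In the real pipeline this is followed by:
#   git commit -am "ci: bump vote image to ${NEW_TAG}" && git push
```

The key design point of GitOps is that the script never touches the cluster: it only edits the manifest in git, and Argo CD reconciles the cluster to match.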
🚀 Built k8s-debug: a lightweight CLI to reduce Kubernetes pod debugging time

In large-scale microservices architectures, where hundreds of pods run across namespaces, identifying failing vs healthy pods quickly becomes noisy and time-consuming. Debugging often turns into repetitive kubectl commands, scattered logs, and delayed root-cause identification, especially during incidents.

To simplify this, I built k8s-debug 👇

What it helps surface quickly:
• CrashLoopBackOff patterns
• Failed / unhealthy pods across services
• Termination reasons
• Namespace-level pod health overview

Instead of manually filtering and scanning outputs, this gives a clear, aggregated view of failing vs running pods in one place.

Goal: reduce time-to-diagnosis (MTTR) and improve observability during high-pressure scenarios.

⚡ Install: pip install k8s-debug-tool
Run: k8s-debug --namespace <your-namespace>

📦 PyPI: https://lnkd.in/gNn44_Rs
💻 GitHub: https://lnkd.in/giYkPH_6

This is an early version, currently tested in a Kubernetes test environment. I'm planning to extend it with deeper diagnostics and real-time insights. Would appreciate feedback from engineers managing large-scale Kubernetes workloads.

#DevOps #Kubernetes #SRE #Microservices #CloudEngineering #Python #OpenSource #Automation
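The kind of aggregation such a tool performs can be sketched over plain pod records. This is not k8s-debug's actual code: the record shape loosely mirrors what `kubectl get pods` reports, and the data is fabricated for illustration.

```python
from collections import defaultdict


def summarize_pods(pods):
    """Group pod names by status so failing pods stand out at a glance."""
    by_status = defaultdict(list)
    for pod in pods:
        by_status[pod["status"]].append(pod["name"])
    return dict(by_status)


pods = [
    {"name": "api-1", "status": "Running"},
    {"name": "api-2", "status": "CrashLoopBackOff"},
    {"name": "worker-1", "status": "Running"},
]
print(summarize_pods(pods))
```

Collapsing hundreds of rows into a status-keyed summary like this is what cuts the time-to-diagnosis: you read one short dict instead of scanning `kubectl` output per pod.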