🚀 Built a fully automated CI/CD pipeline — from code to cloud in under 3 minutes.

I've been on a self-learning journey into DevOps, and this week I wanted to stop reading about CI/CD and actually build one. Here's what happens the moment I run git push:

✅ GitHub Actions spins up a fresh Ubuntu VM
✅ 4 automated pytest tests run — if any fail, everything stops
✅ A Docker image is built and pushed to Docker Hub (tagged with the commit SHA for traceability)
✅ The container is deployed live to AWS EC2 via SSH
✅ The old version is replaced with zero manual steps

Total time: ~2 minutes. Human involvement after the push: zero.

What I learned building this:
→ The needs: keyword in GitHub Actions creates a real safety chain — broken code physically cannot reach production
→ Secrets management isn't optional — credentials belong in encrypted GitHub Secrets, never in code
→ Docker layer caching (copying requirements.txt before your app code) can cut build time significantly
→ Tagging images with github.sha means every deployment is traceable to its exact commit — rollback in under 60 seconds

Tech stack: Python · Flask · Docker · GitHub Actions · AWS EC2 · pytest
Live API: http://13.61.177.113:5000

The biggest lesson? Automation isn't about replacing effort — it's about making effort reliable.

#DevOps #CICD #Docker #AWS #GitHubActions #CloudComputing #LearningInPublic #Python
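A pipeline like the one described could be sketched as a GitHub Actions workflow along these lines. This is a minimal illustration, not the author's actual workflow: the job names, secret names, and image name are assumptions.

```yaml
name: ci-cd
on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest        # fresh Ubuntu VM per run
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest               # any test failure stops the pipeline here

  build-and-deploy:
    needs: test                   # the safety chain: runs only if tests pass
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push image tagged with the commit SHA
        run: |
          echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USER }}" --password-stdin
          docker build -t myuser/myapp:${{ github.sha }} .
          docker push myuser/myapp:${{ github.sha }}
      # The final step would SSH to EC2 and pull/run the new tag,
      # with host and key supplied via encrypted secrets.
```

Note how `needs: test` encodes the "broken code cannot reach production" guarantee, and `${{ github.sha }}` gives every image an immutable link back to its commit.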
#DevOpsLearningJourney

🚀 Built my first end-to-end DevOps CI/CD pipeline today! I went beyond just writing code and actually automated the entire delivery process of an application — from a GitHub push to a live server.

🔧 What I built:
A Flask-based Todo app
Dockerized the application
Set up Jenkins on an AWS EC2 instance
Created a CI/CD pipeline using a Jenkinsfile
Pushed Docker images to Docker Hub
Automatically deployed the app on EC2

⚙️ Pipeline flow: GitHub → Jenkins → Docker → Docker Hub → EC2 → Live app

💡 Key learnings:
Writing a Jenkinsfile from scratch
Handling real pipeline issues (credentials, permissions, Docker auth)
Understanding CI/CD beyond theory
Debugging build failures step by step
Managing AWS security groups and ports

🌐 Live app: http://16.170.218.17:5000
📦 Docker image: https://lnkd.in/gZK8f27b
📂 GitHub repo: https://lnkd.in/gdr9d9Gu

This project made one thing very clear:
👉 DevOps is not about tools — it's about automation, reliability, and repeatability.

Next step: Automating deployments with GitHub webhooks 🔥

#DevOps #Jenkins #Docker #AWS #CI_CD #Python #LearningInPublic #SoftwareEngineering
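The flow above could look roughly like this as a declarative Jenkinsfile. This is a hedged sketch, not the author's actual pipeline: the credential ID, image name, and port are illustrative assumptions.

```groovy
// Illustrative Jenkinsfile: build → push to Docker Hub → redeploy on EC2
pipeline {
    agent any
    environment {
        IMAGE = "myuser/todo-app:${env.BUILD_NUMBER}"
    }
    stages {
        stage('Build image') {
            steps { sh 'docker build -t $IMAGE .' }
        }
        stage('Push to Docker Hub') {
            steps {
                // Credentials stored in Jenkins, never in the repo
                withCredentials([usernamePassword(credentialsId: 'dockerhub',
                        usernameVariable: 'USER', passwordVariable: 'PASS')]) {
                    sh 'echo "$PASS" | docker login -u "$USER" --password-stdin'
                    sh 'docker push $IMAGE'
                }
            }
        }
        stage('Deploy') {
            steps {
                // Replace the running container with the new build
                sh 'docker rm -f todo-app || true'
                sh 'docker run -d --name todo-app -p 5000:5000 $IMAGE'
            }
        }
    }
}
```

The credential, permission, and Docker-auth issues the post mentions typically surface in the middle stage, which is why `withCredentials` plus `--password-stdin` is the usual pattern.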
🚀 Learning Update | Docker & DevOps Fundamentals

Here's what I worked on recently:

🔹 Docker Concepts
Studied core Docker concepts including:
• Dockerfile
• Image layers & caching
• Best practices for efficient builds

🔹 Hands-on Implementation
Created a multi-stage Dockerfile for a Node.js application to improve build efficiency.

🔹 Optimization
Reduced image size using:
• .dockerignore
• Slim base images
• Layer caching techniques ⚡

🔹 Docker Compose Setup
Built a setup with:
• Node.js service
• PostgreSQL service

🔹 Testing & Configuration
• Verified services build, run, and communicate correctly
• Configured environment variables, volume mounts, and health checks

🔹 Code Sharing
Pushed Dockerfile and docker-compose.yml to GitHub for reference and reuse.

Strengthening my DevOps fundamentals step by step.

#Docker #DevOps #NodeJS #PostgreSQL #LearningInPublic #GrowthMindset
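A Compose file for the Node.js + PostgreSQL setup described might look like this. It is a minimal sketch under assumed names: the service names, ports, and credentials are placeholders, not the author's actual configuration.

```yaml
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/appdb
    depends_on:
      db:
        condition: service_healthy   # start app only after Postgres is ready
  db:
    image: postgres:16-alpine        # slim base image
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: appdb
    volumes:
      - pgdata:/var/lib/postgresql/data   # data survives container removal
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d appdb"]
      interval: 5s
      retries: 5

volumes:
  pgdata:
```

The `condition: service_healthy` dependency paired with a `pg_isready` health check is what makes "verified services build, run, and communicate correctly" reliable rather than a race.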
🗓️ Day 35/100 — 100 Days of AWS & DevOps Challenge

Containerization chapter begins. Today: installing Docker CE and Docker Compose on the app server. Simple task on the surface — but worth explaining what's actually being installed, because it's not just one thing.

The commands to install Docker:

$ sudo yum-config-manager --add-repo https://lnkd.in/gVPqThME
$ sudo yum install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
$ sudo systemctl start docker
$ sudo systemctl enable docker
$ sudo docker run hello-world   # verification test

The Docker stack has three layers:
• docker-ce is the daemon — the background process that manages everything.
• docker-ce-cli is the command-line client that talks to it.
• containerd.io is the actual container runtime that creates and manages containers at the OS level.

When you run docker run nginx, the CLI talks to the daemon, which talks to containerd, which uses runc to create the container. Three components working together.

docker-compose-plugin vs the old docker-compose: the modern Compose is a Docker CLI plugin — invoked as docker compose (no hyphen). The old docker-compose with a hyphen was a separate Python binary and is now deprecated. If you see pipelines or docs using docker-compose, they're using legacy tooling. The modern version is faster, actively maintained, and ships as part of Docker's plugin architecture.

Full Docker architecture breakdown + Q&A on GitHub 👇
https://lnkd.in/gKhHi-K6

#DevOps #Docker #Containers #Linux #100DaysOfDevOps #KodeKloud #LearningInPublic #CloudEngineering #Kubernetes #Containerization #CICD
Excited to share my latest hands-on project in my DevOps journey! I recently completed a Docker assignment where I successfully containerized a real application using Python and Flask.

🔹 What I did in this project:
Built a simple web application using Flask
Created a Dockerfile to containerize the app
Built and ran Docker containers locally
Exposed the application using port mapping
Pushed the Docker image to Docker Hub
Managed images and containers using the Docker CLI

🔹 Key concepts I learned:
How Docker images and containers work
Writing efficient Dockerfiles
Docker networking and port mapping
The importance of containerization in modern DevOps

Why this matters: containerization helps developers package applications with all their dependencies, ensuring consistency across development, testing, and production environments.

🔗 GitHub Repository: https://lnkd.in/dPH5SCme
🔗 Docker Hub Image: https://lnkd.in/dC6-_uGY

This is just the beginning — more DevOps projects coming soon 🚀

#Docker #DevOps #Learning #Cloud #GitHub #BeginnerToPro
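A typical Dockerfile for a Flask app of this kind is short. This is a generic sketch, not the author's actual file: the file names and port are assumptions.

```dockerfile
FROM python:3.12-slim
WORKDIR /app

# Copy requirements first so the dependency layer is cached
# and doesn't rebuild on every code change
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
EXPOSE 5000
CMD ["python", "app.py"]
```

Build and run with port mapping, assuming hypothetical names: `docker build -t myuser/flask-demo .` then `docker run -p 5000:5000 myuser/flask-demo`, after which the app is reachable on the host's port 5000.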
Beyond the Code: Architecting a Hybrid-Cloud DevSecOps Pipeline

I'm thrilled to share that I have successfully deployed my latest project—a professional Python microservice—live on an AWS EC2 instance using a custom, hybrid CI/CD architecture!

Most projects stop at "it works on my machine." I wanted to build something that reflects real-world enterprise standards. This project wasn't just about writing Python; it was about orchestrating a secure, automated path from the first line of code to a live production server.

The Technical Core
• Application: A high-performance FastAPI microservice with a modern, responsive dashboard styled via Tailwind CSS.
• The CI Layer (GitHub): Automated unit testing and linting using GitHub Actions to ensure every Pull Request is production-ready.
• The "Enterprise" Layer (GitLab): I configured a Self-Hosted GitLab Runner on an AWS EC2 instance to handle deep security analysis and Docker builds.
• Security & Quality: Integrated SonarQube as a mandatory Quality Gate, ensuring zero vulnerabilities and high code coverage before deployment.

The AWS Deployment
The final stage of the pipeline uses automated SSH-based deployment to manage a containerized environment on AWS. By using Docker-in-Docker (DinD) and secure secret management, the application is seamlessly updated without manual intervention.

Key Lessons Learned:
• Self-Hosted Infrastructure: Configuring my own GitLab Runner on EC2 provided deep insights into Linux administration, Docker executors, and cloud networking.
• DevSecOps Integration: Security isn't a final step; it's a constant. SonarQube taught me how to catch technical debt before it becomes a problem.
• Hybrid Orchestration: Learning to bridge GitHub and GitLab showed me how to design flexible, tool-agnostic workflows.

A huge thank you to the community for the guidance during this build!

Check out the live code and the full architecture on GitHub: https://lnkd.in/eGYU99bq

#DevOps #CloudEngineering #AWS #Python #FastAPI #GitLab #GitHubActions #SonarQube #Docker #SoftwareEngineering #TechNigeria #DevSecOps #CloudComputing2026 #PythonDevelopment #DevOpsProject
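The GitLab side of a pipeline like this, with SonarQube as a blocking quality gate ahead of a DinD Docker build, could be sketched as the following .gitlab-ci.yml fragment. It is illustrative only, not the author's actual configuration; variable names are assumptions.

```yaml
stages:
  - quality
  - build

sonarqube-check:
  stage: quality
  image: sonarsource/sonar-scanner-cli:latest
  script:
    # -Dsonar.qualitygate.wait=true makes the job fail if the
    # SonarQube Quality Gate fails, blocking the build stage
    - sonar-scanner -Dsonar.qualitygate.wait=true
  variables:
    SONAR_HOST_URL: $SONAR_HOST_URL   # supplied as CI/CD variables,
    SONAR_TOKEN: $SONAR_TOKEN         # never committed to the repo

docker-build:
  stage: build
  image: docker:27
  services:
    - docker:27-dind                  # Docker-in-Docker, as described above
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
```

Because stages run in order, nothing is built (let alone deployed) until the gate passes, which is what makes the gate "mandatory" rather than advisory.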
🗓️ Day 38/100 — 100 Days of AWS & DevOps Challenge

Today: pull an image, give it a new tag. Two commands.

$ sudo docker pull busybox:musl
$ sudo docker tag busybox:musl busybox:blog

Same Image ID. That's the detail worth understanding.

docker tag doesn't copy anything. It creates a new pointer to the same underlying image layers. Both busybox:musl and busybox:blog share the same 1.41MB of storage — tagging is free in terms of disk space. You can have 50 tags on the same image and it still only occupies the storage of one image.

Why this matters in CI/CD pipelines: this is exactly how image promotion works in production. A build produces myapp:build-456. Tests pass. The pipeline re-tags it:

$ docker tag myapp:build-456 myapp:staging
$ docker tag myapp:build-456 myapp:latest

No new image is created. No layers are duplicated. The same image — tested and verified — now carries multiple tags that represent its promotion status.

When production needs a rollback:

$ docker tag myapp:build-455 myapp:latest

One command. The previous build is live again. Because tags are just labels.

One more concept worth knowing: tags are mutable. busybox:latest today might be a different image tomorrow when the maintainer updates it. If you need to pin to a specific image permanently, use the digest:

$ docker pull busybox@sha256:abc123def...

A digest is immutable — it always refers to the exact same image layers regardless of when or where it's used. For production deployments, prefer digests over tags.

Full tagging guide + Q&A on GitHub 👇
https://lnkd.in/gvUtPawg

#DevOps #Docker #Containers #CICD #Linux #100DaysOfDevOps #KodeKloud #LearningInPublic #CloudEngineering #ContainerRegistry #Kubernetes
🗓️ Day 28/100 — 100 Days of AWS & DevOps Challenge

Today's task: a developer has in-progress work on a feature branch, but one specific commit is ready and needs to go to master right now, without dragging the rest of the unfinished work along. This is exactly what git cherry-pick is for.

# Find the commit hash on the feature branch
$ git log feature --oneline
# abc5678 Update info.txt ← this one

# Switch to master and cherry-pick it
$ git checkout master
$ git cherry-pick abc5678

# Push
$ git push origin master

One commit. Surgically applied. Feature branch untouched.

1. Why not just merge the feature branch?
The feature branch has in-progress commits: code that isn't tested, isn't ready, and would break things on master. git merge feature brings ALL of it over. Cherry-pick takes only what's ready.

2. When this pattern matters in production:
A critical bug fix lands on a development branch. You can't merge the whole branch; there are half-finished features alongside the fix. You cherry-pick the fix onto master and onto any active release branches. This is how security patches get backported across multiple versions in open-source projects. Same concept, same tool.

The command to find a commit by message when you don't have the hash handy:

$ git log --all --oneline --grep="Update info.txt"

Saves time when the branch has many commits and you're looking for one specific one.

Full breakdown on GitHub 👇
https://lnkd.in/gVHV9qPc

#DevOps #Git #VersionControl #CherryPick #GitOps #100DaysOfDevOps #KodeKloud #LearningInPublic #CloudEngineering #CICD #Hotfix
🚫 Stop Watching Docker Tutorials 🚫

You don't learn Docker by watching someone else type commands. You learn it when things break… and you fix them. If you're still stuck in tutorial mode, here's your wake-up call:

🔥 Build This Instead:

1️⃣ Run Your First Container
Stop overthinking. Run Nginx. Map ports. Hit localhost. See it live.

2️⃣ Build Your Own Image
Write a Dockerfile from scratch. Break it. Fix it. Run your own app.

3️⃣ Make Data Survive
Use volumes. Delete the container. If your data is gone, you did it wrong.

4️⃣ Connect Containers Like a Real System
Create a network. App talks to DB. No localhost shortcuts. Real communication.

5️⃣ Go Multi-Container
Write your first docker-compose.yml. Spin up the full stack in one command.

6️⃣ Push to the Real World
Tag your image. Push to Docker Hub. Pull it somewhere else. If it works — you're learning.

7️⃣ Optimize Like an Engineer
Use multi-stage builds. Cut your image size down brutally.

8️⃣ Automate Everything
Set up GitHub Actions. Push code → build image → ship it automatically.

Watching tutorials feels productive, but it isn't. Building systems and gaining practical knowledge is. Do this once properly and you won't need another Docker tutorial again.

#Docker #DevOps #Cloud #Engineering #CICD
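Step 7️⃣ above, sketched as a multi-stage Dockerfile for a Node.js app. A minimal illustration under assumed names (build output in dist/, entrypoint dist/server.js), not a definitive recipe:

```dockerfile
# Stage 1: full toolchain, used only to compile/bundle the app
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: slim runtime image; the build toolchain never ships
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/package*.json ./
RUN npm ci --omit=dev          # production dependencies only
CMD ["node", "dist/server.js"]
```

Only the final stage becomes the shipped image, so dev dependencies, compilers, and source files are all left behind in the discarded build stage.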
🚀 From Beginner to Advanced in Docker — Hands-On Journey Completed! 🐳

I recently completed a comprehensive Docker hands-on assignment series that covers everything from basics to real-world production practices — and honestly, this is what real DevOps learning should look like.

📘 Based on a structured guide with 8 practical assignments, full solutions, and deep explanations

🔥 What I Learned (Real Skills, Not Just Theory)

✅ Docker Fundamentals
• The difference between images and containers
• Running containers with port mapping & networking
• Debugging real issues (permissions, port conflicts)

✅ Custom Image Building
• Writing optimized Dockerfiles
• Layer caching for faster builds
• Building Node.js applications inside containers

✅ Data Persistence (Critical for Production)
• Using Docker Volumes
• Preventing data loss in containers
• Running stateful apps like MySQL

✅ Container Networking
• Creating custom networks
• Service-to-service communication using DNS
• Connecting apps like Node.js ↔ Redis

✅ Docker Compose (Real-World Setup)
• Multi-container architecture (Frontend + Backend + DB)
• Service dependencies & health checks
• One command to run the entire stack

🚀 Advanced Concepts That Actually Matter

💡 Docker Hub & Image Management
• Tagging, pushing & pulling images
• Why the latest tag is dangerous in production

💡 Multi-Stage Builds (Game Changer)
• Reduced image size from ~850MB → ~10MB
• Smaller images = faster deployments + better security

💡 CI/CD with GitHub Actions
• Automated Docker build & push pipelines
• Secure secret management
• Production-ready DevOps workflow

⚡ Big Takeaways
👉 Docker is not just about running containers — it's about building scalable, reproducible, production-ready systems
👉 The difference between beginner and pro = understanding HOW IT WORKS (not just commands)
👉 Real DevOps skill = hands-on + troubleshooting + optimization + automation

🧠 My Honest Take
Most people think they know Docker because they ran docker run nginx. That's surface-level. If you don't understand:
• Layers
• Volumes
• Networking
• CI/CD pipelines
👉 You're not production-ready yet.

📌 What's Next? Moving deeper into:
• Kubernetes 🔥
• Terraform 🌍
• Full DevSecOps pipelines ⚙️

💬 If you're learning DevOps — stop watching tutorials endlessly. Start building like this.

#Docker #DevOps #Cloud #Kubernetes #CI_CD #GitHubActions #Terraform #LearningInPublic #DevOpsJourney #Containerization
Docker Hub rate limits will quietly break your Kubernetes cluster.

At 100 anonymous pulls per 6 hours, your cluster is a ticking time bomb. One bad deployment loop or a cluster-wide node upgrade, and suddenly your pods are stuck in ImagePullBackOff or throwing cryptic "unexpected EOF" errors.

I spent hours debugging these cascading failures. Here's exactly how I solved it using Harbor as a proxy cache, completely automated with Kyverno and Terraform.

❌ The Problem: Manual Management Doesn't Scale
Updating all manifests to point to your local registry breaks upstream compatibility and turns updates into a chore.

🔧 The Solution: The Invisible Rewrite
Instead of touching application manifests, I use a mutating Kyverno webhook. My `mutate-image-to-harbor` policy automatically intercepts Pods and rewrites image paths:

nginx:latest → https://lnkd.in/eVZxQzy9

A second Kyverno generation policy creates ExternalSecrets in every namespace, syncing pull credentials from Vault — completely invisible to Git.

🏗️ Terraform & Harbor: Hard Lessons
1️⃣ No multiplexing upstreams — a strict 1:1 mapping between proxy project and upstream registry is required
2️⃣ Use the generic adapter — `docker-registry` works reliably for all upstreams
3️⃣ There is no `proxy_cache = true` flag — set `registry_id` on the project instead
4️⃣ Authenticate the proxy target — an unauthenticated Docker Hub upstream still hits the 100/6h limit

Zero rate limits, fast image pulls, unmodified upstream manifests.

How are you handling image pulls in your clusters? 👇
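A mutating policy in the spirit of the post's `mutate-image-to-harbor` could be sketched as follows. This is a hedged illustration, not the author's actual policy: the Harbor hostname and project are placeholders, the naive prefix only handles unqualified Docker Hub names like `nginx:latest`, and a production policy would need extra logic for fully qualified references and init containers.

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: mutate-image-to-harbor
spec:
  rules:
    - name: rewrite-to-proxy-cache
      match:
        any:
          - resources:
              kinds: [Pod]
      mutate:
        foreach:
          # Iterate over every container and prefix its image with
          # the Harbor proxy-cache project
          - list: request.object.spec.containers
            patchStrategicMerge:
              spec:
                containers:
                  - name: "{{ element.name }}"
                    image: "harbor.example.com/dockerhub-proxy/{{ element.image }}"
```

Because the rewrite happens at admission time, Git manifests keep their upstream image names, which is exactly the "invisible rewrite" property the post describes.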