🚀 BUILDING IN PUBLIC | PART 2 | How I Built Code Quality Gates and Kubernetes into a CI/CD Pipeline — And What It Taught Me

A few months ago, the term "CI/CD pipeline" felt intimidating. Today, I can build one from scratch. Here's what I learned 👇

What is a CI/CD Pipeline?
It's the backbone of modern software delivery — automating the journey of code from a developer's laptop to a live production environment, without manual intervention.

The stages I learned to build:
🔹 Source — Code pushed to GitHub triggers everything. No push, no pipeline.
🔹 Build — The code gets compiled, dependencies are installed, and a Docker image is created.
🔹 Test — Automated tests run. If they fail, the pipeline stops. No broken code moves forward.
🔹 Deploy — The image is pushed to a registry and deployed to the target environment — whether that's a cloud server or a Kubernetes cluster.

Tools I got hands-on with:
→ Git & GitHub for version control
→ Docker for containerization
→ Jenkins / GitHub Actions for automation
→ Kubernetes for orchestration
→ Linux as the foundation for everything

The biggest lesson? CI/CD isn't just a tool — it's a mindset. Ship small, ship fast, catch errors early. Every failed pipeline taught me more than a successful one ever did.

📌 This is Part 2 of my DevOps learning series. Part 3 is coming soon — Monitoring & Observability. I'll be covering Prometheus, Grafana, alerting, and how to actually know when your system is breaking before your users do.

Follow along if you're on the same journey 🙌 Drop a comment — are you also learning DevOps? Let's connect!

#DevOps #CICD #CloudComputing #Kubernetes #Docker #Linux #AWS #LearningInPublic #DevOpsEngineer #CloudEngineer
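The four stages above can be sketched as a single script. This is a minimal illustration, not the author's actual pipeline: the docker/kubectl commands in the comments are placeholders, each stage here just runs `true`, and `set -e` plays the role of the "no broken code moves forward" gate.

```shell
#!/usr/bin/env bash
# Minimal sketch of the four CI/CD stages. Each run_stage call stands in
# for a real CI job; a failing stage command aborts everything after it.
set -eu

stage_count=0
last_stage=""
run_stage() {
  last_stage="$1"
  stage_count=$((stage_count + 1))
  echo "== $1 =="
  shift
  "$@"    # run the stage's command; a non-zero exit stops the pipeline here
}

run_stage "Source" true   # in a real pipeline: the git push trigger itself
run_stage "Build"  true   # e.g. docker build -t myapp:latest .
run_stage "Test"   true   # e.g. npm test; failure stops Deploy from running
run_stage "Deploy" true   # e.g. docker push && kubectl apply -f k8s/
echo "pipeline finished after ${stage_count} stages"
```

Swap any `true` for a failing command and the stages after it never run, which is exactly the gating behavior the post describes.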
Building CI/CD Pipelines with Kubernetes and Docker
More Relevant Posts
Day 14 – Running Your First Docker Container

Day 14 of my 30-Day DevOps learning journey. Today I focused on running my first Docker container and understanding how applications run inside containers.

What is a Docker Container?
A Docker container is a lightweight, portable environment that includes:
• Application code
• Runtime
• System libraries
• Dependencies
This ensures the application runs the same on any system.

Steps to Run Your First Container

1. Pull an image from Docker Hub (the public registry where Docker images are stored):
docker pull nginx

2. Run the container:
docker run -d -p 80:80 nginx
Explanation:
-d → run the container in the background
-p 80:80 → publish the container's port 80 on host port 80

Now the application runs inside a container.

3. Check running containers:
docker ps
This command lists all running containers.

4. Stop a container:
docker stop <container-id>

Why Containers Matter in DevOps
• Faster deployments
• Consistent environments
• Easy scaling
• Works perfectly with CI/CD pipelines
Containers make it easier to move applications from development → testing → production without compatibility issues.

Tomorrow: Docker Images & Dockerfile – how containers are built.

Do follow me for more content on DevOps. Please check out my GitHub repo and share your suggestions — I have created some basic projects: https://lnkd.in/gXTYxXXm

A special thanks to Shubham Londhe & Abhishek Veeramalla for the guidance and the tutorials.

#DevOps #Docker #Containers #CICD #CloudComputing #AWS #Jenkins #Kubernetes #Linux #DevOpsEngineer #TechLearning
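The order of the ports in `-p` is easy to get backwards: it is HOST:CONTAINER. A tiny shell illustration (the 8080:80 mapping is an example I chose, not from the post) makes the two sides explicit:

```shell
# Split a Docker -p HOST:CONTAINER mapping to show which side is which.
# With -p 8080:80, requests to host port 8080 reach the container's port 80.
mapping="8080:80"
host_port="${mapping%%:*}"      # text before the ':' is the host port
container_port="${mapping##*:}" # text after the ':' is the container port
echo "host ${host_port} -> container ${container_port}"
```

So `-p 80:80` happens to look symmetric, but something like `-p 8080:80` only works if you keep the host side on the left.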
🚀 From 20-minute manual deploys to 2-minute automation: My Week 4 DevOps Journey

Four weeks ago, I was manually SSH'ing into servers, running Docker commands, and hoping nothing would break. Today, I just push code and watch it deploy automatically to production. This is the power of CI/CD, and I built it from scratch.

What I shipped: A complete CI/CD pipeline that automatically builds, tests, and deploys my portfolio site to AWS with every commit.

The stack:
• GitHub Actions (orchestration)
• Docker Buildx (optimized builds)
• AWS EC2 (production server)
• Bash + SSH (deployment automation)

The Real Impact:
⚡ Before: 15-20 minutes per deployment
⚡ After: 2 minutes, fully automated
⚡ Build time: 70% faster with caching
⚡ Manual steps: zero

But Here's What I'm Most Proud Of:
I didn't just follow tutorials. I built everything manually first, hit every wall, and debugged every error. When variables didn't expand in SSH context? I learned about execution contexts and heredoc. When cache didn't work? I dove deep into Docker layers and understood Buildx.

The Approach That Made the Difference:
I chose to write everything in Bash instead of using pre-built GitHub Actions. Why make it harder on myself? Because when things break in production (and they will), I know EXACTLY what's happening and how to fix it. No black boxes. No magic. Just solid understanding.

Key Technical Wins:
✅ Docker Buildx with GitHub Actions cache (type=gha)
✅ SSH automation with heredoc for remote execution
✅ Multi-tag strategy for easy rollbacks
✅ Secrets management done right

Four weeks ago, I was nervous about SSH'ing into a server. Today, I have a pipeline that automates it all. The difference? Consistent daily learning and building in public.

Learning in Public: I'm documenting everything: the wins, the failures, the "why did I think that would work?" moments. Complete code and detailed notes are all on my GitHub. Want to follow along? I'm sharing every step of this journey.
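The "variables didn't expand in SSH context" problem mentioned above usually comes down to heredoc quoting. This standalone sketch (no actual ssh call, just the heredoc mechanics, with a hypothetical `IMAGE` variable) shows the difference:

```shell
# Heredoc quoting decides WHERE a variable expands. With ssh, the unquoted
# form expands on the local machine before the text is sent; the quoted
# form ships the literal $VAR so the remote shell expands it instead.
IMAGE="myapp:latest"   # hypothetical image tag, defined locally

unquoted=$(cat <<EOF
deploying $IMAGE
EOF
)

quoted=$(cat <<'EOF'
deploying $IMAGE
EOF
)

echo "unquoted -> $unquoted"   # variable already substituted locally
echo "quoted   -> $quoted"     # literal $IMAGE survives for the remote side
```

With `ssh user@host bash <<EOF`, the unquoted form lets you inject local values (like an image tag computed in CI) into the remote script; the quoted form is what you want when the variable should be resolved on the server.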
#DevOps #CICD #Docker #AWS #GitHubActions #LearningInPublic #CloudComputing #TechCareer #100DaysOfCode #SoftwareEngineering
Day 15 – Docker Images & Dockerfile Basics

Day 15 of my 30-Day DevOps learning journey. Today I learned about Docker images and the Dockerfile, which are used to build containers in Docker.

What is a Docker Image?
A Docker image is a read-only template that contains everything needed to run an application:
• Application code
• Runtime environment
• Libraries and dependencies
• System tools
When we run an image, it creates a Docker container.

What is a Dockerfile?
A Dockerfile is a text file that contains the instructions used to build a Docker image. It defines:
• Base image
• Application files
• Dependencies
• Commands to run the application

Example Dockerfile:
FROM node:18
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "app.js"]

Explanation:
FROM → base image
WORKDIR → working directory inside the container
COPY → copy project files into the container
RUN → install dependencies
CMD → run the application

Build the image:
docker build -t myapp .

Run the container:
docker run -d -p 3000:3000 myapp

Docker images and Dockerfiles make applications portable, consistent, and easy to deploy, which is why they are widely used in DevOps and CI/CD pipelines.

Tomorrow: Docker Volumes & Data Persistence

Do follow me for more content on DevOps. Please see my GitHub account: https://lnkd.in/gXTYxXXm

#DevOps #Docker #Containers #CICD #CloudComputing #AWS #Jenkins #Kubernetes #Linux #DevOpsEngineer #TechLearning
Shubham Londhe TrainWithShubham Abhishek Veeramalla
🚀 Completed: 20 Days of Docker Challenge 🐳

20 days ago, I started a personal challenge to deeply understand Docker, from fundamentals to real production usage. Instead of just learning commands, I focused on how Docker is actually used in real DevOps environments. Here are the key things I learned during this journey 👇

🔹 Docker Fundamentals
• Containers vs virtual machines
• Docker architecture
• Images vs containers
• Writing production-ready Dockerfiles

🔹 Container Optimization
• Multi-stage builds
• Image size optimization
• Layer caching

🔹 Storage & Networking
• Docker volumes
• Bind mounts vs volumes
• Docker networking (bridge, host, overlay)

🔹 Troubleshooting & Debugging
• Container logs
• Debugging crash loops
• Resource monitoring

🔹 CI/CD Integration
• Docker + Jenkins pipelines
• Container registries (Docker Hub, ECR)
• Automated deployments

🔹 Production Best Practices
• Environment variables & secrets
• Security best practices
• CPU & memory resource limits
• Zero-downtime deployments

🔹 Real DevOps Workflow
Developer → Git → CI/CD Pipeline → Docker Image → Container Registry → Deployment → Monitoring

This challenge helped me understand that:
✔️ Docker is not just about containers
✔️ It enables consistent environments
✔️ It simplifies CI/CD pipelines
✔️ It improves deployment reliability

Next step in my learning journey:
➡️ Kubernetes & cloud-native infrastructure

Thanks to everyone who followed this journey and shared feedback along the way. If you're learning DevOps, I highly recommend trying a learning challenge like this. Consistency compounds over time.

To read all blogs: https://lnkd.in/gg_N6Fda

#Docker #DevOps #Containers #LearningInPublic #Cloud #CI_CD
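A multi-stage build, one of the optimization topics listed above, can look roughly like this. This is a hedged sketch of the pattern, not a file from the challenge: it assumes a Node project with a "build" script, and all the paths (`dist/`, `dist/app.js`) are made up for illustration.

```dockerfile
# Stage 1: build with the full toolchain (hypothetical Node app, mirroring
# the node:18 example used elsewhere in this feed)
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build            # assumes the project defines a "build" script

# Stage 2: ship only what the app needs at runtime; the build-stage layers
# (compilers, dev dependencies, caches) never reach the final image, which
# is where the size savings come from
FROM node:18-slim
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/app.js"]
```

Because each `COPY`/`RUN` line is its own cached layer, copying `package*.json` before the rest of the source means dependency installation is only re-run when the dependencies actually change.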
🚨 DevOps Learning: When GitHub Rejects Your Push Because of Large Files

Today I ran into an interesting Git issue while pushing my DevSecOps project to GitHub from an EC2 instance. Everything looked fine locally, but GitHub rejected my push with this error:

GH001: Large files detected

The reason? Some binaries were accidentally committed to the repository:
• argocd-linux-amd64 (205 MB)
• awscliv2.zip (63 MB)
• kubectl (55 MB)

GitHub limits file sizes:
⚠ Recommended: under 50 MB
❌ Maximum: 100 MB
So the push failed.

🔧 The Fix
1️⃣ Remove the files from Git tracking:
git rm --cached <file>
2️⃣ Add them to .gitignore
3️⃣ Clean the Git history, because the large files still exist in previous commits:
git filter-branch --force --index-filter 'git rm --cached --ignore-unmatch <file>' --prune-empty --tag-name-filter cat -- --all
4️⃣ Force-push the cleaned history:
git push origin main --force

💡 DevOps Best Practice
Never commit binaries like:
• kubectl
• awscli zip files
• ArgoCD binaries
Instead, install them via scripts or package managers in your setup pipeline.

This was a great reminder that Git tracks history, not just current files. Every small issue in DevOps is a learning opportunity 🚀

#DevOps #Git #GitHub #Kubernetes #ArgoCD #Terraform #LearningByDoing
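The first two steps of the fix can be reproduced end-to-end in a throwaway repo (history rewriting is omitted here since it destroys commits; the file name `big.bin` is just a stand-in for the real binaries):

```shell
# Reproduce the fix in a temporary repo: commit a file, untrack it with
# `git rm --cached`, and confirm it is gone from the index but kept on disk.
repo=$(mktemp -d)
cd "$repo"
git init -q .
echo "pretend-binary" > big.bin
git add big.bin
git -c user.email=demo@example.com -c user.name=demo \
    commit -q -m "oops: committed a binary"
git rm -q --cached big.bin          # untrack it (the working-tree file stays)
echo "big.bin" >> .gitignore        # keep it from being re-added later
tracked=$(git ls-files big.bin)     # empty output: no longer in the index
if [ -f big.bin ]; then on_disk=yes; else on_disk=no; fi
echo "tracked='${tracked}' on_disk=${on_disk}"
```

Worth noting for step 3: the Git documentation now steers users away from `git filter-branch` toward `git filter-repo` for history rewriting, though the `filter-branch` command in the post does work.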
🚀 From “docker run” to Mastering Containers — Day 29 & 30 of My DevOps Journey

For a long time, Docker felt like magic. You type:
docker run nginx
And suddenly… a web server is running. But what actually happens behind the scenes? Over the last 2 days, I stopped just running containers — and started understanding how they truly work.

Day 29: https://lnkd.in/dbz6jhcA
Day 30: https://lnkd.in/dFczzYdj

📦 Day 29 – Docker Fundamentals
✔ What containers really are
✔ Containers vs virtual machines (the real difference)
✔ Docker architecture (Client → Daemon → Image → Container → Registry)
✔ Ran Nginx in the browser
✔ Explored an Ubuntu container interactively
✔ Managed container lifecycle basics

The biggest realization? Containers don’t virtualize hardware. They virtualize the OS layer. That’s why they’re lightweight. That’s why they’re fast. That’s why modern DevOps runs on them.

🧠 Day 30 – Images & Container Lifecycle
This is where things got serious.
✔ Pulled nginx, ubuntu, alpine
✔ Compared image sizes (Alpine ≈ 5 MB 😳)
✔ Explored image layers using docker image history
✔ Understood layer caching & optimization
✔ Practiced the full container lifecycle: Create → Start → Pause → Stop → Restart → Kill → Remove
✔ Inspected containers for IP, ports, mounts
✔ Cleaned up Docker disk usage

Now I understand:
🔹 Images are layered, read-only templates
🔹 Containers are running instances
🔹 Layers make builds faster
🔹 Caching reduces CI/CD build time
🔹 Cleanup prevents disk bloat

💡 Why This Matters
Every modern system today uses:
• CI/CD pipelines
• Kubernetes
• Microservices
• Cloud-native deployments
And they all start with Docker. If you don’t understand images & lifecycle, you don’t truly understand modern deployment.

🔥 What Changed For Me
Before: “I can run Docker.”
Now: “I understand how Docker works internally.”
That shift is powerful.

Day 29 ✅ Day 30 ✅
DevOps consistency > motivation.
On to Dockerfiles next 🚀

#DevOps #Docker #CloudComputing #Linux #Containers #LearningInPublic #100DaysOfDevOps #BuildInPublic #TechJourney #DevOpsKaJosh #TrainWithShubham
🚀 From Manual Deployments to Automated CI/CD with Docker & GitHub Actions

A while ago, deploying my application looked something like this: SSH into the server → pull the latest code → rebuild the app → restart services → and silently pray nothing breaks 😅 It worked, but it always felt slow, repetitive, and a bit risky. So I finally took some time to automate the process using Docker and GitHub Actions, and honestly, it made deployments much smoother.

Now the flow is simple:
• Push code to GitHub
• GitHub Actions triggers the pipeline automatically
• A Docker image gets built and tagged
• The image is pushed to a container registry
• The server pulls the latest image and redeploys the container
That's it. No manual deployment steps anymore.

What I liked most about this setup:
⚡ Deployments are much faster
🔁 Same environment everywhere, thanks to Docker
🛡 Fewer chances of breaking things manually
📦 Clean, reproducible builds

Stack used: Docker | GitHub Actions | Linux Server | SSH | Container Registry

It's a small DevOps improvement, but it makes development much more reliable and stress-free. Next thing I want to experiment with: zero-downtime deployments and Kubernetes.

If you're still doing manual deployments, setting up a simple CI/CD pipeline is definitely worth the effort.

#Docker #CICD #GitHubActions #DevOps #Automation #SoftwareDevelopment
Stop being a "YAML Engineer." Start becoming a Kubernetes Architect.

Most people in DevOps stay stuck at the surface: kubectl apply -f manifest.yaml. It's a great way to start, but if you want to build the next generation of infrastructure, you have to go deeper. You have to stop just using the tools and start building them.

The tools we use every day (Kubernetes, Docker, Terraform) aren't built with YAML. They are built with Go. Every self-healing system in K8s is driven by a Controller. It's the "brain" that actually makes the cluster smart. Moving from writing manifests to writing custom Controllers is how you transition from a user to a Platform Architect.

When I started diving into building custom Controllers in Go, I realized one thing quickly: the learning curve for client-go is unnecessarily steep. Most tutorials show you a basic "Hello World" operator, but they completely skip the hard parts you actually face when deploying to a real cluster:
- Handling race conditions in the Reconcile loop.
- Managing the cache so your controller doesn't eat up your RAM.
- Writing tests locally without spinning up a heavy cluster.

I spent countless hours digging through source code and documentation to figure out these patterns. To save others the headache, I put together "The K8s Controller Cheat Sheet". It contains the exact snippets and logic I use to avoid those common traps. I'm thinking of turning this into a full deep-dive course, but I want to share this cheat sheet first.

Want a copy of the Cheat Sheet? Drop a "BUILD" in the comments below, and I'll DM you the link. Let's build something more interesting than just another Deployment manifest.

#kubernetes #golang #devops #platformengineering #softwarearchitecture
A few days ago, while developing backend systems, I ran into a very common developer problem: “It works on my machine… but not on others.”

The application was running perfectly on my system, but when someone else tried to run the same project, it failed because of different environments, dependencies, and configurations. That's when I came across DevOps. I decided not to just learn it casually, but to deeply understand and master it.

Over the past few days, while exploring DevOps, I have been working on strengthening my fundamentals and learning the tools that make modern software deployment reliable. So far I have learned and practiced:
• Introduction to Docker
• Docker architecture
• Installing and setting up Docker
• Docker images and containers
• Writing Dockerfiles
• Docker networking
• Docker volumes and storage
• Docker Compose
• Docker Registry
• Multi-stage Docker builds
• Monitoring and logging in Docker
• Introduction to container orchestration with Kubernetes (conceptual overview)

To apply these concepts practically, I built two projects.

Project 1: Django Notes App
🔹 Wrote the Dockerfile and docker-compose configuration
🔹 Learned how Nginx works as a reverse proxy
🔹 Deployed the application on my Ubuntu server
GitHub repo: https://lnkd.in/gYGHeVFH

Project 2: Spring Boot Expense Tracker (Three-Tier Architecture)
🔹 Cloned the repository from mohamed0sawy's GitHub
🔹 The project originally had no Docker setup, no Nginx configuration, and no docker-compose file
🔹 Built the entire containerized setup from scratch
🔹 Configured an Nginx reverse proxy so that port 80 routes traffic to the application
🔹 Implemented persistent data storage using Docker volumes so the data remains safe even if the application crashes
🔹 Added the video of this project below at 2x to save time
GitHub repo: https://lnkd.in/gUDAvuMr

It was exciting to see how containerization makes applications portable, reproducible, and consistent across different machines.

Next step: Learning about Kubernetes to orchestrate my containers in production.

Huge thanks to Shubham Londhe (#devopswalebhaiya) for explaining DevOps concepts and Docker in such a simple and practical way. Still learning and building 🚀

#docker #devops #backenddevelopment #linux #learninginpublic #softwareengineering #devsecops
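The Nginx reverse-proxy setup described in both projects boils down to a server block like this. A minimal sketch, assuming the app listens on port 8000 inside the Compose network under the service name `app`; both names are illustrative, not taken from the repos.

```nginx
# Minimal reverse-proxy sketch: Nginx listens on port 80 and forwards
# traffic to the application container.
server {
    listen 80;

    location / {
        proxy_pass http://app:8000;          # Compose service name, resolved via Docker's internal DNS
        proxy_set_header Host $host;         # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The key idea is that only Nginx publishes a host port; the app container stays internal, reachable solely through the Compose network by its service name.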
🚀 3 Kubernetes Errors I Faced While Building My CI/CD Pipeline (and How I Fixed Them)

While building my Docker → Jenkins → Kubernetes CI/CD project, deployment didn’t work perfectly at first. I ran into several Kubernetes errors and had to debug them step by step. Here are 3 issues I faced and what helped me solve them 👇

🐞 1. ErrImagePull / ImagePullBackOff
Issue: Kubernetes failed to pull my Docker image.
🔎 Debugging: kubectl describe pod <pod-name>
Root cause: The image name in my deployment YAML didn’t match the image pushed to Docker Hub.
✅ Fix: Corrected the image name and redeployed the deployment.

🐞 2. Pod Running but Application Not Accessible
Issue: Pod status was Running, but I couldn’t access the application in the browser.
🔎 Debugging: kubectl get svc
Root cause: Mismatch between containerPort and targetPort.
✅ Fix: Updated the service configuration so Kubernetes could correctly route traffic to the container.

🐞 3. Service Not Accessible from the Browser
Issue: Application still not reachable externally.
🔎 Debugging: kubectl get nodes -o wide
Root cause: I was using the wrong NodePort URL.
✅ Fix: Accessed the application using <NodeIP>:<NodePort>.

💡 Biggest Lesson
Building the pipeline was straightforward. But debugging Kubernetes errors taught me far more about how things actually work under the hood. Still learning and exploring more around Kubernetes, CI/CD pipelines, and DevOps practices.

#DevOps #Kubernetes #Docker #Jenkins #CICD #LearningInPublic #DevOpsJourney
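Fix #3 above, assembling the right <NodeIP>:<NodePort> URL, can be captured in a small helper. The IP and port below are made-up example values; in practice they come from `kubectl get nodes -o wide` and `kubectl get svc`.

```shell
# Compose the external URL for a NodePort service. The two inputs would
# normally come from kubectl (node IP + the 30000-32767 NodePort); they
# are hard-coded here for illustration.
nodeport_url() {
  node_ip="$1"
  node_port="$2"
  echo "http://${node_ip}:${node_port}"
}

url=$(nodeport_url 192.168.49.2 30080)
echo "$url"
```

The common mistake (which the post describes) is using the service's internal port instead of the NodePort, or the cluster IP instead of a node IP; the browser needs the node's address with the 3xxxx port.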