Understanding CI/CD Before Building a Pipeline

Day 39 of #90DaysOfDevOps — Today I didn't write a single pipeline. Instead, I spent the day understanding WHY CI/CD exists before touching any tooling. Here's what clicked for me today:

🔴 The Problem
Imagine 5 developers all manually deploying to production. Merge conflicts, config mismatches, "it works on my machine" — a team can safely deploy maybe 1-2 times a day before mistakes creep in. Teams practicing CI/CD deploy hundreds of times a day.

🟡 CI vs CD vs CD
• Continuous Integration — push code frequently, automatically build and test it, catch breaks in minutes, not days
• Continuous Delivery — the pipeline is automated, but a human approves the final production release
• Continuous Deployment — zero human involvement; code goes live automatically if all tests pass

The difference between Delivery and Deployment? One human approval gate.

🟢 Real World
I opened FastAPI's GitHub repo and read their test.yml workflow. Every pull request automatically runs tests across Windows, macOS, and Ubuntu on Python 3.10 through 3.14. If any test fails, the PR cannot merge. That's not a pipeline failing. That's CI/CD doing exactly its job.

Biggest lesson today: CI/CD is a practice, not a tool. GitHub Actions, Jenkins, GitLab CI — these are just tools that implement the practice.

Day 40 tomorrow — time to actually build a pipeline.

#90DaysOfDevOps #DevOpsKaJosh #TrainWithShubham #CICD #DevOps #CloudComputing
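For reference, a cross-OS, multi-Python test matrix like the one described looks roughly like this in GitHub Actions. This is a hedged sketch, not FastAPI's actual test.yml; the install and test commands are illustrative:

```yaml
# Sketch of a matrix workflow: every PR runs tests on 3 OSes x 5 Python versions.
# Job names and commands are illustrative, not FastAPI's real config.
name: test
on: [pull_request]

jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
        python-version: ["3.10", "3.11", "3.12", "3.13", "3.14"]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -e ".[test]"   # placeholder install step
      - run: pytest
```

With branch protection requiring this workflow, a single red cell in the 15-job matrix blocks the merge, which is exactly the behaviour described above.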
More Relevant Posts
🚀 We cut our deployment time by 60%. Here's exactly how we did it.

6 months ago, our deploys were painful. Manual steps. Flaky pipelines. Engineers burned out waiting. Then we rebuilt everything with Jenkins + ArgoCD — and the results were staggering.

Here's what changed:

#Before:
❌ ~45 min average deploy time
❌ Human errors on every 3rd release
❌ Zero visibility into what's running in prod
❌ Devs dreading deploy Fridays

#After:
✅ ~18 min average deploy time (-60%)
✅ GitOps-driven — Git is the single source of truth
✅ ArgoCD syncs automatically, drift detected instantly
✅ Devs ship with confidence, not anxiety

The 3 things that made the difference:

1️⃣ Jenkins for CI, ArgoCD for CD — clear separation of concerns
Jenkins builds, tests, and pushes the image. ArgoCD owns delivery. No blurring of responsibilities.

2️⃣ GitOps = auditability on autopilot
Every change to prod is a Git commit. Who changed what, when, and why — always visible.

3️⃣ Auto-sync + health checks killed manual approvals
ArgoCD monitors Kubernetes state continuously. Drift? Caught and corrected automatically.

The real win wasn't just speed. It was trust — trust that what's in Git is what's in prod. Trust that the team can deploy daily without fear.

If your team is still doing manual deploys or struggling with slow pipelines, this stack is worth exploring. Happy to share our Jenkins pipeline template or ArgoCD app configs — just drop a "Share" in the comments. 👇

#DevOps #CI #CD #Jenkins #ArgoCD #GitOps #Kubernetes #SoftwareEngineering #CloudNative #DeploymentAutomation #SRE #PlatformEngineering
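For anyone curious what the ArgoCD side of this setup looks like: a minimal Application manifest with auto-sync is sketched below. The repo URL, app name, and namespaces are placeholders, not the poster's real config:

```yaml
# Hedged sketch of an ArgoCD Application with auto-sync and self-heal.
# All names and the repo URL are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy-configs   # Git = source of truth
    targetRevision: main
    path: apps/my-app
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```

`selfHeal: true` is what delivers the "drift detected instantly" behaviour: any out-of-band kubectl change gets reconciled back to what Git declares.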
I learned something new today!!

This diagram helped me understand how modern applications actually move from code → production using tools like Jenkins and Docker.

Here's the flow in simple terms:
▪️ 1. Pull Code: Jenkins fetches code from GitHub
▪️ 2. Verify: basic checks to ensure everything is correct
▪️ 3. Build Images: Docker builds application images
▪️ 4. Push to DockerHub: images are stored in a central registry
▪️ 5. Deploy: containers are started using Docker Compose
▪️ 6. Cleanup: unused images are removed to save space

What I realized: CI/CD is not just automation — it's about making deployments fast, consistent, and reliable. This is where development meets real-world production systems.

If you're learning backend or full stack, understanding pipelines like this is a game changer.

What part of CI/CD do you find most confusing? 🤔

#DevOps #Jenkins #Docker #CICD #BackendDevelopment #FullStack #SoftwareEngineering #CodingJourney
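The six steps above map almost one-to-one onto a declarative Jenkinsfile. A minimal sketch, with a placeholder repo, image name, and credential ID (not a production template):

```groovy
// Hedged Jenkinsfile sketch of the flow above. Repo URL, image name,
// and the 'dockerhub' credential ID are placeholders.
pipeline {
    agent any
    stages {
        stage('Pull Code')   { steps { git url: 'https://github.com/example/app.git', branch: 'main' } }
        stage('Verify')      { steps { sh 'npm test' } }          // illustrative check
        stage('Build Image') { steps { sh 'docker build -t example/app:${BUILD_NUMBER} .' } }
        stage('Push') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'dockerhub',
                        usernameVariable: 'USER', passwordVariable: 'PASS')]) {
                    sh 'echo "$PASS" | docker login -u "$USER" --password-stdin'
                    sh 'docker push example/app:${BUILD_NUMBER}'
                }
            }
        }
        stage('Deploy')  { steps { sh 'docker compose up -d' } }
        stage('Cleanup') { steps { sh 'docker image prune -f' } }
    }
}
```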
🚀 Just finished the Docker course on Boot.dev! 🚀

I'm excited to share that I've learned the fundamentals of Docker—a key technology in modern DevOps and CI/CD pipelines. Docker makes it simple and fast to deploy new versions of code by packaging applications and their dependencies into preconfigured environments. This not only speeds up deployment, but also reduces overhead and eliminates the "it works on my machine" problem.

Docker is a core part of the CI/CD (Continuous Integration/Continuous Deployment) process, enabling teams to deliver software quickly and reliably. Here's a high-level overview of a typical CI/CD deployment process:

The Deployment Process:
1. The developer (you) writes some new code
2. The developer commits the code to Git
3. The developer pushes a new branch to GitHub
4. The developer opens a pull request to the main branch
5. A teammate reviews the PR and approves it (if it looks good)
6. The developer merges the pull request
7. Upon merging, an automated script, perhaps a GitHub Action, is started
8. The script builds the code (if it's a compiled language)
9. The script builds a new Docker image with the latest program
10. The script pushes the new image to Docker Hub
11. The server that runs the containers, perhaps a Kubernetes cluster, is told there is a new version
12. The k8s cluster pulls down the latest image
13. The k8s cluster shuts down old containers as it spins up new containers of the latest image

This process ensures that new features and fixes can be delivered to users quickly, safely, and consistently.

image credit: Boot.dev Docker course

#docker #cicd #devops #softwaredevelopment #bootdev #learning
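Steps 7 through 10 are where the automation kicks in. A hedged sketch of that slice as a GitHub Actions workflow; the image name and secret names are placeholders:

```yaml
# Sketch of steps 7-10: on merge to main, build a Docker image and push it
# to Docker Hub. Image tag and secret names are illustrative.
name: deploy
on:
  push:
    branches: [main]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: example/app:${{ github.sha }}   # one immutable tag per commit
```

Steps 11–13 would then be handled by whatever watches the registry or the manifests, e.g. a Kubernetes rolling update picking up the new tag.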
This week I ran into a classic DevOps issue while working with Jenkins and Docker.

I had a Jenkins pipeline that builds and pushes a Docker image. It was working perfectly before — same code, same Dockerfile, same pipeline. Then suddenly the build failed.

The error:
"npm ERR! Cannot find module 'promise-retry'"

At first, it didn't make sense. I hadn't changed anything in the code or the Dockerfile. After digging deeper, I realized the real issue. Even though my Dockerfile didn't change, this line was the culprit:

FROM node:22-alpine3.22

This is a mutable tag, which means:
• Docker pulled a newer version of the base image
• That image had updated npm behavior
• My step npm install -g npm@latest broke due to an incompatibility

💡 Key Lesson: Docker builds are NOT deterministic unless you pin versions.

✅ Fix I applied:
• Removed npm install -g npm@latest
• Switched to a stable base image (node:20-alpine)
• (Optional) Pinned npm to a specific version

🚀 Takeaways:
• Avoid using latest (for Node, npm, or anything)
• Always pin versions in production systems
• CI/CD failures are often caused by environment changes, not code changes
• Jenkins may expose issues that don't appear locally due to caching

This was a great reminder that in DevOps:
👉 "If it's not pinned, it's not predictable."

#DevOps #Docker #Jenkins #CI_CD #Learning #Debugging
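For anyone wanting the concrete shape of the fix: pinning can go one step further than a version tag by pinning to an image digest, which can never change underneath you. A sketch (the digest below is a placeholder for the real one you would take from docker inspect after pulling the image you tested; the npm pin is illustrative):

```dockerfile
# Pin the base image by digest, not just by tag, so rebuilds are repeatable.
# <digest> is a placeholder; substitute the RepoDigest of your tested image.
FROM node:20-alpine@sha256:<digest>

# If a specific npm is needed, pin a major version instead of @latest
RUN npm install -g npm@10

COPY . /app
WORKDIR /app
RUN npm ci   # npm ci installs exactly what package-lock.json records
```

Note that node:20-alpine is still a mutable tag on its own; only the digest (or a lockfile, for app dependencies) makes the build fully deterministic.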
CI/CD is just a water pipeline. Let me prove it.

Imagine this:
Water Source -> Filter -> Quality Check -> Storage Tank -> Distribution -> House

Now map this to software:
Code -> Lint -> Tests -> Build -> Docker Image -> Deployment

If the water is dirty, it shouldn't reach the house. If the tests fail, the code shouldn't reach production. That's what CI/CD really is: a pipeline that ensures only clean, tested, build-ready code reaches production.

After exploring it for a while, I wrote a blog explaining the concepts in a simple way:
• What really happens when a workflow runs
• The difference between a Workflow, a Job, a Step, and a Runner
• Why each job runs on a separate machine
• Artifacts vs Cache
• How secrets are injected at runtime (and why .env should never be in Docker images)
• Why concurrency matters in deployment
• How data is passed between steps and between jobs

The link to the blog post is in the comments below 👇 👇, do check it out.

If you're learning backend or DevOps, try thinking about CI/CD as a pipeline system; it makes everything much easier to understand.

I'm still learning, so feedback is welcome.

#githubactions #cicd #docker #backend #devops
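One of those bullets, passing data between steps and between jobs, fits in a few lines. A hedged GitHub Actions sketch with illustrative names:

```yaml
# Sketch: a step output becomes a job output, which a later job consumes.
# Job and output names are illustrative.
jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      version: ${{ steps.meta.outputs.version }}      # promote step -> job output
    steps:
      - id: meta
        run: echo "version=1.2.3" >> "$GITHUB_OUTPUT" # set a step output

  deploy:
    needs: build                                      # jobs run on separate machines,
    runs-on: ubuntu-latest                            # so data flows via outputs
    steps:
      - run: echo "Deploying version ${{ needs.build.outputs.version }}"
```

Because each job runs on a fresh runner, nothing on disk survives between jobs; outputs (for small values) and artifacts (for files) are the supported channels.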
Day 1 — "Set up Jenkins and write a Jenkinsfile."

I opened Google. Found a 2019 tutorial. UI didn't match. 3 hours… wasted. Nobody explained the basics. So here's the truth 👇

👉 Jenkins doesn't build or deploy anything
👉 You define everything in a Jenkinsfile
👉 Jenkins just executes it

That's it. And the rule that nobody follows until they break production:
⚠ NEVER click "Update All."

Biggest mistake I made (and most beginners do): running builds on the controller. It works… until your system crashes at the worst time.

👉 Controller = brain (never run builds on it)
👉 Agents = actual workers

Real DevOps moment: Friday. Release day. Builds stuck. Agents offline. Everyone watching. What matters then? Not theory.

👉 Disk check
👉 Memory check
👉 Process check

Here's the exact checklist I now follow:
→ Manage Jenkins → Nodes → find the red circles → read "Disconnect cause"
→ SSH into the agent machine
→ df -h → check disk. 100% full is the most common cause. Ever.
→ free -h → check memory. A Java OutOfMemoryError is #2.
→ ps aux | grep jenkins → is the Java process even running?
→ Restart the agent process

Prevention that actually works:
• Alert at 80% disk — not 100%
• Add cleanWs() to every Jenkinsfile
• Back up plugins every Sunday night

The person who knows this checklist cold never panics in a war room. Be that person. That's real experience.

Truth is — I didn't understand everything at first. But I did one thing right:
✔ I built it myself
✔ I broke it
✔ I fixed it

If you're starting DevOps, run this:
docker run -p 8080:8080 jenkins/jenkins:lts

Write your first Jenkinsfile. Break it. Fix it. Repeat. That's worth more than any certificate.

If this helped you: save it. You'll need it later.

#DevOps #Jenkins #CICD #CareerSwitch #LearnDevOps #Automation
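The df -h step in that checklist can be scripted so the 80% alert runs automatically instead of in a war room. A minimal sketch; the sample df output and the threshold are illustrative:

```shell
#!/bin/sh
# Warn for any mount in df-style output that exceeds a usage threshold.
# Reads the text on stdin so it can be tested without touching a real disk.
parse_over() {
  threshold="$1"
  awk -v t="$threshold" 'NR > 1 { gsub("%", "", $5); if ($5 + 0 >= t) print $6, $5 "%" }'
}

# Illustrative sample; on a real agent you would pipe `df -h` in instead.
sample='Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        50G   48G  2.0G  96% /
tmpfs           2.0G     0  2.0G   0% /dev/shm'

echo "$sample" | parse_over 80   # prints: / 96%
```

Wire this into a cron job or a pre-build pipeline step and the "100% full on Friday" scenario becomes an alert on Tuesday instead.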
More code will be written in the next 5 years than in all of history. That raises a real challenge for DevOps/SRE. Tooling was built for humans, not always-on agents generating code at scale. We may be heading toward a bottleneck in how we manage and version that explosion of code and state. Interesting to see ideas like Artifacts emerging as teams start rethinking this. https://lnkd.in/gc39tYkz #DevOps #SRE #SiteReliability #Engineer #Git
I made a manual change directly in my cluster to test something quickly. Flux reverted it within 60 seconds.

At first I was annoyed. Then I realised that was exactly the point.

Drift detection is a Flux feature that watches for any difference between what is in Git and what is actually running in the cluster. The moment it finds one, it reconciles back to Git automatically. That means if anyone, including me, runs a manual kubectl edit or kubectl patch directly on a resource that Flux manages, Flux will undo it.

Here is why that is a feature, not a bug.

In a real team environment someone will always make a quick manual change to fix something urgently. Without drift detection that change lives in the cluster but not in Git. Over time those undocumented changes accumulate. Nobody knows what is actually running anymore, or why it differs from the repo.

With drift detection, Git is always the truth. Always. No exceptions.

The discipline it enforces is uncomfortable at first. You cannot just tweak things directly anymore. Every change has to go through Git. But that discomfort is the whole point. It forces good habits and makes your infrastructure trustworthy.

Have you ever had an environment drift so far from its config that nobody knew what was actually running? 👇

Follow me. I am documenting everything I build and learn in my home lab.

#GitOps #Kubernetes #DevOps #FluxCD #CloudNative
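For context, the reconcile interval that produces that "reverted within 60 seconds" behaviour lives on the Flux Kustomization object. A hedged sketch with placeholder names and paths:

```yaml
# Sketch of a Flux Kustomization: the interval bounds how long manual
# drift can survive before reconciliation. Names and paths are placeholders.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 1m          # reconcile (and revert drift) at most every minute
  prune: true           # remove resources deleted from Git
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./clusters/home-lab
```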
♻️ Jenkins Lessons & Best Practices — What Really Matters in CI/CD

After working with Jenkins across both VM-based and Docker environments, I've learned that success with CI/CD isn't just about "making pipelines run" — it's about making them reliable, secure, and scalable.

Here are some key lessons and best practices I follow:

➡️ Lessons from Real Usage
💡 Jenkins works best when treated as code, not just a UI tool
💡 Environment consistency is critical (Docker agents help a lot)
💡 Debugging pipelines teaches you more than writing them
💡 Simplicity > over-engineering in pipeline design

➡️ Best Practices I Rely On
✔️ Use a Jenkinsfile (Pipeline as Code) instead of freestyle jobs
✔️ Store secrets in the Jenkins Credentials Manager (never hardcode them)
✔️ Use Docker agents for consistent and reproducible builds
✔️ Keep plugins minimal and regularly updated
✔️ Clean workspaces to prevent disk space issues
✔️ Implement proper access control & security hardening
✔️ Use structured logging and monitor builds proactively

➡️ Setup Insights
✔️ Ubuntu install → stable, more control (/var/lib/jenkins)
✔️ Docker setup → fast, portable (/var/jenkins_home)
Both have their place depending on the use case.

Jenkins has been a solid foundation for understanding how modern CI/CD systems should be designed and maintained at scale.

#Jenkins #DevOps #CI_CD #Automation #Docker #BestPractices #CloudComputing
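Several of these practices (Docker agents, the Credentials Manager, workspace cleanup) show up together in a short Jenkinsfile. A minimal sketch with placeholder names; the docker agent needs the Docker Pipeline plugin and cleanWs() needs the Workspace Cleanup plugin:

```groovy
// Hedged sketch combining the practices above. The image, credential ID,
// and build commands are placeholders.
pipeline {
    agent {
        docker { image 'node:20-alpine' }        // reproducible build environment
    }
    environment {
        API_TOKEN = credentials('api-token-id')  // from Credentials Manager, never hardcoded
    }
    stages {
        stage('Build') { steps { sh 'npm ci && npm test' } }
    }
    post {
        always { cleanWs() }                     // prevent disk-space buildup on agents
    }
}
```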
GitOps changed how I think about deployments forever.

Two years ago, our team was manually applying Kubernetes manifests, praying nothing drifted in production. Sound familiar?

Then we adopted GitOps — Git as the single source of truth for infrastructure state. The result:
✅ Deployments became auditable (every change = a PR)
✅ Rollbacks took 30 seconds, not 30 minutes
✅ Drift detection caught misconfigurations before they became incidents

Here's the mental model that clicked for me:
Traditional CI/CD = push-based. Your pipeline pushes changes to the cluster.
GitOps = pull-based. An agent (ArgoCD, Flux) watches Git and pulls changes to match the desired state.

That inversion is everything. The cluster always converges toward what's in Git. No more "it works in staging but not prod" mysteries.

Getting started checklist:
1. Store ALL manifests in Git (Helm charts, Kustomize overlays)
2. Set up ArgoCD or Flux in your cluster
3. Lock down direct kubectl apply access
4. Add branch protection + PR reviews for infra changes

The learning curve is real, but the operational calm on the other side is worth it.

What's your GitOps stack? Drop it below 👇

#GitOps #ArgoCD #Flux #Kubernetes #DevOps #CI_CD
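The 30-second rollback follows directly from the pull-based model: rolling back is just a Git revert, and the agent converges the cluster to match. A self-contained demo in a throwaway repo (file contents and names are made up; no cluster involved):

```shell
#!/bin/sh
# In GitOps, a rollback is a Git operation: revert the commit, and the
# agent (ArgoCD/Flux) converges the cluster to the reverted state.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo "image: app:v1" > deploy.yaml
git add deploy.yaml && git commit -qm "deploy v1"

echo "image: app:v2" > deploy.yaml
git commit -qam "deploy v2"

# Bad release? No kubectl needed; revert the commit and let the agent sync.
git revert --no-edit HEAD
cat deploy.yaml   # back to: image: app:v1
```

The same property is what makes every deployment auditable: the rollback itself is a commit, with an author and a timestamp.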