#100DaysOfDevOps - Day Forty-One

Today I wrapped up the final part of the Jenkins CI pipeline I’ve been building for my app over the last few days. This was a big one for me because I didn’t want to rush through it. I wanted to understand what each stage was doing, why it was needed, and what kinds of issues could happen along the way.

The final result was a working CI pipeline that now does the following:
✅ checks out the source code
✅ runs backend lint testing
✅ builds frontend and backend Docker images
✅ scans the images for vulnerabilities
✅ pushes the images to Docker Hub

For today’s final stage, I worked on:
- securely storing Docker Hub credentials in Jenkins
- referencing those credentials inside the pipeline
- authenticating to Docker Hub without hardcoding secrets
- pushing both application images to the remote registry
- confirming the pushed images directly on Docker Hub

I also touched on post actions in Jenkins pipelines, especially around cleanup and logging out after the job completes.

One thing that really stood out to me through this whole process: a complete CI pipeline is not built by trying to solve everything at once. It is built by creating a simple framework first, then improving it stage by stage.

This pipeline took multiple attempts, many failed runs, and several fixes. But that is exactly what made the learning real. And honestly, that is one thing I’m appreciating more in this journey: sometimes the value is not just in the final success, but in understanding every error that got you there.

YouTube Video Link: https://lnkd.in/dxDcjaf2

#DevOps #100DaysOfDevOps #Jenkins #CICD #ContinuousIntegration #Docker #DockerHub #Pipeline #Automation #PlatformEngineering #CloudEngineering #LearningInPublic #TechdotSam
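As a rough sketch of the final stage described above: in a declarative Jenkinsfile, the usual pattern is `withCredentials` plus a `post` block. The credential ID (`dockerhub-creds`) and image names here are placeholders, not the author's actual values.

```groovy
pipeline {
    agent any
    environment {
        // Placeholder names: substitute your own Docker Hub repositories.
        FRONTEND_IMAGE = 'myuser/myapp-frontend:latest'
        BACKEND_IMAGE  = 'myuser/myapp-backend:latest'
    }
    stages {
        stage('Push to Docker Hub') {
            steps {
                // 'dockerhub-creds' is the ID of a Username/Password credential
                // stored in Jenkins, so no secret ever appears in the Jenkinsfile.
                withCredentials([usernamePassword(credentialsId: 'dockerhub-creds',
                                                  usernameVariable: 'DOCKER_USER',
                                                  passwordVariable: 'DOCKER_PASS')]) {
                    sh 'echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin'
                    sh 'docker push "$FRONTEND_IMAGE"'
                    sh 'docker push "$BACKEND_IMAGE"'
                }
            }
        }
    }
    post {
        // Runs whether the build passed or failed: log out so the
        // credential does not linger in the agent's Docker config.
        always {
            sh 'docker logout'
        }
    }
}
```

Using `--password-stdin` keeps the password out of the process list and build log, which is why it is preferred over passing `-p` directly.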
Why does Jenkins still power pipelines at some of the world's largest engineering teams — 16 years after its release?

Every team hits the same wall:
— Manual deployments that don't scale
— Forgotten test runs before production pushes
— Staging environments drifting from prod

That's when CI/CD stops being a buzzword and becomes an operational necessity.

Here's why Jenkins remains the go-to answer:
- Self-hosted & open-source — your infrastructure, your control
- Language-agnostic — Java, Python, Go, Bash? Jenkins doesn't care
- Controller-agent (formerly master-agent) architecture — scale builds across bare metal, Docker, or Kubernetes
- 1,800+ plugins — integrates with virtually any tool in your stack
- Zero per-minute build costs at high volume

For regulated industries, financial services, or security-conscious teams, Jenkins isn't just a preference — it's often the only choice that fits compliance requirements.

SaaS CI tools like GitHub Actions or GitLab CI are great. But when your code can't leave your network, when pipelines get complex, when you need complete control — Jenkins wins.

I just published a deep dive on what Jenkins really is, how its architecture works, and when to choose it over the modern SaaS alternatives.

Read the full article here: https://lnkd.in/gGak92Ta

This is Part 1 of a 3-part series — next up: installing Jenkins on a Linux server the right way (secure, Nginx-proxied, never exposed to the public internet).

Follow along if you're building or leveling up your DevOps stack.

#Jenkins #DevOps #CICD #Automation #SoftwareEngineering #CloudComputing #DevSecOps #OpenSource
I recently automated a small but repetitive task in my workflow — deploying a Docker container.

Earlier, every deployment meant running multiple commands manually: build the image, stop the old container, remove it, and run a new one. It worked, but it wasn’t efficient and was easy to mess up.

So I wrote a simple shell script to automate the entire process. Now the script:
- Cleans up old containers and images
- Builds the latest Docker image
- Runs the container with the required configuration

With this, deployment is reduced to just one command: ./deploy.sh

What I found interesting is how even a small automation can improve consistency and save time. It also made me realize how this is exactly the first step toward building a proper CI/CD pipeline using tools like Jenkins or GitHub Actions.

Still learning and improving every day 🚀

#DevOps #Automation #Docker #ShellScripting #LearningJourney
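The script itself isn't shown in the post, so here is a minimal sketch of what such a deploy.sh can look like. The image name, container name, and port are placeholders, and the docker steps are wrapped in a function so nothing executes until the script's final `deploy` call.

```shell
#!/usr/bin/env bash
# deploy.sh: rebuild and restart a single-container app in one step.
# IMAGE, CONTAINER, and PORT are illustrative placeholders.
set -euo pipefail

IMAGE="myapp:latest"
CONTAINER="myapp"
PORT=8080

deploy() {
  # 1. Stop and remove the previous container; '|| true' means a
  #    missing container (first deploy) doesn't abort under 'set -e'.
  docker stop "$CONTAINER" 2>/dev/null || true
  docker rm   "$CONTAINER" 2>/dev/null || true

  # 2. Rebuild the image from the Dockerfile in the current directory.
  docker build -t "$IMAGE" .

  # 3. Start a fresh container with the required configuration.
  docker run -d --name "$CONTAINER" -p "${PORT}:${PORT}" "$IMAGE"

  # 4. Remove dangling images left behind by the rebuild.
  docker image prune -f
}

# In your real deploy.sh, finish with a bare call:  deploy
```

With the final `deploy` call in place, the whole redeploy cycle collapses to `./deploy.sh`, exactly as the post describes.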
🚨 Pushing code without a CI pipeline? You're flying blind. ✈️

I've seen teams waste days chasing bugs that a 10-minute CI setup would've caught instantly. Here's why every repo needs a CI pipeline - and the 4 jobs it must have 👇

🧹 Job 1 - Code Formatting & Linting
Nobody wants a PR review full of "missing semicolon" comments. Tools like black, flake8, or eslint enforce style automatically on every push.
→ Less bikeshedding. More shipping. ✅

🔒 Job 2 - Security Scanning
A hardcoded API key. A vulnerable dependency. A known CVE. These ship to prod more often than we'd like to admit. 😬 Tools like bandit, trivy, or snyk catch them at commit time — before they become a breach.
→ Security shouldn't be a manual checklist. Automate it. 🛡️

🧪 Job 3 - Automated Testing
Unit tests. Integration tests. All running on every PR. Failing test = blocked merge. Period.
→ Refactor fearlessly. Onboard new devs without anxiety. 💪

🐳 Job 4 - Docker Build + Smoke Test
"It works on my machine" is not a deployment strategy. 😅 Build the Docker image in CI. Run a smoke test against the container.
→ "Works in Docker" becomes a verified fact, not a hope.

Why does this matter? 🤔
✅ Consistent quality across every contributor
✅ Catch regressions before they reach main
✅ Automated security - no manual gatekeeping
✅ Reproducible builds on every single commit
✅ Faster code reviews - less nit-picking
✅ The confidence to deploy often and sleep well 😴

The setup? A few hours. The payoff? Months of saved debugging, fewer incidents, and a team that actually trusts the codebase.

CI isn't a luxury for big teams. 🔁 It's hygiene for every repo - from solo side projects to production systems.

Drop a comment if your team already has CI set up! Or if you're still pushing straight to main...

#DevOps #CI #Docker #GitHub #SoftwareEngineering #Automation #BackendDevelopment #OpenSource
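One way the four jobs can be laid out in a single GitHub Actions workflow, sketched here for a hypothetical Python project (the tool commands, port, and /health endpoint are assumptions; swap them for your stack):

```yaml
# .github/workflows/ci.yml: the four jobs from the post, as a sketch.
name: CI
on: [push, pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: "3.12" }
      - run: pip install black flake8
      - run: black --check . && flake8 .

  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install bandit && bandit -r .

  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install -r requirements.txt pytest && pytest

  docker:
    runs-on: ubuntu-latest
    needs: [lint, security, test]   # only build if all other jobs pass
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t myapp:ci .
      - run: |
          # Smoke test: start the container, then hit an assumed health endpoint.
          docker run -d --name smoke -p 8000:8000 myapp:ci
          sleep 5
          curl --fail http://localhost:8000/health
```

Gating the Docker job on the other three with `needs:` keeps the expensive build from running when cheap checks have already failed.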
I learned something new today!!

This diagram helped me understand how modern applications actually move from code → production using tools like Jenkins and Docker. Here’s the flow in simple terms:

▪️ 1. Pull Code: Jenkins fetches code from GitHub
▪️ 2. Verify: basic checks to ensure everything is correct
▪️ 3. Build Images: Docker builds application images
▪️ 4. Push to DockerHub: images are stored in a central registry
▪️ 5. Deploy: containers are started using Docker Compose
▪️ 6. Cleanup: unused images are removed to save space

What I realized: CI/CD is not just automation — it’s about making deployments fast, consistent, and reliable. This is where development meets real-world production systems.

If you're learning backend or full stack, understanding pipelines like this is a game changer.

What part of CI/CD do you find most confusing? 🤔

#DevOps #Jenkins #Docker #CICD #BackendDevelopment #FullStack #SoftwareEngineering #CodingJourney
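The six-step flow above maps almost one-to-one onto a declarative Jenkinsfile skeleton. This is a sketch under assumptions: the repo URL is a placeholder and the `Verify` commands stand in for whatever checks the project actually runs.

```groovy
pipeline {
    agent any
    stages {
        // 1. Pull Code: Jenkins fetches the repo (URL is a placeholder).
        stage('Pull Code') {
            steps { git url: 'https://github.com/example/app.git', branch: 'main' }
        }
        // 2. Verify: placeholder basic checks.
        stage('Verify') {
            steps { sh 'npm ci && npm run lint' }
        }
        // 3. Build Images: Docker builds the application images.
        stage('Build Images') {
            steps { sh 'docker compose build' }
        }
        // 4. Push to DockerHub: store images in the central registry.
        stage('Push to DockerHub') {
            steps { sh 'docker compose push' }
        }
        // 5. Deploy: start containers with Docker Compose.
        stage('Deploy') {
            steps { sh 'docker compose up -d' }
        }
    }
    post {
        // 6. Cleanup: remove unused images to save space, pass or fail.
        always { sh 'docker image prune -f' }
    }
}
```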
🔄 Most developers hear "CI/CD" in every job description. But do you actually know what happens behind the scenes?

Here's how a CI/CD pipeline works — simplified:

𝗖𝗼𝗻𝘁𝗶𝗻𝘂𝗼𝘂𝘀 𝗜𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗼𝗻 (𝗖𝗜):
→ Developer pushes code
→ Automated build triggers
→ Unit tests run
→ Code quality checks
→ Build artifact created

𝗖𝗼𝗻𝘁𝗶𝗻𝘂𝗼𝘂𝘀 𝗗𝗲𝗹𝗶𝘃𝗲𝗿𝘆 (𝗖𝗗):
→ Deploy to staging
→ Integration tests run
→ Manual approval gate
→ Deploy to production
→ Monitor & roll back if needed

𝗖𝗼𝗻𝘁𝗶𝗻𝘂𝗼𝘂𝘀 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁:
→ Same as CD but with NO manual gate
→ Every passing commit goes live

Popular tools: Jenkins, GitHub Actions, GitLab CI, CircleCI, AWS CodePipeline

The goal? Ship faster, catch bugs earlier, and automate the boring stuff.

♻️ Repost if this helped someone.
💬 What CI/CD tool does your team use?

#CICD #DevOps #SoftwareEngineering #GitHub #Automation #Programming #WebDevelopment
Jenkins vs. ArgoCD: Why the "Push" model is a bottleneck

In the early days of Kubernetes, we simply adapted our old Jenkins habits. We treated the cluster as just another server to PUSH code to. But as the industry has matured, we’ve realized that for secure, scalable, and self-healing infrastructure, the PULL model (GitOps) is the clear winner.

The Traditional "Push" Model (Jenkins CI/CD):
🔹 How it works: Jenkins builds the image and then fires a kubectl apply command at the cluster.
🔹 The challenge: Jenkins needs administrative credentials for your cluster. And if someone manually changes a setting in K8s, Jenkins has no idea. This is "configuration drift."
🔹 Security: storing cluster keys in your CI tool is a major security risk.

The Modern "Pull" Model (ArgoCD + GitOps):
🔹 How it works: Jenkins' focus shifts entirely to CI (testing and building). ArgoCD takes over CD: it constantly watches your Git repo and PULLS the configuration into the cluster.
🔹 The benefit: if the cluster state differs from Git, ArgoCD automatically "heals" it. No cluster credentials leave the cluster, making the setup far more secure.
🔹 Visibility: you get a real-time visual map of your application health directly in the ArgoCD UI.

Separating CI (Jenkins) from CD (ArgoCD) isn't just about using new tools; it’s about moving to a declarative, auditable, and secure standard that many enterprise customers now demand.

#Jenkins #ArgoCD #GitOps #Kubernetes #DevOps #CloudNative #Automation #SoftwareEngineering
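Concretely, the pull model is configured with an ArgoCD `Application` resource. A minimal sketch (the app name, repo URL, and paths are placeholders):

```yaml
# ArgoCD watches the Git repo below and pulls changes into the cluster.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/myapp-config.git
    targetRevision: main
    path: k8s/            # directory of manifests (or a Helm chart)
  destination:
    server: https://kubernetes.default.svc   # the cluster ArgoCD runs in
    namespace: myapp
  syncPolicy:
    automated:
      prune: true         # delete resources that were removed from Git
      selfHeal: true      # revert manual changes; Git stays the source of truth
```

The `selfHeal: true` line is exactly the anti-drift behavior described above: ArgoCD continuously reconciles the live cluster state back to what Git declares.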
For a long time, this sat quietly on my bucket list… “Run my automation tests in a CI pipeline.”

Not days. Not weeks. More than a year. And this week… I finally did it.

What made this journey special wasn’t just the success — it was the failures along the way. Each error message felt like a lesson:
- “This is not how GitLab works.”
- “Your YAML isn’t wrong… but it’s not right either.”

And slowly, things began to make sense.

What I truly learned:
- YAML is not just syntax — it’s execution logic
- CI pipelines are not just about running tests — they are about building environments
- Every small mistake (paths, shell type, cache, artifacts) teaches something foundational
- GitLab CI behaves differently than we expect — and that’s where the real learning happens

There’s a quiet satisfaction in watching your tests run in a pipeline… not on your machine, but in a system you built step by step. It feels like moving from “I run tests” to “I built a system that runs tests.”

Grateful for the learning, the mistakes, and the persistence.

#QA #Automation #Cypress #GitLabCI #LearningJourney #DevOps #node #SoftwareTesting
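A typical shape for a Cypress run in GitLab CI, covering the pain points mentioned above (cache, artifacts, paths). This is a sketch, not the author's file: the Docker image, npm scripts, and directory paths are all assumptions for a standard Node project.

```yaml
# .gitlab-ci.yml sketch: Cypress end-to-end tests in a single stage.
stages:
  - test

cypress:
  stage: test
  image: cypress/browsers          # assumed image with Node + browsers preinstalled
  cache:
    key:
      files:
        - package-lock.json        # new cache key whenever the lockfile changes
    paths:
      - .npm/                      # reuse npm downloads between pipeline runs
  script:
    - npm ci --cache .npm --prefer-offline
    - npx cypress run
  artifacts:
    when: always                   # keep evidence even when tests fail
    paths:
      - cypress/screenshots/
      - cypress/videos/
```

The `artifacts: when: always` detail is easy to miss: without it, a failing run discards exactly the screenshots and videos you need to debug it.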
Most CI/CD pipelines fail for the same reason — no clear stages.

After 4 years in DevOps, here's the multi-stage GitHub Actions pipeline I recommend to every engineer on my team:

━━━━━━━━━━━━━━━━━━━
Stage 1 → Test
Stage 2 → Build & tag Docker image
Stage 3 → Deploy to Staging
Stage 4 → Deploy to Production (with manual approval)
━━━━━━━━━━━━━━━━━━━

3 things that make this bulletproof:
1️⃣ Use needs: to chain jobs — if tests fail, nothing else runs
2️⃣ Tag images with github.sha — every build is fully traceable
3️⃣ Use GitHub Environments for prod — this enforces human approval before anything goes live

You don't need a complex tool to do this. A single YAML file in .github/workflows/ is enough to build a production-grade pipeline.

Save this post for when you set yours up.

What does your CI/CD stack look like? Drop it in the comments 👇

#DevOps #GitHubActions #CICD #Docker #Kubernetes #CloudNative #DevOpsEngineer #SoftwareEngineering
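The four stages and three techniques can be sketched in one workflow file. The build and deploy commands are placeholders, and the `production` environment must already exist in the repo settings with required reviewers for the approval gate to kick in:

```yaml
# .github/workflows/pipeline.yml: sketch of the four-stage layout.
name: pipeline
on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test                                   # placeholder test command

  build:
    needs: test                                          # skipped entirely if tests fail
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t myapp:${{ github.sha }} .   # traceable, per-commit tag

  deploy-staging:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploy myapp:${{ github.sha }} to staging"     # placeholder

  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    environment: production        # pauses here until a reviewer approves
    steps:
      - run: echo "deploy myapp:${{ github.sha }} to production"  # placeholder
```

The `needs:` chain gives the fail-fast behavior, and `environment: production` is what turns the last job into a manual gate.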
This week I ran into a classic DevOps issue while working with Jenkins and Docker.

I had a Jenkins pipeline that builds and pushes a Docker image. It was working perfectly before — same code, same Dockerfile, same pipeline. Then suddenly the build failed with:

"npm ERR! Cannot find module 'promise-retry'"

At first, it didn’t make sense. I hadn’t changed anything in the code or the Dockerfile. After digging deeper, I realized the real issue. Even though my Dockerfile didn’t change, this line was the culprit:

FROM node:22-alpine3.22

This is a mutable tag, which means:
- Docker pulled a newer version of the base image
- That image had updated npm behavior
- My step npm install -g npm@latest broke due to an incompatibility

💡 Key Lesson: Docker builds are NOT deterministic unless you pin versions.

✅ The fix I applied:
- Removed npm install -g npm@latest
- Switched to a stable base image (node:20-alpine)
- (Optional) Pinned npm to a specific version

🚀 Takeaways:
- Avoid using latest (for Node, npm, or anything else)
- Always pin versions in production systems
- CI/CD failures are often caused by environment changes, not code changes
- Jenkins may expose issues that don’t appear locally due to caching

This was a great reminder that in DevOps: 👉 “If it’s not pinned, it’s not predictable.”

#DevOps #Docker #Jenkins #CI_CD #Learning #Debugging
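One caveat worth adding: `node:20-alpine` is still a mutable tag, so it can also move under you; the only fully immutable reference is a digest pin. A sketch of what full pinning can look like (the digest below is a dummy placeholder, and the npm version and file names are illustrative):

```dockerfile
# Pin the base image by digest; a tag alone can still be re-pointed.
# The digest here is a PLACEHOLDER: look up the real one with
#   docker buildx imagetools inspect node:20-alpine
FROM node:20-alpine@sha256:0000000000000000000000000000000000000000000000000000000000000000

WORKDIR /app

# Pin npm to an exact version instead of npm@latest (version is illustrative).
RUN npm install -g npm@10.8.2

# package-lock.json + npm ci pin every application dependency too.
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

COPY . .
CMD ["node", "server.js"]
```

With a digest pin, the base image Jenkins pulls is byte-for-byte the same one you tested locally, which removes this whole class of "nothing changed but it broke" failures.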
Day 39 of #90DaysOfDevOps — Today I didn't write a single pipeline. Instead, I spent the day understanding WHY CI/CD exists before touching any tooling.

Here's what clicked for me today:

🔴 The Problem
Imagine 5 developers all manually deploying to production. Merge conflicts, config mismatches, "it works on my machine" — a team can safely deploy maybe 1-2 times a day before mistakes creep in. CI/CD teams deploy hundreds of times a day.

🟡 CI vs CD vs CD
• Continuous Integration — push code frequently, automatically build and test it, catch breaks in minutes, not days
• Continuous Delivery — the pipeline is automated, but a human approves the final production release
• Continuous Deployment — zero human involvement; code goes live automatically if all tests pass

The difference between Delivery and Deployment? One human approval gate.

🟢 Real World
I opened FastAPI's GitHub repo and read their test.yml workflow. Every pull request automatically runs tests across Windows, macOS, and Ubuntu on Python 3.10 through 3.14. If any test fails, the PR cannot merge.

That's not a pipeline failing. That's CI/CD doing exactly its job.

Biggest lesson today: CI/CD is a practice, not a tool. GitHub Actions, Jenkins, GitLab CI — these are just tools that implement the practice.

Day 40 tomorrow — time to actually build a pipeline.

#90DaysOfDevOps #DevOpsKaJosh #TrainWithShubham #CICD #DevOps #CloudComputing
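The OS-by-Python matrix described in the "Real World" section is a standard GitHub Actions pattern. This is a minimal illustration of the shape, not FastAPI's actual test.yml; the install and test commands are placeholders.

```yaml
# Sketch of a test matrix that runs on every pull request.
name: test
on: [pull_request]

jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        python-version: ["3.10", "3.11", "3.12", "3.13", "3.14"]
    runs-on: ${{ matrix.os }}       # one job per OS x Python combination
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -e ".[dev]"   # placeholder install step
      - run: pytest
```

Marking this job as a required status check in branch protection is what makes a single red matrix cell block the merge.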