I broke my own server… just by deploying code 😅

That's when I realized — I don't actually understand deployment.

Before this, my "CI/CD pipeline" was: ssh → git pull → npm install → pm2 restart …and pray nothing breaks 🙏

Works fine… until it doesn't. One wrong env var, one missed step, one failed install — and production is down.

So I decided to fix this properly. Started from the basics:
📦 Used "scp" to push builds manually
🔑 Used "ssh" to run commands on the VPS
⚙️ Wrote small bash scripts to automate steps

Felt powerful… but still risky. Because I was the pipeline.

Then I moved to GitHub Actions. Now every push to main:
• builds the project
• securely connects to the VPS
• deploys the code
• restarts services

No manual login. No "did I run that command?" No panic. (A workflow sketch follows below.)

But here's what actually changed my thinking:
"CI/CD is not about saving time. It's about removing human mistakes from production."

Also learned the hard way:
• Always test scripts on staging first
• Make deployments idempotent
• Never trust ".env" blindly
• Logs > assumptions

Now deployment feels less like a risk… and more like a system.

Still learning DevOps. But at least now — pushing code doesn't feel like gambling 🎯

What's the worst thing you've broken while deploying? 😄

#CICD #DevOps #GitHubActions #VPS #BackendEngineering #Automation #DeveloperLife
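For context, a workflow of the kind described could look roughly like this. It is a minimal sketch, not the author's actual file; the secret name, host, paths, and pm2 app name are all placeholders:

```yaml
# Sketch of a push-to-main deploy workflow. DEPLOY_KEY, the host,
# /srv/app, and the pm2 app name are illustrative placeholders.
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: |
          npm ci
          npm run build
      - name: Deploy over SSH
        env:
          SSH_KEY: ${{ secrets.DEPLOY_KEY }}
        run: |
          # Write the private key with safe permissions
          install -m 600 /dev/null deploy_key
          printf '%s\n' "$SSH_KEY" > deploy_key
          # Run the deploy steps on the VPS in one idempotent shot
          ssh -i deploy_key -o StrictHostKeyChecking=accept-new user@vps.example.com \
            'cd /srv/app && git pull --ff-only && npm ci --omit=dev && pm2 restart app'
```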
Deploying Code Gone Wrong: A DevOps Lesson Learned
More Relevant Posts
-
The pipeline was green. Deployment said successful. Production was running the wrong code for 3 hours. No alerts. No red dashboards. Nothing.

I've been burned by silent CI/CD failures more times than I'd like to admit. The dangerous ones aren't the crashes — they're the failures that look like success. Here are the 3 that hurt the most:

1. Docker cached the wrong image
Build finished in 12 seconds. Felt fast. Turned out Docker served a previously cached layer, and yesterday's code went to production. The build log looked completely normal.

2. Tests reported zero failures — because they never ran
The test framework found no matching files, ran zero tests, and exited with code 0. Green badge. A real bug reached production that the tests should have caught.

3. Deployment succeeded. Old code still running.
Kubernetes rollout reported complete, but the new image was never actually pulled — the node had the old one cached with imagePullPolicy: IfNotPresent. "Deployment succeeded" and "new code is live" are not the same thing.

The root cause in every case was the same: the pipeline verified that steps executed — not that outcomes were correct.

The fixes aren't complex:
→ Embed the Git SHA in every image and verify it post-deploy (see the sketch below)
→ Fail the pipeline if zero tests ran
→ Never use :latest in Kubernetes — always deploy by image SHA

I wrote the full breakdown with code examples for GitHub Actions, Jenkins, and Kubernetes on Dev.to. Link in the comments 👇

Have you hit a silent pipeline failure? Drop it below — genuinely curious what broke.

#DevOps #CICD #Docker #Kubernetes #Jenkins #SRE #PlatformEngineering #Cloud
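The first fix can be a single post-deploy step. A minimal sketch, assuming the app exposes a hypothetical /version endpoint that returns the commit SHA it was built from (the URL and endpoint are illustrative, not from the original post):

```yaml
# Post-deploy verification (GitHub Actions step). example.com/version is
# a placeholder; any way of reading the running build's SHA works here.
- name: Verify deployed version
  run: |
    DEPLOYED=$(curl -fsS https://example.com/version)
    if [ "$DEPLOYED" != "$GITHUB_SHA" ]; then
      echo "Silent failure: production serves $DEPLOYED, expected $GITHUB_SHA"
      exit 1
    fi
```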
-
I built a complete CI/CD pipeline from scratch on my VPS using GitHub Actions and Docker.

I was initially SSH-ing into my server manually to pull updates, but the setup has now evolved into a fully automated deployment system. Every push to GitHub triggers:
• Automatic deployment to a staging environment
• Docker-based rebuild and restart on my VPS
• Backup creation before every deployment, plus live testing
• A retention policy that keeps only the last 3 backups to manage storage (sketched below)
• Safe promotion from staging → production via Git branching

Architecture: GitHub → GitHub Actions → VPS (SSH) → Docker → Nginx Proxy Manager → Live Sites

Tech stack:
• GitHub Actions
• Docker & Docker Compose
• Ubuntu VPS
• Nginx Proxy Manager
• SSH key authentication

Key learning: building systems is not about tools — it's about designing reliable flows between them. This project has helped me grow from deploying apps to engineering deployment systems.

Next step: extending this to multiple projects and scaling automation further.

Below is a detailed architecture diagram that shows my implementation.

#DevOps #Docker #DeploymentSystems #WorkflowAutomation #SSH
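The backup-and-retention step could be a small script run on the VPS before each deploy. A minimal sketch, with placeholder paths and names (not the author's actual setup):

```bash
#!/usr/bin/env bash
# Illustrative pre-deploy backup with a keep-last-3 retention policy.
# /srv/app and /var/backups are placeholder paths.
set -euo pipefail

ts=$(date +%Y%m%d-%H%M%S)
tar -czf "/var/backups/app-$ts.tar.gz" -C /srv app

# Delete everything except the 3 newest backups
ls -1t /var/backups/app-*.tar.gz | tail -n +4 | xargs -r rm --
```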
-
This week I ran into a classic DevOps issue while working with Jenkins and Docker.

I had a Jenkins pipeline that builds and pushes a Docker image. It was working perfectly before — same code, same Dockerfile, same pipeline. Then suddenly the build failed with:

"npm ERR! Cannot find module 'promise-retry'"

At first, it didn't make sense. I hadn't changed anything in the code or the Dockerfile. After digging deeper, I realized the real issue. Even though my Dockerfile didn't change, this line was the culprit:

FROM node:22-alpine3.22

This is a mutable tag, which means:
• Docker pulled a newer version of the base image
• That image had updated npm behavior
• My step npm install -g npm@latest broke due to an incompatibility

💡 Key lesson: Docker builds are NOT deterministic unless you pin versions.

✅ The fix I applied (a digest-pinning sketch follows below):
• Removed npm install -g npm@latest
• Switched to a stable base image (node:20-alpine)
• (Optional) Pinned npm to a specific version

🚀 Takeaways:
• Avoid using latest (for Node, npm, or anything else)
• Always pin versions in production systems
• CI/CD failures are often caused by environment changes, not code changes
• Jenkins may expose issues that don't appear locally due to caching

This was a great reminder that in DevOps: 👉 "If it's not pinned, it's not predictable."

#DevOps #Docker #Jenkins #CI_CD #Learning #Debugging
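For full determinism, the strongest form of pinning is by digest rather than by tag, since even node:20-alpine can move as new patches land. A sketch (the sha256 value below is a placeholder, not a real digest):

```dockerfile
# Resolve the digest once (e.g. docker pull node:20-alpine, then read it
# from the pull output), and pin it so the base image can never change
# underneath you. The sha256 value here is a placeholder.
FROM node:20-alpine@sha256:0000000000000000000000000000000000000000000000000000000000000000

# Pin npm to an explicit version instead of installing npm@latest
RUN npm install -g npm@10.8.2
```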
-
🚀 Built an End-to-End DevSecOps Pipeline — with Real Debugging Lessons!

I recently implemented a DevSecOps CI/CD pipeline using GitHub Actions, integrating security checks across every stage of the workflow.

🔧 Pipeline capabilities:
✅ Code quality checks
✅ Secrets scanning
✅ Dependency vulnerability scanning
✅ Dockerfile validation
✅ Secure image build & push
✅ Container image scanning using Trivy

📊 But here's the real learning 👇 The pipeline didn't just "work" on the first try. I hit multiple failures while:
❌ Configuring the Trivy image scan (image-ref issues)
❌ Handling secrets inheritance across jobs
❌ Debugging failed workflow runs and YAML misconfigurations

🔍 I debugged iteratively:
• Fixed incorrect image references in the Trivy scan (see the sketch below)
• Corrected GitHub Actions syntax and job dependencies
• Ensured secrets flowed properly across jobs
• Validated pipeline execution step by step

💡 This reinforced a key DevOps principle: 👉 Pipelines are built through debugging, not just design.

📈 Outcome:
✔️ Fully working secure CI/CD pipeline
✔️ Integrated shift-left security
✔️ Improved troubleshooting & observability skills

🚀 Next steps:
• Add a Kubernetes deployment stage
• Integrate policy enforcement (OPA/Azure Policy)
• Automate remediation workflows

Would love to hear how others are implementing DevSecOps in their pipelines!

GitHub repo → https://lnkd.in/gMkzPevn

#DevSecOps #GitHubActions #Docker #Trivy #CICD #CloudSecurity #Automation #LearningByDoing #TrainWithShubham #90DaysofDevops
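For anyone hitting the same image-ref issue, here is one common shape of that Trivy step. The image name is a placeholder, and the key point is that image-ref must match exactly the tag the build step produced:

```yaml
# Illustrative Trivy scan step. ghcr.io/example/app is a placeholder;
# image-ref must match the tag pushed by the build job exactly.
- name: Scan image with Trivy
  uses: aquasecurity/trivy-action@0.24.0
  with:
    image-ref: ghcr.io/example/app:${{ github.sha }}
    format: table
    exit-code: '1'          # fail the job on findings
    severity: CRITICAL,HIGH
```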
-
Why does Jenkins still power pipelines at some of the world's largest engineering teams — 16 years after its release?

Every team hits the same wall:
— Manual deployments that don't scale
— Forgotten test runs before production pushes
— Staging environments drifting from prod

That's when CI/CD stops being a buzzword and becomes an operational necessity. Here's why Jenkins remains the go-to answer:
- Self-hosted & open-source — your infrastructure, your control
- Language-agnostic — Java, Python, Go, Bash? Jenkins doesn't care
- Controller-agent architecture (formerly master-agent) — scale builds across bare metal, Docker, or Kubernetes
- 1,800+ plugins — integrates with virtually any tool in your stack
- Zero per-minute build costs at high volume

For regulated industries, financial services, or security-conscious teams, Jenkins isn't just a preference — it's often the only choice that fits compliance requirements.

SaaS CI tools like GitHub Actions and GitLab CI are great. But when your code can't leave your network, when pipelines get complex, when you need complete control — Jenkins wins.

I just published a deep dive on what Jenkins really is, how its architecture works, and when to choose it over the modern SaaS alternatives. Read the full article here: https://lnkd.in/gGak92Ta

This is Part 1 of a 3-part series — next up: installing Jenkins on a Linux server the right way (secure, Nginx-proxied, never exposed to the public internet). Follow along if you're building or leveling up your DevOps stack.

#Jenkins #DevOps #CICD #Automation #SoftwareEngineering #CloudComputing #DevSecOps #OpenSource
-
I built a GitHub Action that reviews pull requests before a human has to.

In most CI/CD workflows, a significant amount of time is spent reviewing pull requests that contain avoidable issues — unclear descriptions, missing tests, leftover debug code, or even risky patterns.

To address this, I developed truepr, a lightweight GitHub Action that automatically analyzes pull requests and provides a structured quality assessment. It evaluates four key areas:
- The code diff (for security risks, bad practices, and missing tests)
- The pull request description (clarity, completeness, and intent)
- The linked issue (context, reproducibility, and quality)
- Contributor history (to provide additional context)

Based on this, it generates:
- A score from 0 to 100
- A grade (A to F)
- A clear recommendation (approve, review, request changes, or flag)

The goal is not to replace human review, but to reduce time spent on low-quality pull requests and help teams focus on meaningful feedback.

truepr runs entirely within GitHub Actions, requires no external services or API keys, and can be set up in minutes. It is particularly useful for teams and maintainers working with high pull request volumes, where early signal and consistency in review standards are critical.

I would welcome feedback from developers, maintainers, and DevOps professionals working in CI/CD environments.

Repository: https://lnkd.in/eWRdxEF7

I strongly believe in automation, and that even small, focused tools can significantly reduce friction and save valuable time.

#github #opensource #devops #cicd #softwareengineering
-
Just shared a new post on my blog. A practical look at how I design CI/CD pipelines with GitHub Actions — prioritizing clarity, fast feedback cycles, and maintainability over unnecessary complexity. These are patterns that have worked well for me in real projects, especially when scaling workflows and keeping deployments predictable. If you're refining your pipeline strategy, this might be worth a read :) https://lnkd.in/dKbd6zEa #DevOps #CICD #GitHubActions #SoftwareEngineering
-
🔧 Lab Title: 10 - Ingress - Connecting to Applications Outside the Cluster

🚀 Project steps PDF (easy-to-follow guide): https://lnkd.in/g486Ki7y
🔗 GitLab repo code: https://lnkd.in/gEuKsWJJ
🔗 DevSecOps portfolio: https://lnkd.in/g6AP-FNQ
💼 DevOps portfolio: https://lnkd.in/gT-YQE5U
🔗 Kubernetes portfolio: https://lnkd.in/gUqZrdYh
🔗 GitLab CI/CD portfolio: https://lnkd.in/g2jhKsts

Summary: Today I deployed the Kubernetes Dashboard using Ingress on Minikube. I enabled the Ingress controller, set up a custom domain (dashboard.com), and used minikube tunnel for external access 🌐 (an illustrative Ingress manifest follows below). I learned key concepts like Ingress routing, namespace management, and external service exposure for secure, efficient cluster access.

Tools used:
🖥️ Minikube: local Kubernetes environment
🛠️ kubectl: Kubernetes CLI for applying and checking resources
🔀 NGINX Ingress Controller: manages HTTP traffic routing
🗂️ Windows hosts file: mapped the custom domain to localhost
🌉 minikube tunnel: enables LoadBalancer & Ingress access locally

Skills gained:
✔️ Enabled & configured Ingress on Minikube
🌍 Created Ingress rules for domain-based routing
🔐 Managed local domain resolution & tunnels for access

Challenges faced:
⚠️ Editing the hosts file required admin rights
🔄 Keeping the Minikube tunnel active for access

Why it matters: this lab builds crucial skills in exposing Kubernetes services securely and efficiently — essential for real-world DevOps roles managing cloud-native apps. Mastery of Ingress and local tunneling streamlines access and monitoring of cluster workloads.

📌 #DevOps #Kubernetes #Ingress #Minikube #NGINX #CloudNative #LearningJourney

🚀 Stay tuned! Next: Project 12 - ConfigMap & Secret Volume Types 🔥
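For reference, an Ingress of the kind described could look roughly like this. It is a sketch, not the lab's exact manifest; the service name and port are assumptions that depend on how the dashboard was installed:

```yaml
# Illustrative Ingress for the setup described. The backend service
# name and port are assumptions, not taken from the lab itself.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  ingressClassName: nginx
  rules:
    - host: dashboard.com        # resolved locally via the hosts file
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 80
```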
-
Jenkins vs. ArgoCD: why the "push" model is a bottleneck.

In the early days of Kubernetes, we simply adapted our old Jenkins habits and treated the cluster as just another server to PUSH code to. But as the industry has matured, we've realized that for secure, scalable, self-healing infrastructure, the PULL model (GitOps) is the clear winner.

The traditional "push" model (Jenkins CI/CD):
🔹 How it works: Jenkins builds the image and then fires a kubectl apply command at the cluster.
🔹 The challenge: Jenkins needs administrative credentials for your cluster, and if someone manually changes a setting in K8s, Jenkins has no idea. This is "configuration drift."
🔹 Security: storing cluster keys in your CI tool is a major risk.

The modern "pull" model (ArgoCD + GitOps):
🔹 How it works: Jenkins's focus shifts entirely to CI (testing and building), and ArgoCD takes over CD. It constantly watches your Git repo and PULLS the configuration into the cluster. (A minimal Application manifest is sketched below.)
🔹 The benefit: if the cluster state differs from Git, ArgoCD automatically "heals" it. No cluster credentials leave the cluster, making it far more secure.
🔹 Visibility: you get a real-time visual map of your application's health directly in the ArgoCD UI.

Separating CI (Jenkins) from CD (ArgoCD) isn't just about using new tools; it's about moving to a declarative, auditable, and secure standard that many enterprise customers now demand.

#Jenkins #ArgoCD #GitOps #Kubernetes #DevOps #CloudNative #Automation #SoftwareEngineering
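A minimal sketch of the ArgoCD side, assuming a placeholder config repo; the automated sync policy with selfHeal is what reverts the configuration drift described above:

```yaml
# Minimal ArgoCD Application. Repo URL, path, and names are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/app-config.git
    targetRevision: main
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual changes (configuration drift)
```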
-
From GitHub → Jenkins → Docker → Kubernetes: a complete DevOps workflow.

Many people learn DevOps tools individually, but the real value comes from understanding how these tools work together in a real pipeline. Here's a simplified breakdown of the end-to-end CI/CD flow shown in the diagram.

CI pipeline (build & scan):
‣ Developer pushes code to GitHub
‣ Jenkins CI pulls the code and triggers the pipeline
‣ OWASP Dependency-Check scans for vulnerable libraries
‣ SonarQube performs code quality & security analysis
‣ Docker builds the image
‣ Trivy scans the image for vulnerabilities
‣ The image is pushed to the registry
(A Jenkinsfile sketch of this stage order follows below.)

CD pipeline (deploy):
‣ Jenkins CD updates the image version
‣ Changes are pushed back to GitHub
‣ ArgoCD pulls the latest changes
‣ Deploys the application to Kubernetes

Monitoring & alerts:
‣ Prometheus collects metrics
‣ Grafana visualizes dashboards
‣ Email notifications for pipeline status

This is what companies expect you to understand:
‣ CI (build + scan)
‣ CD (deploy + automate)
‣ Security (shift-left approach)
‣ Monitoring (production visibility)

#devops #cicd #Github #jenkins
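A declarative Jenkinsfile skeleton matching that CI stage order might look like the following. The registry name is a placeholder, and each command is a typical invocation for these tools, not the diagram's exact setup:

```groovy
// Skeleton only: registry.example.com/app is a placeholder, and each
// sh step is a typical CLI invocation for the tool named in the stage.
pipeline {
  agent any
  environment {
    IMAGE = "registry.example.com/app:${GIT_COMMIT}"
  }
  stages {
    stage('Checkout')         { steps { checkout scm } }
    stage('Dependency Check') { steps { sh 'dependency-check --project app --scan .' } }
    stage('SonarQube')        { steps { sh 'sonar-scanner -Dsonar.projectKey=app' } }
    stage('Build Image')      { steps { sh 'docker build -t $IMAGE .' } }
    stage('Trivy Scan')       { steps { sh 'trivy image --exit-code 1 --severity HIGH,CRITICAL $IMAGE' } }
    stage('Push')             { steps { sh 'docker push $IMAGE' } }
  }
}
```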