Fixing Deployment Drift with kubectl apply

You deleted the Deployment from Git. kubectl apply left it running anyway.

kubectl apply only acts on the resources you pass to it. It has no model of what used to live in your namespace. last-applied-configuration tracks the prior config of one object, not set membership. Server-side apply uses managed fields, which have the same scope. Git and cluster state drift. Silently.

Three ways to fix it:

1. kubectl apply --prune -l app=myapp -f ./manifests
Legacy allowlist mode, still alpha. You must apply all manifests at once; one missing label and you nuke prod.

2. KUBECTL_APPLYSET=true kubectl apply --prune --applyset=<name> -f ./manifests
ApplySet-based pruning. Still alpha.

3. GitOps with pruning on (Argo CD prune: true, Flux .spec.prune: true)
The controller owns the diff: removed from source equals removed from cluster. It's off by default. Turn it on. (A hedged config sketch follows below.)

A YAML manifest is desired state in theory. kubectl apply enforces it only for what's listed, not for what's missing. For true declarative ops, use a controller that reconciles the full set.

700+ DevOps Engineers read about outages in their inbox (instead of discovering them in Slack at 2 AM). 👉 https://lnkd.in/gvNrXSYK

#Kubernetes #DevOps #GitOps #SRE #PlatformEngineering
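For option 3, a minimal Argo CD Application sketch with pruning enabled; the repo URL, app name, and namespaces are hypothetical placeholders, not a definitive setup:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp                 # hypothetical name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy-repo   # hypothetical repo
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated:
      prune: true      # delete cluster resources that were removed from Git
      selfHeal: true   # also revert manual drift in the cluster

The Flux equivalent is prune: true in the spec of a Kustomization. In both cases, deleting the Deployment manifest from the repo now deletes the Deployment from the cluster on the next sync.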
More Relevant Posts
Hi everyone, I’m Victorraj! 👋 I’m excited to share a project I’ve been working on: a fully automated, production-ready DevOps CI/CD pipeline. My goal was to build a system that is not only scalable but also ensures 100% availability during deployments.

🛠️ The Technical Stack:
🔹 Backend: Node.js + Express (secured with Helmet.js and logged with Morgan)
🔹 Testing: Automated unit testing using Jest and Supertest
🔹 Infrastructure: Docker multi-stage builds for secure, lightweight production images
🔹 Orchestration: Docker Swarm, configured for zero-downtime rolling updates
🔹 Proxy: Nginx Alpine (reverse proxy with custom security headers)
🔹 CI/CD: GitHub Actions for a seamless "push-to-deploy" experience

🔄 The Automation Workflow:
1️⃣ Continuous Integration: Every push to main triggers a GitHub Action that runs the Jest test suite to ensure code quality.
2️⃣ Containerization: On success, a production image is built and pushed to Docker Hub.
3️⃣ Continuous Deployment: The pipeline connects to the server via SSH, pulls the latest image, and triggers docker stack deploy.
4️⃣ Zero Downtime: Using Docker Swarm’s start-first update order, the new version is launched and verified before the old one is retired — zero lag for the user! (A config sketch follows below.)

Building this helped me master the intricacies of automated infrastructure and high-availability architecture. I believe that a great developer doesn't just write code — they ensure it reaches the user reliably.

📂 Check out the code here: [Insert Your GitHub Link Here]

I’d love to connect with fellow DevOps enthusiasts and engineers! What are your favorite tools for managing production pipelines?

#DevOps #Victorraj #CICD #Docker #NodeJS #GithubActions #SoftwareEngineering #Automation #CloudComputing #Nginx
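Not the project's actual config (that's in the linked repo), but a minimal Docker Swarm stack sketch of the start-first pattern described above; the image name, port, and health endpoint are assumptions:

version: "3.8"
services:
  web:
    image: victorraj/devops-app:latest   # hypothetical image name
    ports:
      - "3000:3000"
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/health"]  # hypothetical endpoint
      interval: 10s
      timeout: 3s
      retries: 3
    deploy:
      replicas: 2
      update_config:
        order: start-first      # start the new task before stopping the old one
        parallelism: 1
        failure_action: rollback

Deployed with docker stack deploy -c docker-compose.yml app, Swarm brings the new task up and, once it passes its health check, retires the old one.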
🗓️ Day 29/100 — 100 Days of AWS & DevOps Challenge

Today's task wasn't just Git — it was the full engineering-team workflow that makes collaborative development actually safe.

The requirement: don't let anyone push directly to master. All changes must go through a Pull Request, get reviewed, and be approved before merging. This is branch protection in practice.

Here's the full cycle:

Step 1 — Developer pushes to a feature branch (already done)
$ git log --format="%h | %an | %s"
# Confirms the commit: hash, author, message

Step 2 — Create the PR (log into Gitea)
- Source: story/fox-and-grapes
- Target: master
- Title: Added fox-and-grapes story
- Assign a user as reviewer

Step 3 — Review and merge (log into Gitea as the reviewer)
- Files Changed tab — read the actual diff
- Approve the PR
- Merge into master

Master now has the user's story. And there's a full audit trail of who proposed it, who reviewed it, who approved it, and when it merged.

Why this matters beyond the task:
A Pull Request is not a Git feature — it's a platform feature. Git only knows commits and branches. The PR is a GitHub/GitLab/Gitea construct that adds review, discussion, approval tracking, and CI/CD status checks on top of a branch merge. When companies say "we require code review before anything goes to production," this is the mechanism. When GitHub Actions or GitLab CI runs tests on every PR — this is where that hooks in. When a security audit asks "who approved this change?" — the PR has the answer.

The workflow is identical across GitHub, GitLab, Bitbucket, and Gitea:
push branch → open PR → assign reviewer → review diff → approve → merge → master updated → branch deleted
(A CLI sketch of the same cycle follows below.)

Full PR workflow breakdown on GitHub 👇
https://lnkd.in/gpi8_kAF

#DevOps #Git #PullRequest #CodeReview #Gitea #BranchProtection #100DaysOfDevOps #KodeKloud #LearningInPublic #CloudEngineering #GitOps #TeamCollaboration
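The task uses the Gitea web UI, but the same cycle can be scripted. A sketch with the GitHub CLI, assuming a GitHub-hosted repo; the reviewer name and PR number are hypothetical:

$ git push origin story/fox-and-grapes
$ gh pr create --base master --head story/fox-and-grapes \
    --title "Added fox-and-grapes story" --reviewer max   # reviewer name hypothetical
# Then, authenticated as the reviewer:
$ gh pr review 42 --approve              # 42 = the PR number, hypothetical
$ gh pr merge 42 --merge --delete-branch # merge into master, clean up the branch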
🔧 **Debugging CI Pipelines: The Missing Guide for Contributors**

Most tutorials show you how to *set up* CI pipelines — but few teach you how to *debug* them when they break across multiple layers. If you've ever contributed to open source and hit a mysterious CI failure, you know the pain.

Here’s what experienced engineers do when pipelines go wrong:

🔍 **Start from the bottom** – Check the runner logs first, not the application logs. Often the issue is environment-related: missing dependencies, wrong OS version, or cache corruption.

🔄 **Reproduce locally** – Use the same Docker image or environment spec as the CI runner. Tools like `act` (for GitHub Actions) or `gitlab-runner exec` can simulate the pipeline locally.

📦 **Check layer boundaries** – Failures often happen at integration points: between build and test stages, or when artifacts are passed between jobs. Verify file paths, environment variables, and permissions.

🛑 **Pin versions** – A common hidden culprit: a dependency updated between runs. Lock your package-manager versions and use exact image tags.

💡 **Pro tip**: Temporarily add debug steps to your pipeline (like `ls -la`, `env`, or `pwd`). Remove them once fixed. (A sketch follows below.)

Debugging CI is a skill that separates good contributors from great ones. Next time your PR fails, don't just retry — investigate.

#ContinuousIntegration #DevOps #OpenSource #SoftwareEngineering #Debugging
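A minimal sketch of that pro tip as a temporary GitHub Actions step; drop it into the failing job while debugging, then delete it:

      - name: Debug runner environment (temporary)
        run: |
          pwd            # where am I on the runner?
          ls -la         # what files actually made it into the workspace?
          env | sort     # what variables is the job really running with?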
If you've ever had to stare at a GitLab CI spinner for 30 minutes just for a typo fix, you know the pain.

I got fed up with a bloated frontend deployment pipeline choking our productivity. It relied on heavy Webpack builds and fragile background processes. So we tore it down and rebuilt it using Node 24, Vite, artifact-based deployments, and PM2.

The damage?
- Build times dropped from 30 minutes to 2 minutes
- 95 hours of CI runner time saved every single month
- Zero manual port cleanup required

Just because a script works doesn't mean you shouldn't rethink it.

I put together a quick write-up of the engineering decisions we made to make this happen, along with the YAML configs (a simplified sketch below). Check out the full article here: https://lnkd.in/daWe8hQY

Warning: the title could be clickbaity, but it's mathematically true.

#DevOps #PlatformEngineering #TechDebt #Vite #GitLab
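The actual configs are in the article; this is only a minimal .gitlab-ci.yml sketch of the artifact-based pattern described, with hypothetical host, path, and process names:

stages:
  - build
  - deploy

build_frontend:
  stage: build
  image: node:24-alpine
  script:
    - npm ci
    - npm run build                  # Vite build, outputs to dist/
  artifacts:
    paths:
      - dist/
    expire_in: 1 week

deploy_frontend:
  stage: deploy
  script:
    - rsync -a dist/ deploy@web01:/var/www/app/   # host and path are hypothetical
    - ssh deploy@web01 'pm2 reload app'           # PM2 process name is hypothetical
  environment: production

The key idea: the runner builds once, stores dist/ as an artifact, and the deploy job only ships files and reloads PM2 instead of rebuilding on the server.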
GitOps: Why I Stopped Running kubectl Manually

A while back I made a rule for myself: no more manual kubectl apply in production. Ever.

It felt uncomfortable at first. Like giving up control. But the reality is — it was the opposite.

Once we moved to a full GitOps workflow with ArgoCD, every change became:
— Versioned in Git
— Reviewed via pull request
— Automatically synced to the cluster
— Fully auditable

Rollbacks went from a 30-minute fire drill to a simple git revert (sketch below). Deployment confidence went through the roof. And the best part? Teams that previously depended on the "infra guy" could now self-serve their own deployments safely.

GitOps is not just a deployment strategy. It's a cultural shift — from "who did what and when" to "the repo is the single source of truth."

If you're still doing manual deployments, try this: pick one non-critical service and move it to GitOps. See how it feels. You probably won't go back.

#GitOps #ArgoCD #Kubernetes #DevOps #ContinuousDelivery #SRE
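What that rollback looks like in practice; a sketch assuming Argo CD with automated sync, where the commit SHA and app name are hypothetical:

$ git revert abc1234          # undo the bad change as a new commit
$ git push origin main
# Argo CD notices the new commit and reconciles the cluster back on its own.
# To trigger the sync immediately instead of waiting for the poll interval:
$ argocd app sync myapp       # app name hypothetical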
Everyone uses Git. Few people understand it. Interviewers can tell the difference.

📅 Day 9/30 — Git Internals & Branching Strategies

🔍 Git Object Model (this is what's happening under the hood)
blob → stores file content
tree → directory snapshot (contains blobs + other trees)
commit → points to a tree + parent commit + author + message
tag → points to a commit with a name
Everything is content-addressable: the SHA-1 hash of the content is the object's name.

📍 Git Refs
HEAD → where you are right now (usually a branch name)
Branch → just a pointer to a commit; it moves forward with each commit
Tag → a fixed pointer to a commit (annotated tags store extra metadata)
ORIG_HEAD → the previous HEAD before a merge/rebase (use it to undo)

🌿 Branching Strategies
GitFlow (mature products, scheduled releases):
main → always production-ready
develop → integration branch
feature/* → branch off develop
release/* → stabilization before merging to main
hotfix/* → emergency fixes directly off main

Trunk-Based Development (CI/CD-first, recommended for SRE teams):
Everyone commits to main (trunk) daily
Feature flags hide incomplete work
Short-lived branches (max 1-2 days)

🔀 Merge vs Rebase vs Squash
Merge → preserves history, creates a merge commit (safe, clear history)
Rebase → rewrites commits onto the target branch (clean linear history, risky on shared branches)
Squash merge → collapses all PR commits into one (clean main branch, loses granular history)
Golden rule: never rebase a shared/public branch.

🛠️ SRE Git power tools
git reflog → every change to HEAD; your recovery lifeline
git bisect → binary search to find which commit introduced a bug
git cherry-pick → apply a specific commit to another branch (the hotfix pattern)

🎯 Interview question to practice:
"You ran git reset --hard and lost your work. How do you recover?"
→ git reflog → find the commit SHA before the reset → git checkout -b recovery <sha>
(A worked sketch follows below.)

Day 10 tomorrow: Terraform Fundamentals

#Git #DevOps #SRE #30DayDevOps #GitFlow #TrunkBased
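You can see the object model and walk the recovery flow directly with plumbing commands; a sketch, where the SHA is a hypothetical placeholder:

$ git cat-file -p HEAD          # the commit object: tree SHA, parent, author, message
$ git cat-file -p HEAD^{tree}   # the tree object: the blobs and subtrees it points to
$ git cat-file -t abc1234       # ask any object what it is: blob, tree, commit, or tag

# The reset --hard recovery flow:
$ git reflog                    # every value HEAD has held; find the SHA before the reset
$ git checkout -b recovery abc1234   # new branch at the "lost" commit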
GitHub Actions vs Jenkins — if you can't explain the difference, DevOps interviews will be tough.

Picture this: you push your code and want it to build, test, and deploy automatically. That's CI/CD. And for years, Jenkins was the tool everyone reached for. Install it. Configure pipelines. Manage plugins. Done — your automation runs.

But here's the catch nobody tells you upfront: maintaining Jenkins becomes a full-time job on its own.
→ Server setup and hosting
→ Plugin updates and conflicts
→ Manual scaling
→ Constant maintenance overhead

Then came GitHub Actions. No server. No setup. No headache. You write a YAML file inside your repo — and your pipeline is live. Push code → workflow triggers → build, test, deploy. Done. 🚀 (A minimal sketch follows below.)

But is Jenkins dead? Not even close.

Jenkins wins when:
→ Full control is needed
→ Complex custom pipelines
→ On-premise environments

GitHub Actions wins when:
→ Speed and simplicity matter
→ Cloud-native workflows
→ Tight GitHub integration

So when someone asks "GitHub Actions vs Jenkins?" in an interview — the right answer isn't a winner. It's about use case, team size, and scale. Knowing both — and when to use which — is what separates a good DevOps engineer from a great one.

As I transition into AI/ML, understanding CI/CD deeply is becoming even more critical — because ML pipelines need the same rigour.

Which do you use at work — Jenkins or GitHub Actions? Drop it below 👇

#DevOps #GitHubActions #Jenkins #CICD #CloudEngineering #Automation #MLOps #BuildInPublic #DevOpsInterview
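"A YAML file inside your repo" really is the whole setup. A minimal sketch, assuming a Node project; saved as .github/workflows/ci.yml:

name: ci
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4    # pull the repo onto the runner
      - uses: actions/setup-node@v4  # Node version below is an assumption
        with:
          node-version: 20
      - run: npm ci                  # install pinned dependencies
      - run: npm test                # fail the workflow if tests fail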
Collabora’s contributions to Git 2.54 (and the upcoming 2.55) bring config-based hooks with better visibility, opt-in parallel hook execution, and safer submodule handling through path-collision fixes. Check out the new features: https://col.la/git254 #Git #OpenSource #DevTools #DevOps
Why does Jenkins still power pipelines at some of the world's largest engineering teams — 16 years after its release?

Every team hits the same wall:
— Manual deployments that don't scale
— Forgotten test runs before production pushes
— Staging environments drifting from prod

That's when CI/CD stops being a buzzword and becomes an operational necessity.

Here's why Jenkins remains the go-to answer:
- Self-hosted & open-source — your infrastructure, your control
- Language-agnostic — Java, Python, Go, Bash? Jenkins doesn't care
- Master-agent architecture — scale builds across bare metal, Docker, or Kubernetes
- 1,800+ plugins — integrates with virtually any tool in your stack
- Zero per-minute build costs at high volume

For regulated industries, financial services, or security-conscious teams, Jenkins isn't just a preference — it's often the only choice that fits compliance requirements.

SaaS CI tools like GitHub Actions or GitLab CI are great. But when your code can't leave your network, when pipelines get complex, when you need complete control — Jenkins wins.

I just published a deep-dive on what Jenkins really is, how its architecture works, and when to choose it over the modern SaaS alternatives. Read the full article here: https://lnkd.in/gGak92Ta

This is Part 1 of a 3-part series — next up: installing Jenkins on a Linux server the right way (secure, Nginx-proxied, never exposed to the public internet). Follow along if you're building or leveling up your DevOps stack.

#Jenkins #DevOps #CICD #Automation #SoftwareEngineering #CloudComputing #DevSecOps #OpenSource
Most CI/CD pipelines fail for the same reason — no clear stages.

After 4 years in DevOps, here's the multi-stage GitHub Actions pipeline I recommend to every engineer on my team:

━━━━━━━━━━━━━━━━━━━
Stage 1 → Test
Stage 2 → Build & tag Docker image
Stage 3 → Deploy to Staging
Stage 4 → Deploy to Production (with manual approval)
━━━━━━━━━━━━━━━━━━━

3 things that make this bulletproof:
1️⃣ Use needs: to chain jobs — if tests fail, nothing else runs
2️⃣ Tag images with github.sha — every build is fully traceable
3️⃣ Use GitHub Environments for prod — enforces human approval before anything goes live

You don't need a complex tool to do this. A single YAML file in .github/workflows/ is enough to build a production-grade pipeline. (A hedged sketch follows below.)

Save this post for when you set yours up.

What does your CI/CD stack look like? Drop it in the comments 👇

#DevOps #GitHubActions #CICD #Docker #Kubernetes #CloudNative #DevOpsEngineer #SoftwareEngineering
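A minimal sketch of those four stages in one workflow file; the image name, registry auth, and deploy commands are placeholders, not a definitive pipeline:

name: pipeline
on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test           # stage 1: nothing else runs if this fails

  build:
    needs: test                           # chained with needs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # registry login omitted; assumes credentials are configured separately
      - run: |
          docker build -t myorg/myapp:${{ github.sha }} .  # image name hypothetical
          docker push myorg/myapp:${{ github.sha }}        # traceable by commit SHA

  deploy-staging:
    needs: build
    runs-on: ubuntu-latest
    environment: staging
    steps:
      - run: echo "deploy myorg/myapp:${{ github.sha }} to staging"     # placeholder

  deploy-production:
    needs: deploy-staging
    runs-on: ubuntu-latest
    environment: production   # required reviewers on this environment gate the job
    steps:
      - run: echo "deploy myorg/myapp:${{ github.sha }} to production"  # placeholder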