Fixing Deployment Drift with kubectl apply

You deleted the Deployment from Git. kubectl apply left it running anyway.

kubectl apply only acts on resources passed to it. It has no model of what used to live in your namespace. last-applied-configuration tracks the prior config of one object, not set membership. Server-side apply uses managed fields: same scope.

Git and cluster state drift. Silently.

Three ways to fix it:

1. kubectl apply --prune -l app=myapp -f ./manifests
Legacy allowlist mode, still alpha. You must apply all manifests at once. One missing label and you nuke prod.

2. KUBECTL_APPLYSET=true kubectl apply --prune --applyset=<name>
ApplySet-based pruning. Still alpha.

3. GitOps with pruning on (Argo CD prune: true, Flux .spec.prune: true)
The controller owns the diff. Removed from source equals removed from cluster. Off by default. Turn it on.

A YAML manifest is desired state in theory. kubectl apply enforces it only for what's listed, not for what's missing. For true declarative ops, use a controller that reconciles the full set.

700+ DevOps Engineers read about outages in their inbox (instead of discovering them in Slack at 2 AM). 👉 https://lnkd.in/gvNrXSYK

#Kubernetes #DevOps #GitOps #SRE #PlatformEngineering
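Option 3 in practice is a one-field change in each tool. A minimal sketch, assuming a hypothetical app called myapp and a placeholder repo URL (everything except the prune fields is illustrative):

```yaml
# Argo CD: pruning is opt-in under the automated sync policy.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp                 # hypothetical app name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/manifests   # placeholder repo
    path: apps/myapp
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated:
      prune: true             # delete cluster resources removed from Git
---
# Flux: pruning is opt-in on the Kustomization.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: myapp                 # hypothetical app name
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: myapp-repo          # placeholder source name
  path: ./manifests
  prune: true                 # garbage-collect objects removed from source
```

With either field set, deleting a manifest from the repo deletes the object from the cluster on the next sync. Without it, you're back to silent drift.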


