I deleted a resource from my cluster and Flux put it right back. That was the moment GitOps actually clicked for me.

Here is what changed in how I think about infrastructure:

Before GitOps, everything was manual. I applied manifests one by one with kubectl, tweaked things directly in the cluster, and had no reliable record of what was actually running or why.

After GitOps, my Git repo is my cluster. Flux runs a constant reconciliation loop, checks what is in Git, and makes sure the cluster matches it exactly. Always.

The implications of that are huge.
✅ Delete something by accident? Flux restores it.
✅ Merge a bad change? git revert is your rollback.
✅ Want to know what changed and when? Check the Git log.
✅ Switch to a new cluster? Point Flux at the same repo and it rebuilds everything.

The config lives in Git, not in the cluster. That distinction sounds small. It is not.

Have you made the shift to GitOps yet? What finally made it click for you? 👇

Follow me if you are building toward a DevOps career the practical way.

#GitOps #Kubernetes #DevOps #FluxCD #CloudNative
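The reconciliation loop described above is typically driven by two small Flux objects: a GitRepository source and a Kustomization that applies a path from it. A minimal sketch, where the repo URL and path are hypothetical placeholders:

```yaml
# Minimal Flux sketch — repo URL and path are hypothetical, not from the post.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: cluster-config
  namespace: flux-system
spec:
  interval: 1m                                     # poll Git every minute
  url: https://github.com/example/cluster-config   # hypothetical repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: cluster-config
  namespace: flux-system
spec:
  interval: 10m             # re-apply even if Git hasn't changed
  sourceRef:
    kind: GitRepository
    name: cluster-config
  path: ./clusters/prod     # hypothetical path inside the repo
  prune: true               # delete cluster objects that were removed from Git
```

With prune enabled, a resource deleted from the cluster by hand gets restored on the next reconciliation, which is the behavior the post opens with.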
GitOps Clicked for Me with Flux and Kubernetes
More Relevant Posts
🗓️ Day 27/100 — 100 Days of AWS & DevOps Challenge

Today's task: a bad commit was pushed to a shared repository. Undo it cleanly.

The instinct for many engineers, especially under pressure, is to reach for git reset --hard. That's the wrong tool the moment a commit has been pushed to a shared branch. Here's why.

git reset rewinds the branch pointer, effectively deleting commits from history. Locally, that looks clean. But the remote still has those commits, so your local master and origin/master have diverged. Git rejects your push. You force-push. And now every team member whose local clone was based on those commits has a broken repository.

git revert solves this correctly:

$ git revert --no-commit HEAD
$ git commit -m "revert games"
$ git push origin master

Instead of deleting the bad commit, it creates a new commit that contains the exact inverse of the bad commit's changes. The bad commit stays in history; it didn't disappear. But HEAD now points to a commit that cancels it out, and the working tree is back to the state before the bad commit was applied.

No history rewriting. No force push. No broken clones. Just an auditable record that says "we made a mistake, here's the correction, and when."

The --no-commit flag matters here because the task required a specific commit message: "revert games". Without it, Git auto-generates a message like Revert "some commit message". Using --no-commit stages the changes without committing, letting us then run git commit -m "revert games" with full control over the message.

This exact workflow is what you'd run during a production rollback, and it's why every team's runbook should say git revert, not git reset.

Full breakdown on GitHub 👇
https://lnkd.in/gVY8q4u4

#DevOps #Git #VersionControl #GitOps #100DaysOfDevOps #KodeKloud #LearningInPublic #CloudEngineering #SRE #Rollback #Infrastructure
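The whole flow can be demonstrated end to end in a throwaway repository. This sketch uses a hypothetical file name (app.conf) and the commit messages from the post; after the revert, the file contents are back to the pre-bad-commit state while all three commits remain in history:

```shell
# Throwaway demo of the revert workflow — file name and contents are hypothetical.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "good" > app.conf
git add app.conf
git commit -qm "add good config"

echo "bad" > app.conf
git commit -qam "bad commit"

# Undo it without rewriting history: stage the inverse, then commit with our own message
git revert --no-commit HEAD
git commit -qm "revert games"

cat app.conf   # prints "good" — the tree matches the pre-bad-commit state
```

Note that git log still shows all three commits, which is exactly the audit trail the post argues for.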
How many commits have you made just to test if something works in the real environment?

Push. Wait for the pipeline. It fails. Fix a config. Push again. Wait again.

This is what happens when local dev looks nothing like production. Every fix is a commit, every commit is a 10-minute wait, and none of it is feature work.

So I built a local dev platform where developers build and test on a real Kubernetes cluster that mirrors production. Same Dockerfile, same manifests, same ingress.

- tilt up — see changes in 1 second instead of pushing and waiting
- make ci-local — run the GitLab pipeline locally to catch failures before you push
- Push once and it works, not 15 "fix CI" commits

I wrote up how I built this: https://lnkd.in/dAQejEUU

#Kubernetes #PlatformEngineering #DevOps #Tilt #GitLab
🚀 If you're new to DevOps pipelines, here's the simplest way to understand how these 3 tools work together:

🔧 𝐒𝐭𝐞𝐩 1 — 𝐉𝐞𝐧𝐤𝐢𝐧𝐬 (𝐂𝐈)
Jenkins watches your Git repo. The moment you push code, it:
→ Pulls the latest changes
→ Runs unit tests & security scans
→ Triggers the next stage automatically
No manual clicks. No missed builds.

🐳 𝐒𝐭𝐞𝐩 2 — 𝐃𝐨𝐜𝐤𝐞𝐫 (𝐁𝐮𝐢𝐥𝐝)
Jenkins calls Docker to:
→ Build a container image from your Dockerfile
→ Tag it with a version (e.g. app:v1.0.3)
→ Push it to a container registry (AWS ECR, DockerHub)
Your app is now portable. Runs the same everywhere.

⎈ 𝐒𝐭𝐞𝐩 3 — 𝐇𝐞𝐥𝐦 (𝐂𝐃)
Helm takes that image and deploys it to Kubernetes:
→ Uses templated charts (no copy-pasting YAML!)
→ Tracks release versions
→ Rollback in one command if something breaks

Together they form a complete pipeline:
𝑪𝒐𝒅𝒆 → 𝑻𝒆𝒔𝒕 → 𝑩𝒖𝒊𝒍𝒅 → 𝑷𝒂𝒄𝒌𝒂𝒈𝒆 → 𝑫𝒆𝒑𝒍𝒐𝒚

This is the foundation of every modern DevOps workflow — whether you're at a startup or a bank.

#Jenkins #Docker #Helm #Kubernetes #CICD #DevOps #CloudNative #DevSecOps #PipelineAutomation #SoftwareEngineering
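The three steps above map naturally onto a declarative Jenkinsfile. A minimal sketch, where the registry, image name, chart path, and make targets are all hypothetical placeholders:

```groovy
// Sketch only — registry, image name, chart path, and make targets are hypothetical.
pipeline {
    agent any
    environment {
        // Tag with a short commit hash so every build is traceable
        IMAGE = "registry.example.com/app:${env.GIT_COMMIT.take(7)}"
    }
    stages {
        stage('Test') {                      // Step 1: CI
            steps { sh 'make test' }         // unit tests + security scans
        }
        stage('Build & Push') {              // Step 2: Docker
            steps {
                sh 'docker build -t $IMAGE .'
                sh 'docker push $IMAGE'
            }
        }
        stage('Deploy') {                    // Step 3: Helm
            steps {
                sh 'helm upgrade --install app ./chart --set image=$IMAGE'
            }
        }
    }
}
```

The stage ordering gives you the Code → Test → Build → Package → Deploy flow for free: a failed test stage stops the pipeline before any image is built or shipped.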
Your Kubernetes cluster is lying to you. And you won't find out until prod breaks.

Here's a problem most platform engineers don't talk about enough: config drift across environments.

Everything looks identical — dev, staging, prod. Same Helm charts. Same GitOps repo. Same manifests. Then prod goes down. And you spend 3 hours figuring out why staging never caught it.

Here's what actually happened: someone patched a ConfigMap directly on the prod cluster with "kubectl edit" during last month's incident. Just a quick fix. "I'll raise a PR later." They didn't.

Now prod is running a config that exists nowhere in Git. Your GitOps tool (ArgoCD, Flux — doesn't matter) shows everything as Synced, because drift detection only works when the live state diverges from what's currently in Git. But the patch was never in Git to begin with.

This is the gap nobody warns you about:
- GitOps doesn't protect you from changes that never entered Git
- kubectl diff only compares against what's applied, not what should exist
- Multi-cluster setups multiply this problem — 5 clusters, 5 different "versions of truth"
- The longer it goes undetected, the bigger the blast radius when it surfaces

The fix isn't just "don't use kubectl edit" — that battle is already lost in most orgs. The real fix is drift detection as a first-class concern:
- Enable ArgoCD's self-heal and prune flags so live state is continuously reconciled
- Run kubectl diff in your CI pipeline before every deploy, not just locally
- Set up audit logging on your clusters — who ran kubectl commands, and when
- Tools like Kyverno or Datree can flag live-state mismatches proactively
- Treat your cluster state like a database — no manual writes, ever

The hardest part isn't the tooling. It's the culture shift of making "I'll fix it in Git later" completely unacceptable. Because in a fast-moving team, "later" is when prod burns.

Been burned by config drift before? Drop it in the comments.
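The "run kubectl diff in your CI pipeline" recommendation can be sketched as a pipeline job. kubectl diff exits 0 when live state matches the manifests and 1 when it differs, so drift fails the build. A hypothetical GitLab CI fragment (job name, stage, and manifest path are placeholders):

```yaml
# Sketch of a CI drift gate — job name, stage, and path are hypothetical.
drift-check:
  stage: verify
  script:
    # Exit code 0: cluster matches Git. Exit code 1: drift. >1: error.
    - kubectl diff -f manifests/ || { echo "Drift detected between cluster and Git"; exit 1; }
```

Running this before every deploy surfaces a rogue kubectl edit the next time anyone ships, instead of months later during an incident.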
#Kubernetes #DevOps #PlatformEngineering #GitOps #K8s #SRE #CloudNative
Most CI/CD pipelines fail for the same reason — no clear stages.

After 4 years in DevOps, here's the multi-stage GitHub Actions pipeline I recommend to every engineer on my team:

━━━━━━━━━━━━━━━━━━━
Stage 1 → Test
Stage 2 → Build & tag Docker image
Stage 3 → Deploy to Staging
Stage 4 → Deploy to Production (with manual approval)
━━━━━━━━━━━━━━━━━━━

3 things that make this bulletproof:
1️⃣ Use needs: to chain jobs — if tests fail, nothing else runs
2️⃣ Tag images with github.sha — every build is fully traceable
3️⃣ Use GitHub Environments for prod — enforces human approval before anything goes live

You don't need a complex tool to do this. A single YAML file in .github/workflows/ is enough to build a production-grade pipeline.

Save this post for when you set yours up.

What does your CI/CD stack look like? Drop it in the comments 👇

#DevOps #GitHubActions #CICD #Docker #Kubernetes #CloudNative #DevOpsEngineer #SoftwareEngineering
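The four stages and all three tips fit in one workflow file. A minimal sketch, where the step bodies (make test, the deploy commands) are hypothetical placeholders:

```yaml
# Sketch of the four-stage layout — step bodies are hypothetical placeholders.
name: pipeline
on: push
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test
  build:
    needs: test                        # tip 1: if tests fail, nothing else runs
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t app:${{ github.sha }} .   # tip 2: traceable tag
  deploy-staging:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploy to staging"      # placeholder deploy command
  deploy-prod:
    needs: deploy-staging
    environment: production                # tip 3: enforces manual approval
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploy to production"   # placeholder deploy command
```

The environment: production line is what gates Stage 4 behind a human, assuming a "production" environment with required reviewers has been configured in the repo settings.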
GitOps changed how I think about deployments. Here's the mental model:

Before GitOps:
❌ SSH into server → pull code → restart service → pray
❌ Jenkins pipeline pushes directly to cluster
❌ "Who deployed what?" — nobody knows

After GitOps:
✅ Git is the single source of truth
✅ ArgoCD watches the repo and syncs automatically
✅ Every deployment is a Git commit — auditable, reversible
✅ Multi-cluster? Just point ArgoCD at different directories

Key decisions I made:
1. Mono-repo for manifests (simpler than multi-repo for our scale)
2. ArgoCD for app deployments, FluxCD for infra components
3. Automated image tag updates via CI → Git commit → ArgoCD sync

If you're starting with GitOps, start with ArgoCD + a single cluster. Don't over-engineer day one.

Save this for later ♻️

#GitOps #ArgoCD #FluxCD #Kubernetes #DevOps #EKS #AWS #CICD #PlatformEngineering #Terraform #CloudEngineering #SRE #DevSecOps #BackstageIO #InfrastructureAsCode #GitHub #Docker #DevOpsCommunity #TechCareers #LearningInPublic #BuildInPublic
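The "watch a directory of a mono-repo and sync it" setup boils down to one ArgoCD Application object per app. A minimal sketch, with a hypothetical repo URL, path, and namespace:

```yaml
# ArgoCD Application sketch — repo URL, path, and namespace are hypothetical.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/manifests   # hypothetical mono-repo
    targetRevision: main
    path: apps/my-app        # one directory per app; multi-cluster = more paths
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true            # remove resources deleted from Git
      selfHeal: true         # revert manual changes made on the cluster
```

With selfHeal on, a deployment is always just a commit to apps/my-app, and a rollback is just a git revert of that commit.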
A lot of CI/CD pipelines still push deployments to Kubernetes. But GitOps flips the model.

Instead of your pipeline talking to the cluster, the cluster pulls from Git and keeps itself in sync. That's exactly what this FluxCD workflow shows 👇

🔹 You push code → CI builds → image goes to registry
🔹 Flux detects changes (Git + registry)
🔹 It updates manifests and reconciles the cluster automatically

No direct access from CI to production. No kubectl in pipelines.

Why this matters:
• Git becomes the single source of truth
• Every deployment is auditable and reversible
• Drift is automatically corrected
• Your cluster is always aligned with what's declared

This "pull-based" model is what makes GitOps powerful and, honestly, safer by design.

If your pipeline is still doing direct deploys to the cluster, this is a pattern worth rethinking.

#GitOps #FluxCD #Kubernetes #DevOps #PlatformEngineering
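The "Flux detects changes (Git + registry)" step is handled by Flux's image-automation controllers: an ImageRepository scans the registry and an ImagePolicy picks which tag the cluster should run. A minimal sketch, with a hypothetical registry and a simple semver policy:

```yaml
# Flux image-automation sketch — registry and version range are hypothetical.
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: app
  namespace: flux-system
spec:
  image: registry.example.com/app   # hypothetical registry
  interval: 1m                      # scan for new tags every minute
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: app
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: app
  policy:
    semver:
      range: ">=1.0.0"    # always select the newest semver tag
```

Paired with an ImageUpdateAutomation object, Flux commits the new tag back to Git and then reconciles it, so CI never needs cluster credentials at all.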
I've used GitHub Actions, GitLab CI, and Azure DevOps. Here's why I keep coming back to ArgoCD for production deployments — and when I don't.

The problem with push-based CI/CD: your pipeline pushes to the cluster. If something drifts — a manual kubectl apply, a failed rollback, a config change — your pipeline doesn't know. Your cluster is now lying to you.

ArgoCD flips this. The cluster pulls from Git. Git is truth. If your cluster doesn't match Git, ArgoCD self-heals. You get drift detection out of the box.

What I love about it in practice:
→ Rollbacks are just Git reverts — no pipeline magic needed
→ Every deploy is visible, auditable, and reproducible
→ Works beautifully with Helm + Kustomize
→ The UI is genuinely useful for on-call

When I DON'T use ArgoCD:
→ Stateful apps with complex DB migrations (timing matters)
→ Very small teams where GitOps overhead > benefit
→ Non-Kubernetes workloads (wrong tool for the job)

The honest take: ArgoCD isn't magic. It requires discipline in how you structure your repos. But once it clicks, going back to push-based deployments feels like deploying by FTP.

Have you made the GitOps shift? What broke when you did?

#ArgoCD #GitOps #Kubernetes #CICD #DevOps
GitOps: Why I Stopped Running kubectl Manually

A while back I made a rule for myself: no more manual kubectl apply in production. Ever.

It felt uncomfortable at first. Like giving up control. But the reality is — it was the opposite.

Once we moved to a full GitOps workflow with ArgoCD, every change became:
— Versioned in Git
— Reviewed via pull request
— Automatically synced to the cluster
— Fully auditable

Rollbacks went from a 30-minute fire drill to a simple git revert. Deployment confidence went through the roof.

And the best part? Teams that previously depended on the "infra guy" could now self-serve their own deployments safely.

GitOps is not just a deployment strategy. It's a cultural shift — from "who did what and when" to "the repo is the single source of truth."

If you're still doing manual deployments, try this: pick one non-critical service and move it to GitOps. See how it feels. You probably won't go back.

#GitOps #ArgoCD #Kubernetes #DevOps #ContinuousDelivery #SRE
🔄 Moving towards GitOps with FluxCD — my recent learning

Over the last few weeks, I’ve been exploring FluxCD and how it enables a true GitOps workflow for Kubernetes. One thing that stood out is how cleanly it shifts deployments from “push-based” CI/CD to a “pull-based” model — where the cluster continuously syncs with the desired state defined in Git.

🔹 What I found valuable:
✔ Declarative deployments using Git as the single source of truth
✔ Automatic reconciliation — the cluster self-heals if drift occurs
✔ Seamless integration with Helm and Kustomize
✔ Better auditability and version control of infrastructure changes

🔹 Simple workflow:
1. Define manifests (or Helm charts) in Git
2. FluxCD watches the repo
3. Changes are automatically applied to the cluster

This approach reduces manual intervention and improves reliability, especially in multi-environment setups.

Currently experimenting with:
* FluxCD + HelmRelease
* Multi-cluster GitOps setups
* Secrets management approaches

Would be great to hear from others:
👉 Are you using FluxCD or ArgoCD in production? What challenges have you faced?

#DevOps #Kubernetes #FluxCD #GitOps #Azure #CloudEngineering
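The FluxCD + HelmRelease combination mentioned above pairs a HelmRepository source with a HelmRelease that reconciles a chart from it. A minimal sketch, where the chart name, repo URL, and values are hypothetical:

```yaml
# HelmRelease sketch — chart name, repo URL, and values are hypothetical.
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: charts
  namespace: flux-system
spec:
  interval: 10m
  url: https://charts.example.com   # hypothetical Helm repo
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 5m
  chart:
    spec:
      chart: app
      version: "1.x"     # track new patch/minor releases of the chart
      sourceRef:
        kind: HelmRepository
        name: charts
  values:
    replicaCount: 2      # example values override, versioned in Git
```

Because the values block lives in Git, chart configuration gets the same PR review and audit trail as everything else, which is what makes this work well across multiple environments.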