When I started a new and exciting project at work, I suddenly found myself juggling multiple Kubernetes clusters. Some EKS, some shared, and many with the same generic context names. If you've been there, you know the pain.

Merging kubeconfigs should be simple. It's not. The built-in kubectl approach silently drops duplicate entries. No warning. No error. Your cluster access just… vanishes. I'd been bitten by this more than once.

So I looked at what was out there: kubecm, kubectx, konfig. Solid tools, but none of them let me rename clusters and contexts on import while also backing up my config automatically. I kept falling back to a fragile multi-step manual process. Eventually I thought: if this tool doesn't exist, I'll build it.

That's how konfuse was born: a single-binary open-source CLI tool (written in Go) that merges kubeconfig files with rename-on-import and automatic backup. One command, no runtime dependencies.

```shell
konfuse eks-staging.yaml --rename-context staging --rename-cluster eks-staging
```

It also lists your contexts at a glance and cleans up orphaned entries when you delete a context (things that take multiple kubectl commands otherwise). I built it to solve my own problem, then decided to make it something others could use too.

I wrote up the full story (the problem, what exists today, and how konfuse works) in a blog post. Links to the article and the GitHub repo are in the comments 👇

If you work with multiple Kubernetes clusters, give it a try and let me know what you think. Stars, feedback, and issues are all welcome!

#Kubernetes #DevOps #CLI #OpenSource #Go #KubeConfig
Managing Multiple Kubernetes Clusters with Konfuse
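For context, the fragile multi-step manual process that konfuse replaces looks roughly like this. This is a sketch: the file name and the old context name are examples, not values from the post.

```shell
# Manual kubeconfig merge, sketched; file and context names are examples.
cp ~/.kube/config ~/.kube/config.bak                  # 1. back up by hand, every time

# 2. Merge via the KUBECONFIG search path. Conflicts resolve silently:
#    for a duplicate name, the first file to set the key wins, so entries
#    from the second file can simply vanish.
KUBECONFIG=~/.kube/config:eks-staging.yaml \
  kubectl config view --flatten > /tmp/merged-config

mv /tmp/merged-config ~/.kube/config                  # 3. overwrite the live config

# 4. Rename only after merging ("default" is a placeholder context name)
kubectl config rename-context default staging
```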
More Relevant Posts
-
Day 44 of #100DaysOfDevOps

I deployed a Kubernetes pod, got the YAML "right" — and still failed the task.

We needed a pod that prints environment variables on startup — GREETING, COMPANY, and GROUP — using a bash image with a one-shot command and restartPolicy: Never. Simple enough. I wrote the manifest, triple-checked the env values, the command syntax, the restart policy. Looked clean.

It wasn't. The pod name was print-envars-greeting, and I'd used the same name for the container too, out of habit. The requirement said the container name should be print-env-container. One field, buried in spec, completely overlooked. Changed it, reapplied, done.

```yaml
spec:
  restartPolicy: Never
  containers:
    - name: print-env-container
      image: bash
      command: ["/bin/sh", "-c", 'echo "$GREETING $COMPANY $GROUP"']
      env:
        - name: GREETING
          value: "Welcome to"
        - name: COMPANY
          value: "xFusionCorp"
        - name: GROUP
          value: "Group"
```

Kubernetes doesn't care that your logic is right. It cares that your spec matches exactly. In real clusters, a wrong container name breaks log queries, metric scraping, and sidecar injection — silent failures that surface at the worst time.

Read the requirements like a contract. Every field is a clause.

#Kubernetes #DevOps #KodeKloud #CloudEngineering #LearningInPublic
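The "wrong container name breaks log queries" point is easy to see with kubectl's `-c` flag; a sketch:

```shell
# Per-container commands address the container by its declared name.
kubectl logs print-envars-greeting -c print-env-container     # matches the spec: works
kubectl logs print-envars-greeting -c print-envars-greeting   # wrong name: errors out,
                                                              # no such container in the pod
```

The same mismatch silently breaks anything else keyed on container name, such as Prometheus relabeling rules or sidecar injectors.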
To view or add a comment, sign in
-
⚛️ Helm is great. Until it isn't.

You start with 2 charts. Then 5. Then 15 microservices, 3 environments, 2 clusters, and a bash script held together with hope. That bash script IS your deployment system. And nobody wants to touch it.

I went deep on Helmfile — the declarative orchestration layer that sits above Helm and gives you what Helm was never designed to provide:

→ One `helmfile apply` to sync your entire platform
→ `helmfile diff` — see exactly what changes BEFORE it hits prod
→ `needs:` — dependency ordering with a DAG, not guesswork
→ Environment-aware values without duplicating configs
→ SOPS + Vault native secret management
→ Kustomize, raw YAML, hooks — all as Helm releases

The part that changed how I think about deployments: Helmfile uses a two-pass rendering engine.

⭐ Pass 1 resolves your environment values.
☀️ Pass 2 re-renders the entire state file with that context — which means your release names, value file paths, and chart versions can all be dynamically constructed per environment. Template your templates.

And `helmfile show-dag` will print your entire execution graph — which releases run in parallel, which wait for dependencies — before you run anything.

If you're managing Helm at scale, this is the missing control plane. Full technical breakdown in the blog: https://lnkd.in/gXcn4BVU

#Kubernetes #Helm #DevOps #GitOps #PlatformEngineering #SRE #CloudNative
To view or add a comment, sign in
-
Dear Kubernetes expert,

Yes, we all know Deployments manage ReplicaSets. But when was the last time you directly interacted with a ReplicaSet?

A ReplicaSet ensures a specified number of pod replicas are running at any given time. But here's the catch: most of us never create them manually, because Deployments abstract them away.

So why should you care? 👇

🔹 Debugging: Ever noticed orphaned ReplicaSets lingering after updates? Understanding their lifecycle is key.
🔹 Custom Controllers: Building advanced patterns like canary or blue/green deploys sometimes requires ReplicaSet-level control.
🔹 Rollbacks: They rely on ReplicaSets, especially when tracking Deployment history.
🔹 Fine-grained Management: Need precise control over selector behavior? ReplicaSets give you that flexibility; Deployments enforce immutability of selectors for stability.

Tip: Want to observe how a Deployment handles ReplicaSets? Run a rollout and watch the creation of new ReplicaSets while old ones get scaled down.

🔁 If you found this useful, repost to help others find it; sharing is caring.
👨💻 Tag someone learning anything and everything Cloud-Native, Kubernetes & MLOps.
💾 Save this post for future reference.

I post daily insights here, and break things down deeper in my weekly newsletter. Subscribe to stay updated.

#Kubernetes #DevOps #CloudNative #ReplicaSet #K8sTips #SRE #hellodeolu #learnin
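The tip above, as concrete commands. The deployment name, container name, and label are placeholders:

```shell
# Watch ReplicaSets while a Deployment rolls out (names are placeholders).
kubectl get rs -l app=my-app --watch &                # new RS scales up, old scales down

kubectl set image deployment/my-app app=my-app:v2     # trigger a rollout
kubectl rollout status deployment/my-app

kubectl get rs -l app=my-app                          # old RS kept at 0 replicas,
                                                      # retained for rollback history
kubectl rollout history deployment/my-app             # each revision maps to a RS
kubectl rollout undo deployment/my-app                # rollback re-scales the old RS
```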
-
The pipeline was green. Deployment said successful. Production was running the wrong code for 3 hours. No alerts. No red dashboards. Nothing.

I've been burned by silent CI/CD failures more times than I'd like to admit. The dangerous ones aren't the crashes — they're the failures that look like success. Here are the 3 that hurt the most:

1. Docker cached the wrong image
Build finished in 12 seconds. Felt fast. Turned out Docker served a previously cached layer. Yesterday's code went to production. The build log looked completely normal.

2. Tests reported zero failures — because they never ran
The test framework found no matching files, ran zero tests, exited with code 0. Green badge. A real bug reached production that the tests should have caught.

3. Deployment succeeded. Old code still running.
Kubernetes rollout reported complete. New image never actually pulled — the node had the old one cached with imagePullPolicy: IfNotPresent. "Deployment succeeded" and "new code is live" are not the same thing.

The root cause in every case was the same: the pipeline verified that steps executed — not that outcomes were correct.

The fixes aren't complex:
→ Embed the Git SHA in every image. Verify it post-deploy.
→ Fail the pipeline if zero tests ran.
→ Never use :latest in Kubernetes. Always deploy with the image SHA.

I wrote the full breakdown with code examples for GitHub Actions, Jenkins, and Kubernetes on Dev.to. Link in the comments 👇

Have you hit a silent pipeline failure? Drop it below — genuinely curious what broke.

#DevOps #CICD #Docker #Kubernetes #Jenkins #SRE #PlatformEngineering #Cloud
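The first fix can be sketched as a small post-deploy check. The `/version` endpoint and the example SHA values below are assumptions for illustration, not a real API from the post:

```shell
# Sketch: compare the SHA you built against the SHA the running service reports.
# The /version endpoint and the example SHAs are hypothetical.
verify_deploy() {
  expected="$1"
  deployed="$2"
  if [ "$deployed" != "$expected" ]; then
    echo "MISMATCH: expected $expected, got $deployed" >&2
    return 1          # fail the pipeline: deploy "succeeded" but wrong code is live
  fi
  echo "OK: running $deployed"
}

# In CI this might be wired up as:
#   verify_deploy "$(git rev-parse HEAD)" "$(curl -fsS https://app.example.com/version)"
verify_deploy abc1234 abc1234
```

The key design choice is that the check compares outcomes (what is actually running) rather than step exit codes, which is exactly the gap in all three failures above.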
-
Ever faced this in Kubernetes? 👇

Everything was working fine yesterday… Today, something feels off. No crashes. No alerts. But things are breaking.

👉 Requests failing
👉 Latency increasing
👉 Random issues showing up

And the worst part? No one knows what changed.

This is what I call ⚙️ Configuration Drift. Small changes like:
• Env variable updates
• ConfigMap tweaks
• Secret rotations
• Partial deployments

Individually harmless… but together → production issues.

💬 Curious: how do you debug this today? Because most teams:
→ Compare configs manually
→ Check logs (no clear answer)
→ Spend hours guessing

That's exactly why I built KubeGraf:
👉 Tracks every config & deployment change
👉 Correlates it with system issues
👉 Pinpoints what changed & why it broke
👉 Suggests safe rollback or fix

Instead of "what went wrong?" you get → "this change caused the issue."

💡 https://kubegraf.io

#Kubernetes #DevOps #CloudNative #K8s #SRE #Debugging #Observability #IncidentResponse #RootCauseAnalysis #Microservices #KubeGraf #DevTools
-
I have built an open-source tool, kblame for Kubernetes — it answers the common question "What changed?" Here is why.

From my experience and from researching the incident response space, I found that 80% of K8s outages trace to recent changes (Komodor, 2025). Yet when an alert fires, finding what changed means stitching together:
- kubectl events
- ArgoCD sync history
- Slack threads and guesswork

kblame gives you a unified change timeline across common Kubernetes resource types — in a single command, in seconds. This is the fastest way I've found to go from alert to initial understanding.

How this is different: tools like kubectl, Grafana, or timeline UIs show parts of the story. kblame reconstructs a cross-resource, ordered timeline of changes — the missing layer between raw events and incident understanding.

Design principles:
- single binary
- client-side only (no agents)
- works with existing kubeconfig

Where this is headed:
- V1 (current): unified change timeline across all resource types
- V2: what's failing — logs, confirmed alert correlation
- V3: how things connect — dependency-aware correlation, persistent history via an in-cluster controller
- V4: multi-cluster, web UI, learned patterns
- V5: ideas fermenting...

If you've dealt with incidents in Kubernetes, I'd value your feedback. Download the binary or clone from source — try it against your cluster.

GitHub: https://lnkd.in/gWaKuPgf

#kubernetes #devops #sre #opensource #platformengineering #EKS #GKE #AKS
-
-
Stop getting stuck with "stale" code in Kubernetes! 🐳⛴️

One of the most common "why isn't my code updating?" bugs in K8s comes down to a simple setting: imagePullPolicy: IfNotPresent.

If you're using mutable tags (like :latest or :dev), here's what happens:
- You push a new image to the registry.
- You restart your Pod.
- Kubernetes sees the tag already exists on the node.
- It skips the pull and runs your old code. 🤦‍♂️

Here is the quick fix guide:

✅ Use imagePullPolicy: Always for development. It doesn't actually download the whole image every time — it just checks the registry for a new digest. If nothing changed, it uses the cache.

✅ Use immutable digests in production. Instead of my-app:v1, use my-app@sha256:[hash]. This ensures every single node is running the exact same bits, regardless of the pull policy.

✅ Use versioned tags. Avoid :latest. Use unique tags like :v1.0.1 or the Git commit hash. When the tag changes, IfNotPresent works perfectly because the new tag won't be on the node yet.

Don't let a cached image trick you into thinking your bug fix didn't work!

#Kubernetes #DevOps #CloudNative #Docker #SoftwareEngineering #K8sTips
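The fixes above map to a few lines in a container spec; a sketch, with the image names as examples and the digest placeholder deliberately left unfilled:

```yaml
# Sketch of the pull-policy options above; image names are examples.
spec:
  containers:
    - name: my-app
      # Development: mutable tag, but Always re-checks the registry digest
      # on every start (cached layers are reused if nothing changed):
      #   image: registry.example.com/my-app:dev
      #   imagePullPolicy: Always
      #
      # Production: pin an immutable digest so every node runs identical
      # bits, regardless of pull policy:
      image: registry.example.com/my-app@sha256:[hash]   # insert the real digest
```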
-
-
I spent more time than I want to admit staring at Kubernetes YAML wondering how a config file gets inside a running container.

ConfigMap. Volume. VolumeMount. Three things. No one explains how they actually connect. Here is what clicked for me.

The ConfigMap is the letter. Your config data lives here.
The Volume is the envelope. It wraps the ConfigMap and gives it a name Kubernetes can carry.
The VolumeMount is the delivery address. It tells Kubernetes exactly where inside the container to place that file.

By the time your container starts, the file is already sitting there waiting. The container has no idea any of this happened. It just sees a file at a path and reads it.

The three things that need to match exactly:
1) The ConfigMap name must match what the Volume points to.
2) The Volume name must match what the VolumeMount references.
3) The file path in the VolumeMount must match what your app expects to find.

Get those three right and it just works every time.

What Kubernetes concept took you the longest to understand? 👇

Follow me: I am documenting everything I build and learn in my home lab.

#Kubernetes #DevOps #CloudNative #Homelab #DevOpsLearning
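The letter/envelope/address analogy, as a minimal manifest sketch. All names here are hypothetical; the numbered comments mark the three pairs that must match:

```yaml
# Sketch: the three names that must line up (all names are hypothetical).
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config              # (1) the "letter": referenced by the volume below
data:
  settings.yaml: |
    log_level: debug
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: bash
      command: ["/bin/sh", "-c", "cat /etc/app/settings.yaml"]
      volumeMounts:
        - name: config-vol      # (2) must match the volume name below
          mountPath: /etc/app   # (3) the "address": file lands at /etc/app/settings.yaml
  volumes:
    - name: config-vol          # (2) the "envelope": a name Kubernetes can carry
      configMap:
        name: app-config        # (1) must match the ConfigMap's metadata.name
```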
-
Thinking of switching from Docker to Podman? Here's what actually changes.

Podman is a daemonless, rootless container engine that mimics Docker's CLI while adding native pod support and easy Kubernetes YAML export. This post walks through architecture, security differences, command parity, and practical migration tips.

Key takeaways:
- 🔄 Daemonless model — no single privileged dockerd process to fail.
- 🔐 Rootless execution — containers can run without root, reducing host risk.
- 🧩 Native pods — bundle containers like Kubernetes and export manifests.
- ⚙️ CLI-compatible + systemd integration — drop-in replacement for many workflows.

Read it to decide if Podman fits your security and Kubernetes-first workflows.

Read more: https://lnkd.in/em8-hFqq

#Podman #Docker #Kubernetes #Containers #CloudNative
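The CLI parity and Kubernetes YAML export mentioned above look roughly like this; the container name and image are examples:

```shell
# Sketch: Docker-compatible CLI plus Kubernetes manifest export (names are examples).
alias docker=podman                     # many existing scripts keep working as-is

podman run -d --name web docker.io/library/nginx:alpine
podman generate kube web > web.yaml     # export the container as a Kubernetes manifest

podman rm -f web                        # remove the original...
podman play kube web.yaml               # ...and recreate it from that manifest
```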
Full article: https://medium.com/@chameerar/merging-kubeconfig-files-without-losing-your-sanity-5fb9a30ec938