It's that time again: another flawless Kubernetes update, done. The control plane and all worker nodes are Ready, the cluster is healthy, and the rollout to v1.35.4 went exactly as planned. Moments like this are a good reminder that solid infrastructure is built on consistency, preparation, and repeatable processes. Quiet upgrades are the best upgrades. #Kubernetes #DevOps #CloudEngineering #CloudNative #SRE #PlatformEngineering
Kubernetes Update to v1.35.4 Complete
I recently ran into a real-world Kubernetes scenario that highlights how small configuration changes can impact availability. After applying a taint to a node, one of my pods went down. At first glance, it seemed like an issue with the cluster, but the root cause was straightforward: the pod didn’t tolerate the taint, so it was evicted.

This raised an important question: how do you ensure zero downtime in such situations? Here’s the approach I follow in production:

- Use Deployments instead of single pods, with at least two replicas
- Configure rolling updates with maxUnavailable set to 0
- Add tolerations where workloads are expected to run on tainted nodes
- Apply PodDisruptionBudgets to maintain minimum availability
- Use readiness probes to ensure only healthy pods receive traffic

The key takeaway is simple: high availability is not something you enable later; it has to be designed in from the beginning. If your setup still relies on a single pod, downtime is just a matter of time.

#Kubernetes #DevOps #SRE #CloudNative #ZeroDowntime
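The checklist above can be sketched as a manifest pair. This is a minimal illustration, not a drop-in config: the app name, image, probe path, and the taint key (`dedicated=web:NoExecute`) are hypothetical placeholders you would adapt to your workload.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # hypothetical app name
spec:
  replicas: 2          # at least two replicas, never a single pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take a pod down before its replacement is Ready
      maxSurge: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      tolerations:
      - key: "dedicated"      # must match the taint applied to the node
        operator: "Equal"
        value: "web"
        effect: "NoExecute"
      containers:
      - name: web
        image: nginx:1.27     # illustrative image
        readinessProbe:       # only Ready pods receive Service traffic
          httpGet:
            path: /
            port: 80
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 1      # voluntary disruptions may never drop below one pod
  selector:
    matchLabels:
      app: web
```

With this in place, a `NoExecute` taint on one node evicts a pod only after the Deployment controller has a Ready replacement elsewhere, and the PDB protects against drains taking out both replicas at once.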
Stop overcomplicating Kubernetes cost optimization: right-sizing pods and autoscaling strategies. I've reviewed hundreds of implementations. The best ones? Dead simple.

The pattern:
- Start with the boring solution
- Measure actual bottlenecks
- Only then add complexity

Premature optimization is real, and it kills projects.

What's the simplest solution you've shipped that just worked?

#DevOps #CloudComputing #Kubernetes
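In practice, the "boring solution" for cost optimization usually starts with explicit resource requests plus a plain CPU-based HorizontalPodAutoscaler. A minimal sketch, with hypothetical names, sizes, and thresholds:

```yaml
# Right-size first: set requests from measured usage, not guesses.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # hypothetical workload
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.27
        resources:
          requests:           # what the scheduler reserves (and what you pay for)
            cpu: "250m"
            memory: "256Mi"
          limits:
            memory: "512Mi"
---
# Then autoscale on one boring signal: average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out above 70% of requested CPU
```

Only once this baseline is measured and found wanting does it make sense to reach for custom metrics, VPA, or spot-instance choreography.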
Monday Breaks ⚡️ (A small DevOps tip for the week)

Ingress had a long run. But Kubernetes networking is moving on.

For years, many teams treated Ingress as the default answer. Now the direction is getting clearer: Gateway API is becoming the new center of gravity for Kubernetes traffic management.

This shift is not just about replacing one resource with another. It is about better role separation, more flexible routing, and a cleaner future for platform teams.

🔥 Hot take: If your Kubernetes traffic strategy still ends at Ingress, you are already planning behind the curve.

Is your team still all-in on Ingress, or already thinking in Gateway API?

#Kubernetes #GatewayAPI #Ingress #DevOps #PlatformEngineering #CloudNative #SRE #K8s
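For anyone wanting to see the role separation in concrete terms: a minimal Gateway API setup pairs a Gateway (owned by the platform team) with an HTTPRoute (owned by the app team). All names below are hypothetical, and the `gatewayClassName` depends on whichever Gateway controller is installed in your cluster.

```yaml
# Platform team: defines where and how traffic enters the cluster.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
spec:
  gatewayClassName: example-gc   # supplied by your Gateway controller
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
# App team: attaches routing rules to the shared Gateway.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
  - name: shared-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /app
    backendRefs:
    - name: app-svc              # hypothetical backend Service
      port: 8080
```

Compared with annotation-heavy Ingress objects, the routing logic here is first-class API surface, which is a big part of why the ecosystem is converging on it.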
Watching Kubernetes manage workloads feels like observing a city that never sleeps — traffic reroutes itself, failures heal silently, and growth happens without disruption. It’s not just infrastructure; it’s a system that thinks ahead. #kubernetes #DevOps
🚀 Post 1: Kubernetes Journey

When you start learning Kubernetes, everything feels simple…
➡️ Run a container
➡️ Create a Pod
➡️ Deploy your first app

But give it some time… ⏳ You’ll soon be navigating:
🔹 Multi-cluster architectures
🔹 Service meshes
🔹 Auto-scaling & observability
🔹 CI/CD integrations
🔹 Security & networking complexities

From Single Pod ➝ Multi-Cluster Management, the journey is real 😄

💡 The key? Don’t rush. Master the basics, then scale your knowledge step by step.

#Kubernetes #DevOps #CloudComputing #LearningJourney #Containers #PlatformEngineering
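That first step, running a container in a Pod, really is this small. Names and image below are illustrative only:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello          # hypothetical name
spec:
  containers:
  - name: hello
    image: nginx:1.27  # any container image works here
    ports:
    - containerPort: 80
```

Everything else on the list, from Deployments to service meshes, is layered on top of this same basic unit.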
Kubernetes can feel complicated fast. This new video from Kat Zivkovic at Sysdig makes it easier to understand by breaking down the basics in under a minute. It covers: 🔹 What Kubernetes is 🔹 How it helps manage containers 🔹 Why concepts like clusters, nodes, pods, and the control plane matter A helpful watch for anyone learning the foundations of cloud-native infrastructure. Watch the video 🔗 https://okt.to/kWjRw4 #Kubernetes #CloudNative #Containers #DevOps #DevSecOps
What is Kubernetes?
☸️ Kubernetes v1.36.0 is out, and this is a major release with real changes you should pay attention to. This isn’t just “new features”: it’s also about what’s changing or going away.

⚠️ One important change: the long-deprecated gitRepo volume has finally been removed. 👉 If you still rely on it, workloads will break after upgrading, and you’ll need to migrate to alternatives like init containers or external sync tools.

✨ On the feature side:
• Mutating Admission Policies → now stable (less reliance on webhooks)
• User Namespaces → improved isolation for containers
• Dynamic Resource Allocation (DRA) → continues evolving for advanced workloads

These are the kinds of changes that impact:
• security posture
• workload isolation
• cluster extensibility

Kubernetes v1.36 is a good reminder: 👉 major releases are not just about what’s new; they’re about what might break and what needs migration.

At Relnx, we track these changes so you can quickly understand:
✅ breaking changes
✅ new capabilities
✅ upgrade impact

🔎 Full release breakdown: https://lnkd.in/g3PEecwm

For platform teams, what’s your first step when a new Kubernetes version drops?
👉 Check features
👉 Or check breaking changes first?

#Kubernetes #CloudNative #SRE #DevOps #PlatformEngineering #Relnx
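For the gitRepo migration specifically, the usual replacement pattern is an init container that clones into a shared emptyDir volume. A hedged sketch; the repository URL, images, and names are placeholders, not real endpoints:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: repo-consumer        # hypothetical name
spec:
  initContainers:
  - name: clone-repo
    image: alpine/git        # small image that ships the git CLI
    args: ["clone", "--depth=1", "https://example.com/your/repo.git", "/repo"]
    volumeMounts:
    - name: repo
      mountPath: /repo
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "ls /repo && sleep 3600"]  # placeholder workload
    volumeMounts:
    - name: repo
      mountPath: /repo       # sees the checkout left by the init container
  volumes:
  - name: repo
    emptyDir: {}             # shared scratch space, replaces the gitRepo volume
```

The init container runs to completion before the main container starts, so the checkout is guaranteed to exist, which is essentially what the old gitRepo volume provided.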
DevOps Concept of the Day: Kubernetes Basics

Kubernetes automates container deployment, scaling, and self-healing. Core objects: Pod, Deployment, Service, ConfigMap, Secret. kubectl is your command-line interface to the control plane. The standard for production container management.

Today's DevOps/MLOps update (Apache Airflow): Apache Airflow Helm Chart 1.21.0 Significant Changes: workers config options have been moved under workers.celery.* and workers.kubernetes.* Please… https://lnkd.in/dacxZCNt

Why it matters: K8s is the backbone of cloud-native infra. Understanding it is non-negotiable.

#Kubernetes #K8s #DevOps #CloudNative
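To make those core objects concrete, here is a minimal sketch of a Pod consuming a ConfigMap and a Secret as environment variables. All names and values are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info            # non-sensitive settings live in ConfigMaps
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
stringData:
  API_KEY: changeme          # sensitive values live in Secrets
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo $LOG_LEVEL && sleep 3600"]
    envFrom:                 # inject both objects as environment variables
    - configMapRef:
        name: app-config
    - secretRef:
        name: app-secret
```

In production you would wrap the Pod in a Deployment and expose it with a Service, but the config-injection pattern stays the same.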
Ask your engineers how much time Kubernetes upgrades take each year and you’ll get a guess. Ask your incident history and calendars, and you get a different story. This piece looks at upgrades as an economics problem: where senior headcount goes, what gets delayed, and when it makes sense to stop owning all of that in‑house. Link in first comment. #Kubernetes #DevOps #TechStrategy