Day 47 of #100DaysOfDevOps

My first instinct was to combine everything into one YAML file and call it done. Kubernetes had other plans.

The task: deploy Grafana on a K8s cluster and expose it externally via a NodePort service on port 32000.

Sounds straightforward — until you realize a Deployment and a Service aren't just "two sections in a file." They're two completely separate contracts. The Deployment's only job is to keep your pods alive and match a desired state. The Service's only job is to route traffic to those pods — using label selectors as the bridge between them. Break that label chain, and your service is just shouting into the void.

Got it running with grafana/grafana:latest, container port 3000, NodePort 32000. Clean, separate YAMLs. Verified with kubectl get all and hit the UI at <node-ip>:32000.

The real lesson: in Kubernetes, separation of concerns isn't just good practice — it's the architecture. Deployments manage lifecycle. Services manage access. They don't overlap, and they shouldn't. One mislabeled selector = silent failure with zero helpful errors. Labels are load-bearing.

#100DaysOfDevOps #KodeKloud #Kubernetes #DevOps #CloudNative #K8s
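For anyone following along, here is a minimal sketch of the two manifests. The object names and the app: grafana label are my own placeholders, not the exact task values; the load-bearing part is that the Service's selector matches the pod template's labels.

# grafana-deployment.yaml (names and labels are illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana        # must match the pod template labels below
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:latest
        ports:
        - containerPort: 3000

# grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana-service
spec:
  type: NodePort
  selector:
    app: grafana          # the label chain: break this and traffic goes nowhere
  ports:
  - port: 3000
    targetPort: 3000
    nodePort: 32000

Two files, two contracts: the Deployment never mentions traffic, the Service never mentions replicas.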
Kubernetes Deployments and Services: Separation of Concerns
More Relevant Posts
-
Day 44 of #100DaysOfDevOps

I deployed a Kubernetes pod, got the YAML "right" — and still failed the task.

We needed a pod that prints environment variables on startup — GREETING, COMPANY, and GROUP — using a bash image with a one-shot command and restartPolicy: Never. Simple enough. I wrote the manifest, triple-checked the env values, the command syntax, the restart policy. Looked clean. It wasn't.

The pod name was print-envars-greeting. And I'd used the same name for the container too — out of habit. The requirement said the container name should be print-env-container. One field, buried in spec, completely overlooked. Changed it, reapplied, done.

spec:
  restartPolicy: Never
  containers:
  - name: print-env-container
    image: bash
    command: ["/bin/sh", "-c", 'echo "$GREETING $COMPANY $GROUP"']
    env:
    - name: GREETING
      value: "Welcome to"
    - name: COMPANY
      value: "xFusionCorp"
    - name: GROUP
      value: "Group"

Kubernetes doesn't care that your logic is right. It cares that your spec matches exactly. In real clusters, a wrong container name breaks log queries, metric scraping, and sidecar injection — silent failures that surface at the worst time.

Read the requirements like a contract. Every field is a clause.

#Kubernetes #DevOps #KodeKloud #CloudEngineering #LearningInPublic
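If you're reproducing this, verification is just apply and read the logs. The filename is my guess; the pod name comes from the post:

kubectl apply -f print-envars-greeting.yaml   # filename assumed
kubectl logs print-envars-greeting            # expect: Welcome to xFusionCorp Group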
-
The Kubernetes mistake that wastes 30 minutes every day: using a single kubeconfig file for multiple clusters. 😅

Here's what happens:
→ You run a command
→ Wrong cluster responds
→ You panic and double-check everything
→ Repeat 10 times a day

The fix is simple (sketch below):
- Separate config files (config.dev, config.uat, config.prod)
- Set KUBECONFIG to auto-merge them
- Use kubectl config get-contexts to verify ✅
- Switch contexts explicitly: kubectl config use-context admin-dev

Bonus: Install k9s for visual cluster management 🎯

Now I know exactly which cluster I'm working with. Time saved per week: ~2.5 hours ⏰ Accidental production changes: 0 🛡️

What's your approach to managing multiple clusters? Would love to hear what's working for you. 💬

#Kubernetes #DevOps #BestPractices
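For reference, a minimal shell setup, assuming the three config files live under ~/.kube/ (paths and the admin-dev context name are taken from the post; adjust to your layout):

# ~/.bashrc or ~/.zshrc: kubectl merges every file on the KUBECONFIG path
export KUBECONFIG=~/.kube/config.dev:~/.kube/config.uat:~/.kube/config.prod

kubectl config get-contexts           # list all merged contexts, * marks current
kubectl config use-context admin-dev  # switch explicitly
kubectl config current-context        # sanity-check before anything destructive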
-
Most engineers use Kubernetes daily, but cannot explain what happens when you run kubectl apply. Let's break it down.

When you apply a manifest, here is what actually happens:
- kubectl sends the YAML to the kube-apiserver.
- The API server validates the manifest, authenticates your request, and writes the desired state to etcd.
- The scheduler watches for pods that are not assigned to any node. It scores available nodes, picks the best fit, and binds the pod to that node.
- The kubelet on that node sees the new pod assignment. It pulls the image, starts the container, and reports status back.
- kube-proxy watches for service and endpoint changes. It updates iptables or IPVS rules so traffic can reach your pod.

Here is the interesting part:
- The entire system is event-driven. No component polls others.
- Each component watches the API server for changes and reacts only when needed.

This is why Kubernetes is called a "desired state" system. You declare what you want. Kubernetes works in the background to make it happen.

We break down the complete Kubernetes architecture and show what really happens behind the scenes when you run kubectl apply.

𝗥𝗲𝗮𝗱 𝗶𝘁 𝗵𝗲𝗿𝗲: https://lnkd.in/gSB2GyXp

#kubernetes #devops
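If you want to watch that chain fire in real time, standard kubectl is enough (deployment.yaml stands in for any manifest):

kubectl apply -f deployment.yaml
kubectl get events --watch          # scheduler binding, image pull, container start
kubectl get pods -o wide --watch    # Pending -> ContainerCreating -> Running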
-
"It runs on my machine" doesn't help when production goes down at 3 AM. A robust backend is only as good as the infrastructure it runs on. At Pyrvex, we treat Kubernetes not just as a host, but as a core part of our application architecture. I just published a new piece on how we harden K8s clusters for mission-critical workloads: 🛡️ Zero-downtime deployment strategies that actually work 🛡️ Graceful degradation when dependent services fail 🛡️ Why observability needs to be baked into your deployment pipeline Read the full breakdown here: [Link] How is your team handling K8s deployments right now? GitOps? Helm? Let me know below. #kubernetes #devops #sre #cloudnative
-
🔒 Immutable ConfigMaps & Secrets — an underrated Kubernetes feature worth knowing.

Most teams leave ConfigMaps and Secrets mutable by default. That's fine — until it isn't.

Here's what happens with mutable configs:
→ A change to a ConfigMap propagates to mounted volumes eventually, but env vars only refresh on pod restart
→ Pods running different replicas can observe different data during the rollout window
→ Every kubelet keeps a live watch on the API server for every ConfigMap it consumes — real overhead at scale

Set 𝗶𝗺𝗺𝘂𝘁𝗮𝗯𝗹𝗲: 𝘁𝗿𝘂𝗲 and you get the opposite:
✅ Data is sealed at creation — the API server rejects any edits
✅ All pods are guaranteed to read identical data
✅ Kubelets stop watching — real relief at 100s of nodes

Applies to both ConfigMap and Secret. GA since Kubernetes 1.21 — and we were out here manually restarting pods and questioning our life choices the whole time. 💀

📌 𝗢𝗻𝗲 𝗰𝗮𝘃𝗲𝗮𝘁 𝘁𝗼 𝗸𝗻𝗼𝘄 𝘂𝗽𝗳𝗿𝗼𝗻𝘁: Immutability is a one-way door. You cannot flip 𝗶𝗺𝗺𝘂𝘁𝗮𝗯𝗹𝗲: 𝘁𝗿𝘂𝗲 back to 𝗳𝗮𝗹𝘀𝗲. To update, you create a new ConfigMap, update your Deployment to reference it, let the rolling update complete, then delete the old one.

That sounds like extra work — and it is, slightly 🥲. But it forces explicit, auditable config changes. No more "who changed this configmap/secret and when?" 😄

#Kubernetes #CloudNative #SRE #DevOps #CNCF #K8s #PlatformEngineering
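A minimal sketch of the field itself: immutable sits at the top level of the object, not under data (the name and key here are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v2    # versioned name, since every update means a new object
data:
  DB_HOST: db.internal   # illustrative value
immutable: true          # sealed: the API server rejects any edits to data

To roll a change: create app-config-v3, point the Deployment at it, let the rollout finish, then delete v2.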
-
You know how containers work. You've written a Deployment or two. But when your org says "we're moving to OpenShift" — what does that actually mean for your day-to-day? Here's the real breakdown 🧵

🔷 Kubernetes
— Pure orchestration — you bring your own everything
— RBAC is manual and explicit
— No built-in image registry
— CI/CD is your responsibility

🔶 OpenShift
— K8s core + Red Hat's opinionated platform layer
— SCCs replace PodSecurityPolicies — stricter by default
— Internal registry + ImageStream built in
— Tekton pipelines + Operator framework included

What you'll actually feel day-to-day:
✅ Running containers as root is blocked by default in OpenShift. Your Dockerfiles need to drop root — this breaks a surprising amount of off-the-shelf images.
✅ oc vs kubectl — OpenShift's CLI is a superset. Most kubectl commands still work, but oc new-app and oc rollout add workflow shortcuts.
✅ Routes in OpenShift replace vanilla Ingress objects — same idea, different spec. If you have Helm charts with kind: Ingress, expect to refactor (sketch below).
✅ Operators are first-class citizens in OpenShift. OperatorHub is the primary install mechanism for stateful workloads — Postgres, Kafka, Elasticsearch, etc.
✅ OpenShift uses Projects instead of Namespaces (still namespaces under the hood, but with extra quota and policy metadata attached).

When to choose which:
→ Kubernetes: maximum flexibility, control your own stack, team can manage it.
→ OpenShift: compliance, RBAC, and integrated tooling matter more than customization. It trades flexibility for guardrails and vendor support.

Neither is wrong — it's a tradeoff.

#Kubernetes #OpenShift #DevOps #Containers #CloudNative #PlatformEngineering #RedHat #cloudcomputing #open_to_work #devops
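On the Routes point, here is roughly what the target of that refactor looks like: a minimal Route sketch, where the service name, host, and port are placeholders:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
spec:
  host: my-app.apps.example.com   # optional; OpenShift generates one if omitted
  to:
    kind: Service
    name: my-app                  # the Service this route fronts
  port:
    targetPort: 8080
  tls:
    termination: edge             # TLS terminates at the router

Same job as an Ingress rule, but it's a different API group, so Helm templates need a conditional or a rewrite.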
-
Dear Kubernetes expert,

Yes, we all know Deployments manage ReplicaSets. But when was the last time you directly interacted with a ReplicaSet?

A ReplicaSet ensures a specified number of pod replicas are running at any given time. But here's the catch: most of us never create them manually, because Deployments abstract them away.

So why should you care? 👇
🔹 Debugging: Ever noticed orphaned ReplicaSets lingering after updates? Understanding their lifecycle is key.
🔹 Custom Controllers: Building advanced patterns like canary or blue/green deploys sometimes requires ReplicaSet-level control.
🔹 Rollbacks: They rely on ReplicaSets, especially when tracking Deployment history.
🔹 Fine-grained Management: Need precise control over selector behavior? ReplicaSets give you that flexibility; Deployments enforce immutability of selectors for stability.

Tip: Want to observe how a Deployment handles ReplicaSets? Run a rollout and watch new ReplicaSets get created while old ones scale down (commands below).

______________________________________________________
🔁 If you found this useful, repost to help others find it; sharing is caring.
👨‍💻 Tag someone learning anything and everything Cloud-Native, Kubernetes & MLOps.
💾 Save this post for future reference.
I post daily insights here, and break things down deeper in my weekly newsletter. Subscribe to stay updated.
______________________________________________________

#Kubernetes #DevOps #CloudNative #ReplicaSet #K8sTips #SRE #hellodeolu #learnin
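Following that tip, a quick way to watch it happen (the deployment name my-app is a placeholder):

kubectl rollout restart deployment/my-app     # trigger a fresh rollout
kubectl get replicasets --watch               # new RS scales up, old RS scales down
kubectl rollout history deployment/my-app     # old ReplicaSets back each revision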
-
A production service was failing to connect to the database. We updated the ConfigMap with the correct DB_HOST. Pods kept using the old value.

Why? The ConfigMap was injected as environment variables. Kubernetes injects env vars only at container startup. Updating the ConfigMap did nothing until we restarted the pods.

Takeaway: ConfigMaps don't reload. Env vars are immutable at runtime.

#InfraDecode #Kubernetes #DevOps
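For context, the pattern that bites you looks like this (all names are illustrative), and the fix is a restart so containers re-read the environment:

# Deployment fragment: env is snapshotted from the ConfigMap at container start
containers:
- name: api
  image: my-api:1.4          # illustrative
  envFrom:
  - configMapRef:
      name: db-config        # holds DB_HOST

# After editing db-config, pods must be recreated to pick up the new value:
#   kubectl rollout restart deployment/api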