🚀 Day 26-28 of #90DaysOfDevOps — Mastering Kubernetes Persistent Storage (PV & PVC)

Today I worked on one of the most important real-world Kubernetes concepts: Persistent Volumes (PV) and Persistent Volume Claims (PVC) — and this is where Kubernetes truly starts feeling like production engineering.

🔍 Problem I Explored
Containers are ephemeral. I created a Pod using emptyDir and wrote timestamps to a file. Deleted the Pod → recreated it → data was gone ❌
👉 Lesson: emptyDir is tied to the Pod lifecycle → not suitable for databases or stateful applications.

💡 Solution: Persistent Storage with PV & PVC
Implemented:
• PersistentVolume (PV) → the actual storage resource
• PersistentVolumeClaim (PVC) → the request abstraction
📌 Flow: Pod → PVC → PV → Physical Storage

⚙️ What I Practiced

✅ Static Provisioning
• Created a PV manually (hostPath)
• Created a PVC → successfully bound
• Mounted the PVC in a Pod → data persisted even after Pod deletion ✅

🚀 Dynamic Provisioning (real-world scenario)
• Used the default StorageClass (standard)
• Created ONLY the PVC → Kubernetes auto-created the PV
• Learned about:
   provisioner: rancher.io/local-path
   reclaimPolicy: Delete
   volumeBindingMode: WaitForFirstConsumer
👉 Key insight: the PV is created only when a Pod actually uses the PVC.

🔥 Major Debugging Moment
The PVC was stuck in Pending.
Root cause: ❌ StorageClass mismatch
Fix: storageClassName: ""
👉 Key learning: Kubernetes matches a PV and PVC based on:
• Storage capacity
• Access modes
• StorageClass
(not by name or path)

🧠 Reclaim Policy (Critical Concept)
After cleanup:
• Dynamic PV → ❌ auto-deleted (Delete)
• Static PV → ✅ retained (Released)
👉 Important for:
• Data safety (Retain)
• Automation & cost efficiency (Delete)

📊 Key Takeaways
✔️ Containers are ephemeral
✔️ PVC abstracts storage
✔️ PV provides the actual storage
✔️ Dynamic provisioning is the default in most clusters
✔️ Reclaim policies control the data lifecycle

💻 GitHub Repo (hands-on implementation): https://lnkd.in/dQdrNNEd

This was one of the most practical DevOps learnings so far — felt like working on real infrastructure 🔥

#Kubernetes #DevOps #CloudComputing #Containers #Docker #90DaysOfDevOps #DevOpsKaJosh #TrainWithShubhams
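A minimal sketch of the static-provisioning setup described above — the resource names, capacity, and hostPath are illustrative, not taken from the linked repo:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv                    # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # static PVs are kept (Released) after the PVC is deleted
  storageClassName: ""             # empty string opts out of dynamic provisioning
  hostPath:
    path: /mnt/data                # node-local path; fine for a lab, not for production
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc                   # illustrative name
spec:
  accessModes:
    - ReadWriteOnce                # must be satisfiable by the PV
  storageClassName: ""             # must match the PV's class, or the PVC stays Pending
  resources:
    requests:
      storage: 1Gi                 # must fit within the PV's capacity
```

Note how the binding criteria from the post (capacity, access modes, StorageClass) each appear explicitly — a mismatch in any of them reproduces the Pending-PVC debugging moment described above.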
✨ 𝗗𝗮𝘆 𝟬𝟮 – 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲 & 𝗛𝗼𝘄 𝗜𝘁 𝗔𝗰𝘁𝘂𝗮𝗹𝗹𝘆 𝗪𝗼𝗿𝗸𝘀 ☸️

Today I went beyond basic concepts and explored how Kubernetes actually works internally. Understanding the architecture made things much clearer — it’s not just a tool, it’s a system constantly working to maintain stability.

🔹 𝗖𝗼𝗻𝘁𝗿𝗼𝗹 𝗣𝗹𝗮𝗻𝗲 (𝗠𝗮𝘀𝘁𝗲𝗿)
• The brain of Kubernetes
• Manages the entire cluster
• Handles scheduling, scaling, and maintaining system state

🔹 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 𝗔𝗣𝗜 (𝗔𝗣𝗜 𝗦𝗲𝗿𝘃𝗲𝗿)
• Entry point for all cluster communication
• Every request (kubectl / tools) goes through it
• Validates and processes configurations

🔹 𝗣𝗼𝗱
• Smallest deployable unit
• Runs one or more containers
• Represents your actual application

🔹 𝗥𝗲𝗽𝗹𝗶𝗰𝗮𝗦𝗲𝘁
• Ensures the desired number of pods is always running
• Automatically replaces failed pods
• Provides high availability

🔹 𝗛𝗼𝘄 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 𝗪𝗼𝗿𝗸𝘀 (𝗕𝗮𝘀𝗶𝗰 𝗙𝗹𝗼𝘄)
• You define the desired state (e.g., 3 replicas) in YAML
• The request is sent to the API Server
• The configuration is stored in etcd (the cluster database)
• The Scheduler assigns pods to the best available nodes
• The Controller Manager ensures the desired state is maintained
• The kubelet on each node makes sure containers are running properly
• The container runtime runs the actual containers
• The ReplicaSet recreates pods if they fail
• A continuous reconciliation loop ensures: Desired State = Actual State

🔹 𝗞𝗲𝘆 𝗨𝗻𝗱𝗲𝗿𝘀𝘁𝗮𝗻𝗱𝗶𝗻𝗴
Kubernetes follows a declarative approach — you tell it what you want, not how to do it.

💡 𝗧𝗼𝗱𝗮𝘆’𝘀 𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆
Kubernetes is a self-healing system — it constantly monitors, adjusts, and ensures your applications run reliably without manual intervention.

#Docker #Kubernetes #Containerization #DevOps #CloudComputing #LearningInPublic #TechLearning #DevOpsJourney #SoftwareEngineering
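The "desired state in YAML" step in the flow above could be a minimal Deployment like this (the name and image are illustrative, chosen only to make the example concrete):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # illustrative name
spec:
  replicas: 3                # the desired state the reconciliation loop enforces
  selector:
    matchLabels:
      app: web
  template:                  # pod template the managed ReplicaSet stamps out
    metadata:
      labels:
        app: web             # must match .spec.selector
    spec:
      containers:
        - name: web
          image: nginx:1.27  # example image
          ports:
            - containerPort: 80
```

Delete one of the three pods and the ReplicaSet created by this Deployment immediately starts a replacement — the reconciliation loop from the post, observable with `kubectl get pods -w`.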
🚀 Kubernetes looks simple — until you type `kubectl apply`.

👨💻 Imagine a typical scenario. An engineer prepares a new version of an online service. They update the Docker image, modify a ConfigMap or Secret, and execute a familiar command:

`kubectl apply -f deployment.yaml`

From their perspective, it’s just another routine change. The service updates, and everything continues to run smoothly. But inside the Kubernetes cluster, a sophisticated architecture comes to life.

🔹 The request first reaches the API Server — the central entry point of the cluster.
🔹 The desired state is persisted in etcd, Kubernetes’ source of truth.
🔹 The Scheduler selects the most suitable node for the workload.
🔹 The Controller Manager ensures that the actual state matches the desired state.
🔹 The kubelet on the selected node pulls the container image and starts the application.
🔹 ConfigMaps and Secrets are injected into the running environment.

⚡ All of this happens within seconds, often unnoticed by the engineer initiating the change.

🧠 This is where Architectural Thinking becomes essential. Understanding how these components interact allows engineers to design more resilient, scalable, and reliable systems. It transforms Kubernetes from a simple operational tool into a strategic architectural platform.

🎯 Architectural Thinking is not about running commands — it’s about understanding the systems behind them.

#Kubernetes #SoftwareArchitecture #ArchitecturalThinking #DevOps #CloudNative #PlatformEngineering
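The ConfigMap-injection step at the end of the list can be sketched like this (the names, key, and image are hypothetical, used only for illustration):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config           # hypothetical name
data:
  LOG_LEVEL: "info"          # example configuration key
---
# Excerpt from the Deployment's pod template: envFrom turns every key
# in the ConfigMap into an environment variable inside the container.
#
#   containers:
#     - name: app
#       image: registry.example.com/app:v2   # the "new version" being rolled out
#       envFrom:
#         - configMapRef:
#             name: app-config
```

One caveat worth knowing: environment variables injected this way are read at container start, so changing the ConfigMap does not affect already-running pods until they restart.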
For advanced Kubernetes operations, here are some commands that go beyond basic resource management:

• kubectl debug: creates an ephemeral container for troubleshooting a running pod without restarting it (e.g., kubectl debug -it <pod-name> --image=busybox)
• kubectl port-forward: maps a local port to a port inside a pod, useful for accessing services not exposed publicly (e.g., kubectl port-forward <pod-name> 8080:80)
• kubectl rollout undo: reverts a deployment to its previous revision if an update fails (e.g., kubectl rollout undo deployment/<name>)
• kubectl get events --sort-by='.metadata.creationTimestamp': lists cluster events ordered by time to help identify issues chronologically
• kubectl patch: updates specific fields of a resource in place without needing a full YAML rewrite (e.g., kubectl patch deployment <name> -p '{"spec":{"replicas":3}}')
• kubectl cordon / kubectl uncordon: marks a node as unschedulable (cordon) or re-enables scheduling (uncordon) for maintenance
• kubectl explain <resource>: provides documentation and field descriptions for specific K8s resources directly in your terminal (e.g., kubectl explain pod.spec)

#kubernetes #advanced #DevOps #SRE
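The kubectl patch example above can also be driven from a file instead of an inline JSON string; a minimal strategic-merge patch might look like this (the file name is illustrative):

```yaml
# replicas-patch.yaml — apply with:
#   kubectl patch deployment <name> --patch-file replicas-patch.yaml
#
# Only the fields listed here are changed; everything else in the
# Deployment spec is left untouched by the strategic merge.
spec:
  replicas: 3
```

File-based patches are easier to review and version-control than inline `-p` strings, which helps when the patch grows beyond a single field.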
I built a self-healing container orchestration system from scratch.

Docker changed how we ship code. Lightweight isolation, reproducible environments, and true portability — it completely reshaped modern infrastructure.

Why Docker matters:
• Lightweight isolation using namespaces and cgroups
• Consistent environments from development to production
• Portable across any infrastructure
• Strong security primitives like capability dropping, seccomp, and read-only filesystems

But Docker alone isn’t enough. To run production systems, you also need:
📈 Auto-scaling — to handle traffic spikes automatically
⚖️ Load balancing — to distribute requests efficiently
🔁 Self-healing — because containers will fail, and something must recover them

The project: I built a custom orchestration system where a Rust application runs on a manager node, coordinating worker nodes and maintaining system health in real time.

The stack:
📊 cAdvisor — container-level resource monitoring
🔁 Jenkins — CI/CD automation
🌐 Traefik — dynamic reverse proxy and load balancing
📈 Grafana + Prometheus — real-time metrics and observability

The brain: a Rust service running on the manager node that:
• Consumes metrics from Prometheus and cAdvisor
• Makes scaling decisions based on real-time load
• Keeps worker nodes synchronized
• Automatically spins up containers during traffic spikes
• Detects unhealthy instances and replaces them seamlessly

Why Rust? Because performance and reliability matter. Rust provides memory safety without a garbage collector, ensuring predictable behavior when making real-time orchestration decisions.

And this is what makes DevOps exciting to me — building the systems behind the scenes that keep everything running smoothly. Containers scaling, traffic balancing, metrics flowing — no magic, just solid engineering.

The takeaway: you don’t always need Kubernetes. Sometimes, the right combination of tools — and the ability to connect them — is more than enough.
🔧 Tools: Docker | Rust | cAdvisor | Jenkins | Traefik | Grafana | Prometheus #DevOps #Docker #RustLang #ContainerOrchestration #AutoScaling #SysAdmin #PlatformEngineering #InfrastructureAsCode #Monitoring #Prometheus #Grafana #Traefik #Containers #SelfHealingInfrastructure
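As a sketch of how the metrics pipeline above could be wired, here is a Prometheus scrape job for cAdvisor. This is an assumption about the setup, not the author's actual configuration; the job name and target address are hypothetical (8080 is cAdvisor's default port):

```yaml
# prometheus.yml fragment (sketch) — scrapes container-level metrics
# from cAdvisor so the manager service can query them via Prometheus.
scrape_configs:
  - job_name: cadvisor          # hypothetical job name
    scrape_interval: 5s         # tight interval for near-real-time scaling decisions
    static_configs:
      - targets: ["cadvisor:8080"]   # hostname/port are assumptions; adjust to your deployment
```

With data flowing in, the Rust service described above would only need to issue Prometheus queries (e.g., per-container CPU rate) and compare the results against its scaling thresholds.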
Most people use Kubernetes. Very few actually understand what’s happening under the hood. Here’s a simple breakdown of what this architecture diagram is really showing 👇

At the center, you have the Control Plane — the brain of Kubernetes. This is where decisions are made.
• API Server → the entry point. Every request (kubectl, CI/CD, UI) goes through this.
• Scheduler → decides where your pods should run based on resources and constraints.
• Controller Manager → constantly checks “desired state vs actual state” and fixes gaps.
• etcd → the database. Stores the entire cluster state. If this is gone, your cluster memory is gone.

Then come the Worker Nodes — where the real work happens. Each node contains:
• Kubelet → talks to the control plane and ensures containers are running as expected
• Container Runtime → actually runs containers (Docker / containerd)
• Kube Proxy → handles networking and service communication

Now here’s the part beginners ignore: Kubernetes is not about containers. It’s about desired-state reconciliation. You don’t tell Kubernetes how to run things. You tell it what you want, and it keeps trying until reality matches that.

That’s why:
• Pods restart automatically
• Scaling happens without manual intervention
• Failures don’t require panic

But here’s the uncomfortable truth: if you don’t understand this flow, you’re just memorizing commands — not building systems. And that’s exactly why most “Kubernetes learners” get stuck at tutorials.

Real skill = understanding: Control Plane → Node → Pod → Networking → Self-healing loop

If this diagram finally makes sense to you, you’re no longer a beginner. You’re starting to think like a systems engineer.

#Kubernetes #DevOps #CloudComputing #Containers #SystemDesign #LearningInPublic
Kubernetes stops feeling complex the moment you stop looking at it as a collection of commands and start seeing it as a system that continuously reconciles reality with intention.

Over the past weeks I’ve been exploring Kubernetes (k8s) from the inside out, not just running workloads but understanding why the platform behaves the way it does.

At its core, Kubernetes is not about containers. Containers are just the outcome. The real power lies in its architecture:
• You declare a desired state.
• The control plane observes.
• Controllers compare expectation vs reality.
• The scheduler decides placement.
• The kubelet executes and maintains workloads on nodes.
Everything operates as a continuous feedback loop.

Working through a single-node cluster using Minikube and Docker made this especially clear. When you create a Deployment you are not starting containers, you are defining intent. Kubernetes then distributes responsibility across its components to make that intent real, and to keep it real even when failures occur.

Some insights that stand out:
• Pods are ephemeral execution units, not long-lived servers
• Deployments are state definitions, not runtime processes
• Services abstract instability, not just networking
• Namespaces introduce logical isolation rather than infrastructure separation
• Imperative commands help exploration, but declarative configuration defines reliability

The shift from imperative to declarative thinking is where Kubernetes truly clicks. Instead of managing systems step-by-step, you design outcomes and let the platform enforce consistency.

Kubernetes is not just orchestration, it’s automated operational reasoning encoded into software.

#Kubernetes #DevOps #Containerization #SoftwareEngineering #InfrastructureAsCode #CloudComputing #Docker #K8s #Automation #OpenSource
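"Services abstract instability" can be made concrete with a minimal Service manifest (names and ports are illustrative): clients talk to the stable Service, while the set of pods behind it churns freely.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                # stable name clients resolve, regardless of pod churn
spec:
  selector:
    app: web               # traffic goes to whichever pods currently carry this label
  ports:
    - port: 80             # port the Service exposes inside the cluster
      targetPort: 8080     # port the container actually listens on
```

Because the Service routes by label selector rather than by pod identity, pods can be rescheduled, replaced, or scaled without any client-side change — exactly the "abstracting instability" idea above.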
Dear Kubernetes expert,

Yes, we all know Deployments manage ReplicaSets. But when was the last time you directly interacted with a ReplicaSet?

A ReplicaSet ensures a specified number of pod replicas are running at any given time. But here’s the catch: most of us never create them manually because Deployments abstract them away.

So why should you care? 👇
🔹 Debugging: Ever noticed orphaned ReplicaSets lingering after updates? Understanding their lifecycle is key.
🔹 Custom Controllers: Building advanced patterns like canary or blue/green deploys sometimes requires ReplicaSet-level control.
🔹 Rollbacks: They rely on ReplicaSets, especially when tracking Deployment history.
🔹 Fine-grained Management: Need precise control over selector behavior? ReplicaSets give you that flexibility; Deployments enforce immutability of selectors for stability.

Tip: Want to observe how a Deployment handles ReplicaSets? Run a rollout and watch new ReplicaSets get created while old ones are scaled down.

______________________________________________________
🔁 If you found this useful, repost to help others find it, sharing is caring.
👨💻 Tag someone learning anything and everything Cloud-Native, Kubernetes & MLOps.
💾 Save this post for future reference.
I post daily insights here, and break things down deeper in my weekly newsletter. Subscribe to stay updated.
______________________________________________________

#Kubernetes #DevOps #CloudNative #ReplicaSet #K8sTips #SRE #hellodeolu #learnin
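For anyone who has never written one directly, a standalone ReplicaSet manifest (names and image are illustrative) looks almost identical to the spec a Deployment generates on your behalf:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs               # illustrative; Deployments name these with a pod-template hash
spec:
  replicas: 2                # the invariant the controller maintains
  selector:
    matchLabels:
      app: web               # pods matching this are counted toward replicas
  template:
    metadata:
      labels:
        app: web             # must match the selector above
    spec:
      containers:
        - name: web
          image: nginx:1.27  # example image
```

The key difference from a Deployment: updating this image triggers no rollout — a ReplicaSet only counts pods, it does not manage transitions between pod-template versions. That orchestration is exactly what Deployments add on top.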
Most engineers use Kubernetes daily, but cannot explain what happens when you run kubectl apply. Let’s break it down.

When you apply a manifest, here is what actually happens:
- kubectl sends the YAML to the kube-apiserver
- The API server validates the manifest, authenticates your request, and writes the desired state to etcd
- The scheduler watches for pods that are not assigned to any node. It scores available nodes, picks the best fit, and binds the pod to that node
- The kubelet on that node sees the new pod assignment. It pulls the image, starts the container, and reports status back
- kube-proxy watches for service and endpoint changes. It updates iptables or IPVS rules so traffic can reach your pod

Here is the interesting part:
- The entire system is event-driven. No component polls the others.
- Each component watches the API server for changes and reacts only when needed.

This is why Kubernetes is called a "desired state" system. You declare what you want, and Kubernetes works in the background to make it happen.

We break down the complete Kubernetes architecture and show what really happens behind the scenes when you run kubectl apply.
𝗥𝗲𝗮𝗱 𝗶𝘁 𝗵𝗲𝗿𝗲: https://lnkd.in/gSB2GyXp

#kubernetes #devops