The container was dying. No error. Just gone. docker logs showed nothing useful. I restarted it. It died again. I read the Dockerfile three times. Nothing.

Then a colleague said: "check dmesg."

`dmesg | grep -i kill`

OOM killer. The kernel had killed the container because it hit its memory limit. Docker set the limit, but the kernel enforced it. And only the kernel's log had the real information.

That moment changed how I debug containers. The Docker layer is thin. The answer is almost always one layer below. cgroups enforce resource limits. Namespaces create isolation. The kernel manages both. Docker is the interface, not the implementation.

If you're only reading docker logs, you're reading the summary. The real story is in dmesg.

#Docker #Linux #DevOps #Containers #Infrastructure #SoftwareEngineering #CloudNative #opensource
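A quick way to check this yourself — a minimal sketch, assuming a recent kernel with cgroup v2 (on cgroup v1 the file is memory.limit_in_bytes instead), with <container-id> as a placeholder:

```bash
# Did the kernel OOM-kill something? The evidence lives in the kernel log.
dmesg | grep -i "killed process"

# The limit Docker asked for, as the kernel actually enforces it
# (cgroup v2 path; exact location varies by distro and cgroup driver)
cat /sys/fs/cgroup/system.slice/docker-<container-id>.scope/memory.max

# Cross-check against what Docker thinks the limit is (bytes; 0 = unlimited)
docker inspect --format '{{.HostConfig.Memory}}' <container-id>
```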
-
Docker networking is where a lot of beginner container setups quietly break. You can have two containers running on the same machine, both healthy, both reachable in isolation, and still hit connection errors when one service tries to talk to the other. The usual mistake: relying on the default bridge network and hoping container names work like hostnames. They do not.

In the latest Levelling Docker video, I walk through the networking basics that make multi-container apps click:
• Port mapping with -p, so your browser can reach a container
• The default bridge network, and why it is limited
• Custom bridge networks, where containers can resolve each other by name
• Docker's built-in DNS on custom networks
• A hands-on PostgreSQL + pgAdmin exercise wired together properly

The key idea is simple: create a custom network, put related containers on it, and connect by container name instead of chasing IP addresses. That one habit makes local Docker setups much less fragile.

Link in the comments.

#Docker #DevOps #Linux #ContainerNetworking #Tutorial
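For reference, here's roughly what that habit looks like on the command line — a sketch with illustrative names (appnet, db), not the exact commands from the video:

```bash
# Create a user-defined bridge network
docker network create appnet

# Put related containers on it
docker run -d --name db --network appnet \
  -e POSTGRES_PASSWORD=secret postgres

# Docker's built-in DNS now resolves the container name "db"
docker run --rm --network appnet alpine ping -c 1 db
```

Run that last ping against a container on the default bridge and it fails: name-based resolution is the custom-network feature.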
-
Kubernetes is not complicated. Your setup is.

At its core, Kubernetes is just an abstraction on top of tools you already know: Linux namespaces, cgroups, iptables, DNS. It bundles them up, gives them a nice API, and calls it a day. That part is not the problem.

The problem is everything you piled on top of it. The 47 admission controllers. The policy engines enforcing rules on plain YAML files. The custom operators for things that probably didn't need an operator. The internal platform that now sits between your engineers and the actual cluster. You didn't simplify ops; you just moved the complexity somewhere harder to see.

And then troubleshooting becomes a nightmare. Not because Kubernetes is bad at surfacing problems, but because you've been working at the abstraction layer for so long that you forgot what's underneath. Something breaks and instead of checking what the kernel is actually doing, you stare at Helm values and wonder why the stars aren't aligned.

The cluster isn't lying to you. Your understanding of it might be.

What's the most convoluted thing you've seen someone do to a Kubernetes setup? I'm collecting horror stories 👻
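One concrete way to drop back down a layer — a sketch assuming kube-proxy in its default iptables mode (chain names differ under IPVS or eBPF dataplanes), with my-app as a hypothetical process name:

```bash
# What did kube-proxy actually program into the kernel for Services?
sudo iptables -t nat -L KUBE-SERVICES -n | head

# Which cgroup is a pod's process really in?
PID=$(pgrep -f my-app | head -1)
cat /proc/$PID/cgroup
```

Neither command cares about your Helm values. That's the point.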
-
Working code and production-ready code are not the same thing.

I wrote a Bash disk monitoring script. It ran. It logged. It alerted. Technically correct. Then I did a senior-engineer-style code review on it. Five gaps found in under 5 minutes:

• Stale timestamps: captured once at startup, not per log entry
• No log rotation: a disk monitor that fills your disk is not a disk monitor
• No dependency checks: fails cryptically on minimal servers
• Inconsistent log formatting: breaks any tool that parses those logs
• Silent on healthy systems: looks broken when it is working fine

None of these are bugs. All of them matter in production. Now I fix it.

#DevOps #Linux #BashScripting
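For the curious, a minimal sketch of what fixing a few of those gaps can look like — illustrative, not the actual script:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Dependency check: fail loudly on minimal servers
for cmd in df awk; do
  command -v "$cmd" >/dev/null 2>&1 || { echo "missing dependency: $cmd" >&2; exit 1; }
done

# Timestamp captured per log entry, in one consistent parseable format
log() {
  printf '%s [disk-monitor] %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1"
}

# Heartbeat: log even when everything is healthy
usage=$(df --output=pcent / | tail -1 | tr -dc '0-9')
log "root filesystem usage: ${usage}%"
```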
-
Day 32 – Docker Volumes & Networking

Today I understood two critical real-world problems:
👉 Why containers lose data
👉 How containers communicate with each other

💥 What I did today:
• Ran a database container and created data
• Deleted the container… and the data was gone
• Fixed it using named volumes → data persisted even after container removal
• Explored bind mounts → real-time file changes from host to container
• Learned Docker networking basics (bridge vs custom network)
• Created a custom network and saw containers communicate using names instead of IPs

🧠 Key Takeaways:
• Containers are ephemeral = use volumes for persistence
• Named volumes = managed by Docker
• Bind mounts = direct connection to host files
• Default bridge network has limitations
• Custom networks enable service discovery (name-based communication)

#90DaysOfDevOps #DevOpsKaJosh #TrainWithShubham #Docker #DevOps #CloudComputing #Containers #BackendDevelopment #LearningInPublic #TechJourney #SoftwareEngineering #Linux #OpenSource #BuildInPublic
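If you want to reproduce the volume experiment, a minimal sketch (names like pgdata are illustrative):

```bash
# Named volume: managed by Docker, survives container removal
docker volume create pgdata
docker run -d --name db -v pgdata:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secret postgres

docker rm -f db          # container gone...
docker run -d --name db2 -v pgdata:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=secret postgres    # ...same data, new container
```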
-
Here's the mental model I wish I had when I started learning Docker. Not a diagram. A question.

The question is: what changed?

When you run a container, Linux creates a new set of answers to certain questions: What processes exist? What filesystems are mounted? What is the hostname? Before the container, those questions had the host's answers. After docker run, the process inside gets different answers. Same kernel. Different questions, different answers. The namespace is the mechanism that changes which answer the kernel returns.

cgroups are completely separate. They don't change what the container sees. They change what the container is allowed to consume.

Namespaces hide. cgroups limit.

Most Docker mental models collapse these into one thing called "isolation." They're not the same, and knowing which one you need for which problem changes how you write Dockerfiles, configure orchestrators, and think about security.

Not tutorials. Just a real picture.

#Docker #Linux #DevOps #Containers #Infrastructure #CloudNative #SoftwareEngineering #MentalModel #OpenSource
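You can watch both mechanisms separately, without Docker — a sketch assuming util-linux's unshare and a cgroup v2 system:

```bash
# Namespaces: new PID and UTS namespaces change the kernel's answers
sudo unshare --pid --fork --mount-proc --uts bash -c \
  'hostname sandbox; hostname; ps aux'   # new hostname, only ~2 processes visible

# cgroups: same view, smaller budget (cgroup v2 interface)
sudo mkdir /sys/fs/cgroup/demo
echo $((64 * 1024 * 1024)) | sudo tee /sys/fs/cgroup/demo/memory.max
```

The first block hides; the second limits. docker run does both.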
-
𝗜 𝗯𝘂𝗶𝗹𝘁 𝘁𝗵𝗲 𝗺𝗼𝗻𝗶𝘁𝗼𝗿𝗶𝗻𝗴. 𝗧𝗵𝗲𝗻 𝗜 𝗯𝗿𝗼𝗸𝗲 𝘀𝗼𝗺𝗲𝘁𝗵𝗶𝗻𝗴 𝗼𝗻 𝗽𝘂𝗿𝗽𝗼𝘀𝗲 𝘁𝗼 𝘀𝗲𝗲 𝗶𝗳 𝗶𝘁 𝘄𝗼𝗿𝗸𝗲𝗱. (𝗣𝗮𝗿𝘁 𝟮 𝗼𝗳 𝟮 | 𝗣𝗮𝗿𝘁 𝟭 𝗰𝗼𝘃𝗲𝗿𝘀 𝘁𝗵𝗲 𝗯𝘂𝗶𝗹𝗱 𝗶𝗳 𝘆𝗼𝘂 𝗺𝗶𝘀𝘀𝗲𝗱 𝗶𝘁.)

Uptime Kuma is watching Proxmox, both VMs, and every service running on THE MONOLITH. I shut down the Linux VM deliberately. Within seconds: red on the dashboard, Telegram notification fired.

This is the part tutorials skip. Setting up monitoring is one thing. Knowing it actually works when something goes down is another. You don't find out by reading about it. In production, this is the difference between learning a server is down from your monitoring system and learning it from a client.

Phase 3 is K3s. I've never touched Kubernetes. THE MONOLITH is about to become a cluster. Let's find out what breaks first.

#docker #devops #homelab #monitoring #uptimekuma #learninginpublic
-
Day 4 of my DevOps Journey 🚀

6 months ago I didn't know what Nginx was. Today I ran a full troubleshooting runbook on it. 🙌

If you're new to Linux and DevOps, here's the order I follow to verify a web server is healthy:
1️⃣ Check the OS & kernel — know your environment
2️⃣ Check CPU — top (99.9% idle = good)
3️⃣ Check memory — free -h (3.2 GB free = good)
4️⃣ Check disk — df -h (2% used = good)
5️⃣ Check networking — ss -tuln (port 80 listening = good)
6️⃣ Verify the service — curl http://localhost (200 OK = nginx is live)
7️⃣ Read logs — journalctl -u nginx (understand service history)

No bootcamp. No course. Just hands-on practice on a real Ubuntu VM. If I can learn this, so can you. 💪

#Linux #Nginx #DevOps #Beginners #LearningInPublic #Ubuntu #SysAdmin #CareerChange #DevOpsEngineer #90DaysOfDevOps #DevOpsKaJosh #TrainWithShubham
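The same runbook as a copy-paste block, for anyone following along (the healthy values above are from my VM; yours will differ):

```bash
uname -a                               # 1. OS & kernel
top -bn1 | head -5                     # 2. CPU snapshot
free -h                                # 3. memory
df -h                                  # 4. disk
ss -tuln | grep ':80'                  # 5. is port 80 listening?
curl -sI http://localhost | head -1    # 6. expect HTTP/1.1 200 OK
journalctl -u nginx --no-pager -n 20   # 7. recent service history
```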
-
🚀 Load Balancing in Kubernetes Using Round Robin!

One of the core concepts in distributed systems — and I just explored it hands-on! 🔁

Here's a quick breakdown:
✅ Round robin distributes incoming requests sequentially across multiple servers/pods — ensuring no single server gets overwhelmed.

⚙️ How it works in Kubernetes:
• A Kubernetes Service (ClusterIP) acts as the entry point
• kube-proxy spreads traffic across all available pods — true round robin in IPVS mode; the default iptables mode picks a backend at random per connection, which evens out over many requests
• Each pod handles requests in turn as the cycle repeats

🛠️ What I did:
1️⃣ Deployed 3 HTTPD pod replicas
2️⃣ Exposed them via a Kubernetes Service
3️⃣ Modified each pod's response to uniquely identify it
4️⃣ Verified round robin behavior — pod1 → pod2 → pod3 → pod1...

✔️ Special thanks to mentor Ashutosh S. Bhakare for guidance

#Kubernetes #DevOps #CloudComputing #LoadBalancing #RoundRobin #Docker #K8s #Linux #LearningInPublic
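Roughly how that experiment looks in kubectl — a sketch with illustrative names, assuming the httpd image's default docroot:

```bash
kubectl create deployment web --image=httpd --replicas=3
kubectl expose deployment web --port=80            # ClusterIP Service

# Give each pod a unique response body
for p in $(kubectl get pods -l app=web -o name); do
  kubectl exec "$p" -- sh -c "echo $p > /usr/local/apache2/htdocs/index.html"
done

# Fire a handful of requests from inside the cluster and watch them rotate
kubectl run client --rm -it --image=busybox --restart=Never -- \
  sh -c 'for i in 1 2 3 4 5 6; do wget -qO- http://web; done'
```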
-
We (me and claude 😅) published a guide on running Kubernetes on Shani OS.

Shani OS has an immutable root filesystem. Kubernetes data lives in separate Btrfs subvolumes — @containers for runtime state, @home for kubeconfigs and manifests. OS updates and cluster upgrades don't interfere with each other.

The guide covers:
— Choosing a distribution (k3s, k0s, MicroK8s, RKE2, kubeadm, Talos)
— Cilium as the CNI — eBPF networking, L7 policy, WireGuard encryption, Hubble
— NGINX Gateway Fabric and the Gateway API
— cert-manager for automatic TLS
— Longhorn, Rook-Ceph, and MinIO for storage
— ArgoCD, Flux, Kargo, and Argo Rollouts for GitOps and progressive delivery
— Prometheus, Loki, and Grafana Tempo for observability
— Kyverno, Falco, Cosign, Sealed Secrets, and ESO for security
— Velero and etcd snapshots for backup

The blog post explains the reasoning behind each choice. The full reference wiki has the complete commands, YAML, and troubleshooting tables.

https://lnkd.in/gnk9ZRhh

#Kubernetes #SelfHosting #DevOps #Linux #ShaniOS
-
I'd been lying to myself about containers for years.

I could docker run anything. Write a Dockerfile in my sleep. I could not have told you, with a straight face, what actually happens between docker run alpine and a process existing on my machine.

A fly.io take-home ~6 months back forced me to find out. Wrote it up, sat on the draft, forgot to share. Publishing now because the learning was too good to leave rotting in ~/drafts.

I built it from scratch. Pull image from S3. Unpack into a devicemapper thinpool. Snapshot the thin volume to "activate" it. Track the whole thing in SQLite. Drive it with the same FSM library that powers flyd.

Three things broke my mental model:

1. A container image is a tar of tars plus a JSON map. That's it. No runtime, no kernel feature embedded. The image is inert until something else unpacks and runs it.

2. "Activating" an image is two dmsetup calls. Zero bytes copied. A snapshot of a thin volume shares every block with the parent until somebody writes. That's how one host runs thousands of VMs without burning through its NVMe in a week.

3. The FSM isn't ceremony. My first version used goroutines and a retry loop — clean, fast, dead on reboot. Fleet-scale orchestration isn't the goroutine. It's the durable record of states the goroutine left behind. Goroutines die. SQLite rows don't.

The primitives have shipped in mainline Linux since 2011. The discipline is choosing them deliberately and refusing to add anything else.

Full write-up, including a hands-on walkthrough of pulling ubuntu:24.04 into a thinpool with dmsetup + losetup + skopeo — every output from a live SSH session: https://lnkd.in/g_kAC_vM

What's the abstraction you assumed was magic until you had to build it?

#containers #linux #devops #infrastructure #golang
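The "two dmsetup calls" in point 2, sketched — assuming a thin pool already exists at /dev/mapper/pool with an origin volume at device ID 0 (IDs and sector count are illustrative; the full setup is in the write-up):

```bash
# Fetch the image: just tars plus a JSON manifest, nothing executable about it
skopeo copy docker://ubuntu:24.04 oci:ubuntu-oci:24.04

# Call 1: ask the pool to snapshot origin device 0 as new device 1
sudo dmsetup message /dev/mapper/pool 0 "create_snap 1 0"

# Call 2: map device 1 as a block device — zero bytes copied so far
sudo dmsetup create ubuntu-rw --table "0 2097152 thin /dev/mapper/pool 1"
```

Every block is shared with the parent until the first write hits it.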