Your Container is Broken. How Do You Look Inside? [Docker Deep Dive — Day 5/5]

It is a very simple command for inspecting a running container from the inside, but it is an interviewer's favourite question. Read it once, do it once, and you will never forget it.

Your container is misbehaving in production. Logs tell you nothing. You need to board the ship and inspect it yourself. This is exactly what exec is for.

A running container is a ship at sea — isolated, self-contained, fully operational. From the shore you can only watch it. But with docker exec -it, you drop a ladder through the hatch and step inside — a live shell, inside the running environment, while it keeps sailing.

-i keeps the connection open (interactive stdin). -t gives you a proper terminal (a pseudo-TTY). Together they hand you the wheel room of a live vessel.

```bash
# Docker
docker exec -it <container_id> /bin/bash

# Kubernetes
kubectl exec -it <pod_name> -- /bin/bash

# Check logs inside
cat /app/logs/error.log

# Check running processes
ps aux
```

FAQ:

Q: What is the difference between docker exec and kubectl exec?
A: Same idea, different fleet. docker exec boards a standalone container. kubectl exec boards a container inside a pod in a Kubernetes cluster. Both drop you into a live shell on a moving ship.

Q: Does exec change the container permanently?
A: No. Any changes you make inside vanish when the container is removed. You are boarding the ship, not rebuilding it. For permanent changes, update the Dockerfile and rebuild the image.

Q: When would you NOT use exec in production?
A: When your containers are ephemeral and immutable by design. Best practice is to fix the Dockerfile, redeploy, and read external logs — not board a live ship mid-voyage. Exec is a debugging tool, not a deployment strategy.

Q: What if bash is not available inside the container?
A: Minimal images like Alpine do not ship bash. Use /bin/sh instead — a lighter shell that is almost always present.
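When you do not know in advance whether the image ships bash, the fallback from the FAQ can be automated. A minimal sketch — the container name `web` is hypothetical, and the same "prefer bash, fall back to sh" decision is shown as a runnable local function:

```shell
# Inside a container you would run (container name "web" is hypothetical):
#   docker exec -it web sh -c 'command -v bash >/dev/null && exec bash || exec sh'
# The same decision logic, as a local function you can test anywhere:
pick_shell() {
  if command -v bash >/dev/null 2>&1; then
    echo bash   # full-featured shell available
  else
    echo sh     # minimal images (e.g. Alpine) usually ship only sh
  fi
}
pick_shell
```

The `sh -c '… exec bash || exec sh'` trick works because every image with a shell at all has sh, and `exec` replaces the wrapper shell with whichever interactive shell was found.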
Next series: Kubernetes Architecture — the control plane, data plane, and why the API server is the heartbeat of your entire cluster. #DevOps #Docker #Containers #kubectl #DevOpsInterview #CloudEngineering #DockerDeepDive #Kubernetes
Docker exec vs kubectl exec: Inspecting Containers and Pods
More Relevant Posts
#Day51 of #90DaysOfDevOps — Kubernetes finally clicked 🔥 Pods aren’t just theory anymore… I built, explored, and truly understood how they run behind the scenes ⚙️

Here’s what I worked on:
- Deep dive into Kubernetes Pods
- Understanding Pod lifecycle & architecture
- Hands-on practice with creating and managing Pods
- Exploring how containers run inside Pods

My biggest takeaway: Pods are the core building block of Kubernetes. 51 days in, and the journey keeps getting more exciting.

Everything documented here - https://lnkd.in/gGG3XCpU

#DevOps #Kubernetes #90DaysOfDevOps #CloudComputing #TechJourney #Consistency #TrainWithShubham
I just released dispatchd v0.1.0! It started as a deep dive into Go/distributed systems, and has now evolved into something I’m ready to share. It’s a distributed task orchestration platform designed to handle coordination and failure recovery, and to provide observable system behavior at scale.

The current architecture consists of:
- gRPC control plane + bidirectional worker streams
- shared Postgres + Redis state for durable orchestration and low-latency coordination
- scheduler leadership, retries, dead-lettering, and a distributed assignment flow
- Kubernetes, Kustomize, GitOps w/ Argo CD
- GitHub Actions CI/CD with DevSecOps checks built into the pipeline
- Prometheus, Grafana, Jaeger, and published performance/reliability evidence (if you can't measure it, it didn't happen, right?)

I’m intentionally keeping it at v0.1.0. This is the baseline, not the finish line. I’m focusing on tackling the hard questions: state coordination across services, partial failure recovery, and establishing security boundaries that I can actually defend with evidence.

The next steps are already planned:
- enforced zero-trust security policies
- active/passive multi-region controls
- resilience and disaster recovery (DR) drills

I’ve formalized the repo with SemVer, a security policy, and even some contribution guidelines, so if you're a distributed systems nerd like me, come join the fun! Check it out, break it, and let me know what you think. I'm all about building, learning, and (respectful) criticism.

Repo: https://lnkd.in/dtsjhRXM
(And if you like the project, a ⭐️ is always appreciated! Hehe)

#Go #SoftwareEngineering #Kubernetes #DevSecOps #DistributedSystems #Backend
🚀 Just shipped my fully containerized ELK Stack monitoring project!

I wanted to move beyond theory and really understand how production observability pipelines work. So I built one from scratch using Docker.

The Setup:
A 4-node Elasticsearch cluster (Elasticsearch 8.10.2) with dedicated roles:
- 1 Master node (cluster management)
- 1 Data node (stores the indices)
- 1 Ingest node (handles data pipelines)
- 1 Coordinating node (manages client requests)
Plus Kibana for beautiful visualizations, and Metricbeat collecting real-time host metrics (CPU, memory, disk, network) and per-container Docker metrics every 10 seconds. Metrics are nicely separated into two daily indices: system-metrics-* and docker-metrics-*.

What I Learned:
This project taught me why we never put everything on a single Elasticsearch instance in real environments. Separating roles helped me see how Elasticsearch actually distributes workload efficiently — master for coordination, data for storage, ingest for processing, and coordinating for handling queries. It was eye-opening! I also got much better at Docker Compose, health checks, and ordering service startup so everything comes up reliably.

Challenges I Faced:
- Configuring multi-node roles correctly and making sure nodes could discover each other
- Setting proper memory limits and vm.max_map_count for Elasticsearch in Docker
- Getting Metricbeat to collect Docker container metrics smoothly
- Managing index templates and making sure data flowed into separate indices without issues

It took some debugging, but solving these made the learning 10x better.

The full code + detailed documentation is now open on GitHub. If you're into DevOps, learning monitoring, or just want to play with a real ELK setup, feel free to clone it and break/fix things yourself! Would love to hear your thoughts or any tips from your own observability journeys 👇

#DevOps #ELKStack #Elasticsearch #Docker #Monitoring #OpenSource #LearningByDoing
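The role separation described above can be declared per service in Compose. A minimal sketch showing two of the four roles — service names and settings are illustrative assumptions, not the author's actual file:

```yaml
# Hypothetical docker-compose.yml fragment: dedicated master and data nodes.
services:
  es-master:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.10.2
    environment:
      - node.name=es-master
      - node.roles=master                      # dedicated master: cluster management only
      - cluster.initial_master_nodes=es-master
      - discovery.seed_hosts=es-data
      - xpack.security.enabled=false           # demo only; enable security in real setups
  es-data:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.10.2
    environment:
      - node.name=es-data
      - node.roles=data                        # dedicated data node: stores the indices
      - cluster.initial_master_nodes=es-master
      - discovery.seed_hosts=es-master
      - xpack.security.enabled=false
```

The `node.roles` list is what gives each node a single job; an empty list (`node.roles=`) would produce a coordinating-only node.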
We hit 93% Kubernetes conformance. In Rust. From scratch.

Not a wrapper. Not a partial mock. A ground-up reimplementation of Kubernetes — API server, scheduler, controller manager, kubelet, kube-proxy — 216,000+ lines of Rust across 10 crates. 410 out of 441 official K8s conformance tests passing. 159 rounds of testing. 31 controllers. Real pods. Real services. Real networking.

This started as "what if Kubernetes was written in Rust?" and became an actual cluster you can kubectl apply to. The conformance suite doesn't care what language you wrote it in. It just checks: does your cluster behave like Kubernetes? 93% of the time, ours does.

Built with Claude Code as my engineering partner — analyzing K8s source, debugging watch protocol races at 2am, tracing client-go retry logic to find why kubectl explain couldn't find a CRD. AI didn't replace the architecture decisions, but it made a mass-scale project like this possible for a solo developer.

The remaining 7% is mostly macOS VM platform limitations (virtiofs permissions, missing kernel modules for NodePort routing) and controller timing. The code is correct — the environment is the constraint.

Open source: https://lnkd.in/gQJ_x9Dv

#rust #kubernetes #systems #engineering #opensource
My pipeline encountered a failure before it even began. The code was correct, and the YAML was configured properly, but I had overlooked something entirely different.

I developed a CI/CD quality gate for LevelUp Bank, which automatically blocks any pull request to the main branch if the README.md or .gitignore files are missing. Each merge generates a structured JSON audit log sent directly to AWS CloudWatch, organized into beta and prod log groups. Unit tests are executed first, ensuring that nothing is logged until the tool itself is verified.

However, when I triggered the beta workflow for the first time, it failed immediately with a single error line: the beta environment did not exist in the repository settings. It wasn't broken code or a misconfigured secret; it was simply a settings page I had never opened. After navigating to Settings, then Environments, I created the beta and prod environments and re-ran the workflow, which passed in seconds.

This experience taught me an important lesson: the best automation fails without the necessary environment in place. It's crucial to build the code and then verify everything the code requires to function correctly. These are two distinct checklists, and I had only completed one.

The full code and setup guide is available on GitHub; the link is in the first comment. What is your best "it was not even the code" moment? Share below.

#DevOps #GitHubActions #AWS #CloudWatch #PlatformEngineering #CICD #Python #LearningInPublic #CloudEngineering #SoftwareEngineering #LevelUpInTech #TechCommunity
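A quality gate like the one described can be sketched as a workflow job. This is a minimal illustration, not the author's actual pipeline — the file path, job name, and the `environment: beta` line (which is exactly what must exist under Settings → Environments) are my assumptions:

```yaml
# Hypothetical .github/workflows/quality-gate.yml fragment
name: quality-gate
on:
  pull_request:
    branches: [main]
jobs:
  required-files:
    runs-on: ubuntu-latest
    environment: beta   # fails to deploy-gate if this environment is not created in repo settings
    steps:
      - uses: actions/checkout@v4
      - name: Block the PR if required files are missing
        run: |
          for f in README.md .gitignore; do
            [ -f "$f" ] || { echo "::error::$f is missing"; exit 1; }
          done
```

Marking this job as a required status check on the main branch is what turns a failing step into an actual merge block.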
Hello Connections 👩🎓 Just published a new blog on Kubernetes where I tried to connect some important core concepts that are often confusing at first. In this blog, I’ve explained how Kubernetes moved away from Docker using CRI, how scheduling actually works behind the scenes, and concepts like taints, tolerations, node affinity, and static pods in a simple way. Tried to keep it clear and beginner-friendly. Link 🖇️ - https://lnkd.in/gxFkApnd . . . #Kubernetes #DevOps #Containers #CloudComputing
🚀 Day 24/100: Mastering Production-Grade Clusters with Kubeadm & Containerd

Yesterday was about local "simulators" like Minikube and Kind. Today, I leveled up to the real deal: bootstrapping a multi-node cluster from scratch using Kubeadm and diving into the modern Kubernetes runtime architecture.

What I Covered Today:

• Kubeadm & Kubectl:
➣ Kubeadm: The authoritative tool for setting up a "real" Kubernetes cluster. It handles the heavy lifting of initializing the Control Plane and joining worker nodes.
➣ Kubectl: My primary interface. I spent time refining my CLI skills to manage cluster resources more efficiently.

• The Runtime Revolution: Containerd & the Dockershim Removal: A huge milestone in K8s history! Kubernetes officially removed Dockershim.
➣ Why? Kubernetes doesn't need the full Docker Engine. It only needs a high-level runtime to manage containers.
➣ The Result: Kubernetes now talks directly to Containerd (via the CRI), making the cluster lighter, faster, and more secure.

• How Containerd Works in K8s: I mapped out the flow of how a container actually starts. When I run a pod, the kubelet sends a request via the Container Runtime Interface (CRI) to Containerd. Containerd then manages the image lifecycle and uses runc (the low-level runtime) to actually spawn the container process on the Linux kernel.

🛠️ Hands-on Lab: Task Manager App on a Kubeadm Cluster
I built a production-ready environment the "hard way" (Kubeadm) to host a Task Manager application.
Orchestration: Kubeadm (1 master + worker nodes)
Runtime: Containerd (configured with the systemd cgroup driver)
Networking: Calico CNI (for robust pod-to-pod communication)
Application: A multi-tier Task Manager app

🔗 GitHub Repo: https://lnkd.in/gSBq9Q2Q

#DevOps #Kubernetes #Kubeadm #Containerd #Docker #CloudNative #CNI #Calico #SoftwareEngineering #100DaysOfDevOps #Automation #TechLearning #InfrastructureAsCode #BackendEngineering
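The systemd cgroup driver setting mentioned in the lab lives in containerd's configuration. This is the standard fragment for containerd's v2 config format; it must match the kubelet's cgroup driver on kubeadm clusters:

```toml
# /etc/containerd/config.toml (fragment): use the systemd cgroup driver for runc,
# so containerd and the kubelet agree on cgroup management.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
```

After editing, restart containerd (`sudo systemctl restart containerd`) before running `kubeadm init`; a mismatch between the kubelet's and runtime's cgroup drivers is a classic cause of unstable nodes.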
⌨️ Day 11 of #100DaysOfDevOps — Docker Commands.

Knowing Docker is one thing. Knowing the exact commands to control it is another. Today I practiced every essential Docker command from installation to cleanup. Here is the full cheatsheet. 👇

🔷 What is a Docker Image?
1. Read-only template to create containers
2. Built from a Dockerfile · stored in a Registry
3. Blueprint → build once · use anywhere ✅

🔷 What is a Docker Container?
1. Running instance of a Docker image
2. Isolated · lightweight · starts in seconds
3. Stop the container → the image still exists ✅

🔷 Simple Analogy
Image → Blueprint / Recipe
Container → The actual building / Dish

🔷 Install Docker
sudo apt install docker.io -y
sudo systemctl enable docker
sudo usermod -aG docker $USER

🔷 Version and Status
docker --version → check version
systemctl status docker → service status
docker info → system info

🔷 Start · Stop · Restart the Docker Service
sudo systemctl start docker
sudo systemctl stop docker
sudo systemctl restart docker

🔷 Working with Images
docker pull nginx → pull from Hub
docker images → list all images
docker rmi nginx → remove image
docker image prune -a → remove unused

🔷 Run Containers
docker run nginx → foreground
docker run -d nginx → background
docker run -d -p 8080:80 nginx → port mapping
docker run -d --name my-nginx nginx → custom name
docker run -d -e ENV=prod nginx → set env var

🔷 List Containers
docker ps → running only
docker ps -a → all containers
docker ps -q → running IDs only
docker ps -aq → all container IDs

🔷 Stop · Start · Restart Containers
docker stop my-nginx → stop
docker start my-nginx → start
docker restart my-nginx → restart
docker stop $(docker ps -q) → stop ALL

🔷 Logs and Execute
docker logs my-nginx → view logs
docker logs -f my-nginx → live logs
docker stats → resource usage
docker exec -it my-nginx bash → enter container

🔷 Delete Containers and Images
docker rm my-nginx → remove container
docker rm -f my-nginx → force remove
docker container prune → remove all stopped
docker rmi nginx → remove image
docker image prune -a → remove all unused
docker system prune -a → remove everything

🔷 3 Mistakes Most Beginners Make
❌ docker rm on a running container → always docker stop first
❌ Never cleaning up → disk fills silently → run docker system prune
❌ Forgetting the -d flag → the container blocks your terminal

🔷 3 Commands I Use the Most
docker ps -a → first thing I check when troubleshooting
docker logs -f → fastest way to debug a running container
docker system prune -a → cleans everything instantly ✅

📄 Want my Docker cheatsheet as a PDF? DM me.
Which Docker command do you use the most? Drop it below 👇

#100DaysOfDevOps #Docker #Containers #DevOps #CICD #AWS #CloudEngineering #LTIMindtree #DockerCommands #Containerization
Spent days debugging a reactive microservice that kept going unresponsive in production.

Heap dumps? Pod gets OOMKilled before the dump finishes. Thread dumps? Useless — requests hop across schedulers. APM traces? Fragmented at every publishOn() boundary.

So we tried a different approach: mimic what dumps and profilers give you — thread identity, memory state, execution timing — but from inside the reactive chains themselves, as structured log output.

The method is a simple loop:
① Instrument — add diagnostic logs at component boundaries
② Execute — run with real production-scale data
③ Analyze — find where time spikes, threads are wrong, or logs go silent
④ Fix or zoom in — resolve if clear, add finer logs if not, repeat

We used AWS Kiro to accelerate each cycle — tracing call chains, adding instrumentation across files, analyzing log patterns, and applying fixes. What would have been days of manual investigation turned into a few short iterations.

Wrote up the full methodology — no project-specific code, just the approach. Link in comments.

#ReactiveProgramming #SpringWebFlux #Kubernetes #Performance #AWS #Kiro #Java #Microservices
🚀 Today I finally “connected the dots” on Kubernetes, Argo CD, and CNPG

I’ve been working with Kubernetes, Argo CD, Helm, and CNPG for a while… but today things clicked at a deeper level.

🧠 The biggest realization
At the end of the day: my database is just a container image… running inside a pod… on a node. That’s it. But the power comes from what’s built around it.

⚙️ The layers (finally clear)
Kubernetes → runs pods on nodes
Pods → wrap containers
Containers → run actual processes (like Postgres)
Argo CD → keeps everything in sync with Git
CNPG → turns a simple Postgres image into a real database system

💡 The moment it made sense
When I saw this flow clearly:
Git → Argo CD → Kubernetes → Pods → Postgres container
And CNPG sitting in the middle saying: “I’ll manage this database properly for you.” Replication, failover, secrets… all handled.

🔐 Then came the real issue: credentials
This was the subtle mistake:
- storing DB credentials in Helm values
- duplicating secrets across apps
- mixing app config with database ownership
The fix was simple, but important: let CNPG generate the credentials, and let the app consume them directly from Kubernetes Secrets. No more hardcoding. No more duplication.

⚠️ Another realization
Using one database cluster for multiple apps is easy… but you’re also grouping their failures together. So the real question becomes: which apps am I okay taking down together?

🏗️ What I’m taking forward
- Treat the database as platform infrastructure, not just another service
- Keep clear ownership boundaries
- Use one DB/user per app
- Never put credentials in Git
- Understand the layers — don’t just use them

🔥 Final thought
Kubernetes doesn’t make things more complex… it just makes the complexity visible.

💬 Curious: when you deploy a database on Kubernetes, do you really understand what’s running underneath?
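The "let CNPG generate the credentials" fix can be sketched in manifests. All names here are hypothetical; the key point is that CloudNativePG creates a `<cluster-name>-app` Secret itself, and the app references that Secret instead of Helm values:

```yaml
# Hypothetical CNPG cluster; the operator provisions the "app" database
# and generates an "orders-db-app" Secret with the credentials.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: orders-db
spec:
  instances: 3
  storage:
    size: 10Gi
---
# The app consumes the operator-generated Secret directly; nothing in Git.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api
spec:
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
        - name: api
          image: example/orders-api:1.0   # hypothetical image
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: orders-db-app     # created by CNPG, not by Helm
                  key: uri
```

One cluster, one app-owned Secret: rotating credentials becomes an operator concern, not a Git commit.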
#Kubernetes #ArgoCD #GitOps #CloudNative #DevOps #PlatformEngineering #CloudNativePG #PostgreSQL #Helm #CI_CD #InfrastructureAsCode #Containers #Microservices #SystemDesign #SoftwareEngineering #BackendDevelopment #SRE #TechLearning #DevOpsJourney #ScalableSystems