Day 12/30 – Docker Learning Series
Docker Exec and Interactive Containers

Today I explored how to interact with running containers, which is an essential skill for debugging and managing applications in Docker.

Running a container is not always enough. In real-world scenarios, we often need to go inside a container to inspect files, check processes, or troubleshoot issues.

---

What is docker exec?

The docker exec command runs a command inside an already-running container.

Basic syntax:

docker exec <container_id> <command>

---

Open an Interactive Terminal Inside a Container

docker exec -it <container_id> /bin/bash

Explanation:
-i → Interactive mode (keeps STDIN open)
-t → Allocates a pseudo-terminal
/bin/bash → Opens a shell inside the container

If bash is not available (for example, in Alpine-based images), use:

docker exec -it <container_id> /bin/sh

---

Example

Run an Nginx container:

docker run -d --name mynginx nginx

Enter the container:

docker exec -it mynginx /bin/bash

Now you are inside the container and can run Linux commands.

---

Run One-Time Commands Inside a Container

docker exec mynginx ls /usr/share/nginx/html

This runs a single command without opening a full terminal.

---

What are Interactive Containers?

Interactive containers allow you to work directly in the container's shell.

Example:

docker run -it ubuntu /bin/bash

This starts a container and immediately opens a terminal.

---

Exit from a Container

Type:

exit

This closes the shell session. Note the difference: exiting a shell started with docker exec leaves the container running, but exiting a shell that is the container's main process (as with docker run -it ubuntu /bin/bash) stops the container.

---

Key Takeaways

• docker exec gives you access to running containers
• Useful for debugging and inspecting applications
• Interactive mode helps simulate real server environments
• An essential skill for troubleshooting in DevOps

Being able to enter and inspect containers is critical when working with production systems.

---

Next: Dockerfile Introduction and Writing Your First Dockerfile

#Docker #DevOps #Containerization #CloudComputing #CICD #Infrastructure #SRE #LearningInPublic #TechLearning #NetworkToDevOps
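Putting these commands together, a typical debugging session might look like this. This is a sketch that assumes Docker is installed and the daemon is running; the paths shown are the standard nginx image defaults:

```shell
# Start a disposable nginx container to practice on
docker run -d --name mynginx nginx

# One-off checks, no shell needed
docker exec mynginx nginx -v                    # nginx version
docker exec mynginx cat /etc/nginx/nginx.conf   # inspect the config
docker exec mynginx ls /usr/share/nginx/html    # files being served

# Full interactive shell for deeper troubleshooting
docker exec -it mynginx /bin/bash

# Clean up when done
docker stop mynginx && docker rm mynginx
```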
More Relevant Posts
Docker — What I've learned so far:

Most beginners confuse containers with images. Here's the simplest way to understand it:

→ Image = Blueprint
→ Container = A running instance of that blueprint

Once this clicked, everything else started making sense:

1. Containers
You don't install apps directly on your machine anymore. You run them inside isolated containers. Clean. Portable. Consistent.

2. Images ↔ Containers
You can create an image from a container (docker commit). You can spin up a container from an image (docker run). This two-way flow is what makes Docker powerful.

3. Docker Hub
Think of it as GitHub, but for Docker images. You push your custom images. You pull others' images. One command and your environment is ready anywhere.

4. Repositories
Every image lives inside a repository. Versioning, tagging, organizing — all handled here.

Currently also learning Bash scripting alongside Docker, because automation without shell scripting is incomplete.

Docker handles the "what to run." Bash handles the "how to automate it." Together, they're a solid foundation for anyone stepping into DevOps.

Still learning. Still building.

#Docker #Bash #DevOps #Linux #Containerization #LearningInPublic
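The two-way flow in point 2 can be sketched in a few commands, assuming Docker is installed (mybox and mybox-with-curl are made-up names for this illustration):

```shell
# Image → container: start a container from the ubuntu image
docker run -it --name mybox ubuntu /bin/bash
# (inside the container) install something, then exit:
#   apt-get update && apt-get install -y curl
#   exit

# Container → image: snapshot the modified container as a new image
docker commit mybox mybox-with-curl

# The new image can now start fresh containers that already have curl
docker run --rm mybox-with-curl curl --version
```

Note that docker commit is great for experiments, but for anything repeatable a Dockerfile is the preferred way to build images.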
🚀 Day 7 of My 14 Days Docker Journey | Real DevOps Project: Log Monitoring System 🔥

After learning Docker fundamentals (images, Dockerfiles, volumes, networking), I built my first real DevOps-style project 💪

💡 Project: Log Monitoring System (Docker)

In real-world systems, applications generate logs continuously. So I built a mini system where:
👉 One container generates logs
👉 Another container monitors those logs in real time

🧩 Architecture
App Container → Volume → Viewer Container
✔ Shared storage using Docker volumes
✔ Real-time log streaming using tail -f
✔ Multi-container communication

🛠️ What I Used
✔ Dockerfile (custom images)
✔ Docker volumes (data persistence)
✔ Docker networking (container communication)
✔ Linux scripting

🔥 Key Learning
💥 Containers are temporary, but data can persist using volumes
💥 Real-world systems separate log generation from log monitoring

⚡ Challenges I Faced
❌ Container execution error (exec ./app.sh)
❌ File format issues (Linux vs Windows)
✔ Debugged using docker logs, docker exec, and container inspection

👉 This was a huge learning moment 🔥

🎯 Outcome
✔ Built a working multi-container system
✔ Logs generated and streamed in real time
✔ Stronger understanding of Docker internals

GitHub Repo Link: https://lnkd.in/gXp7sPR6

💬 If you're learning DevOps, let's connect & grow together!

#Docker #DevOps #LearningInPublic #CloudComputing #AWS #Linux #Containers #BuildInPublic #TechJourney
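The generator/viewer split can be simulated on a plain machine before containerizing it. Here is a minimal sketch using a shared file in the current directory in place of the Docker volume (the file name app.log and the log messages are made-up examples, not from the repo):

```shell
#!/bin/bash
# "App container" side: append timestamped log lines to a shared file.
LOG=app.log
: > "$LOG"                                        # start with an empty log

for i in 1 2 3; do
  echo "$(date '+%H:%M:%S') INFO request $i handled" >> "$LOG"
done

# "Viewer container" side: read what the generator wrote.
# (A real viewer would stream continuously with: tail -f "$LOG")
tail -n 3 "$LOG"
```

In the containerized version, both scripts mount the same named volume, so the generator's writes are immediately visible to the viewer.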
Would you like to know when you'd get rich? Here's a little bash script (thanks NetworkChuck Academy) that might help.

Create an executable file, call it getrichquick.sh, and paste the following content inside:

#!/bin/bash
#
echo "What is your name?"
read name
sleep 1
echo "How old are you?"
read age
sleep 1
echo "Hello, $name, you are $age years old."
rich_age=$((($RANDOM % 11) + $age))
echo "I'm looking into your future..."
echo "Calculating..."
echo "......."
sleep 1
echo "**....."
sleep 1
echo "****..."
sleep 1
echo "******."
echo "Hello, $name, you will be $rich_age when you become rich!"

Save getrichquick.sh and run it. It's a fun little exercise. I tried it on my wife and her expression was priceless.

The exercise was a nice introduction to the $RANDOM variable in bash. $RANDOM generates random integers from 0 to 32767. Combined with the modulo operator (%), this is useful in many DevOps tasks.

$(($RANDOM % 10)) divides a random number from 0-32767 by 10 and keeps the remainder. The result is a number from 0 to 9. If you are interested in integers from 1 to 10, the operation becomes:

$(($RANDOM % 10 + 1))

DevOps use cases

Random server selection for load balancing. Here's an example randomly choosing a server among three choices:

servers=("server1" "server2" "server3")
index=$(($RANDOM % 3))
echo "Deploying to ${servers[$index]}"

Result...

Deploying to server2

This pattern is used for:
- Load balancing
- Blue-green deployments
- Canary deployments

#DevOps #Linux #Bash #ShellScripting #Automation #CloudEngineering #DevOpsLearning #TechLearning #InfrastructureAsCode #PlatformEngineering #DeveloperTips #CodingTips
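The modulo trick generalizes to any range. Here is a small helper (rand_range is a hypothetical name, not a built-in) that prints a random integer in an inclusive [LOW, HIGH] range; strictly speaking, % introduces a tiny bias unless the span divides 32768, but that rarely matters for scripts like these:

```shell
#!/bin/bash
# rand_range LOW HIGH: print a random integer in [LOW, HIGH] inclusive,
# built from $RANDOM (0..32767) and the modulo operator.
rand_range() {
  local low=$1 high=$2
  local span=$((high - low + 1))   # number of possible values
  echo $((RANDOM % span + low))    # shift 0..span-1 up to start at low
}

echo "d6 roll:    $(rand_range 1 6)"
echo "percentage: $(rand_range 0 100)"
```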
I used to think docker run nginx is just one simple command.

But then I asked myself:
👉 What actually happens after I hit Enter?

And honestly… that question changed how I understand systems. Because behind that one command — there's an entire workflow happening in seconds 👇

So I built something to visualize it.

💡 An interactive simulation that shows:
→ How the Docker CLI talks to the daemon
→ How images are pulled from the registry
→ How containers are actually created
→ What really happens inside the Linux kernel

And I didn't stop there — I documented the complete flow step by step.

📌 What you'll understand after this:
✔ Real Docker architecture (not just theory)
✔ The roles of containerd & runc
✔ How namespaces & cgroups actually work
✔ What's really running inside a container

💻 Interactive simulation link 👉 https://lnkd.in/dT72PNgC
📄 Full PDF guide links in comments 👇

This is not just learning Docker. This is understanding how systems actually work. If you're preparing for DevOps or upskilling, this perspective makes a huge difference.

Would love your feedback 🙌

#Docker #DevOps #CloudComputing #Kubernetes #SoftwareEngineering #LearningInPublic #SystemDesign #LearnwithHarinesh
🐳 Docker Swarm in Action — From Zero to a Running Cluster on My Local Machine

Container orchestration is one of those topics that is much easier to understand when you actually build it yourself rather than just reading about it. So I did exactly that — I set up a full Docker Swarm cluster locally on my MacBook and documented every step with real terminal outputs.

📌 What I built and tested:
✅ A 3-node cluster (1 manager + 2 workers) simulated using Docker-in-Docker
✅ A custom bridge network so containers communicate by name, not by IP
✅ A replicated nginx service distributed across all 3 nodes
✅ Scaling from 3 → 6 replicas with a single command
✅ Killed a worker node and watched Swarm self-heal automatically
✅ Force-rebalanced the cluster after the node recovered
✅ Cleaned up all services, containers, and networks completely

📌 Key things I learned:
→ Docker Swarm is built into Docker — zero extra installation
→ Custom networks give you DNS-based discovery between containers
→ Self-healing is fully automatic; rebalancing after recovery needs a manual trigger
→ Always clean up your environment after practice — remove services before stopping nodes
→ You can practice a production-grade cluster setup entirely on a single laptop

I have compiled everything — concepts, architecture diagrams, all commands, and real terminal output screenshots — into a structured PDF guide attached to this post. Swipe through it if you find it useful. 👆

I hope this helps anyone learning Docker, DevOps, or container orchestration. Feel free to save this post for reference and share it with someone who might find it useful. 🙌

#Docker #DockerSwarm #DevOps #Containers #CloudNative #KnowledgeSharing #LearningInPublic #SoftwareEngineering #SRE #Linux
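The exercises above boil down to a handful of commands. A sketch assuming a working Docker installation (the service name web is my choice; the replica counts follow the post):

```shell
# On the manager node: turn this Docker engine into a Swarm manager
docker swarm init

# Deploy a replicated nginx service across the cluster
docker service create --name web --replicas 3 -p 8080:80 nginx

# Scale from 3 → 6 replicas with one command
docker service scale web=6

# See which node each task landed on (and watch self-healing)
docker service ps web

# Force a redeploy to rebalance tasks after a recovered node rejoins
docker service update --force web

# Clean up: remove the service before tearing down nodes
docker service rm web
```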
🐳 Docker Basics Made Simple: Named Volume vs Anonymous Volume

Understanding Docker storage is a must for anyone in DevOps 🚀 Here's a quick breakdown 👇

🔹 Named Volume
✔ Created with a specific name
✔ Easy to manage and reuse
✔ Ideal for production environments
✔ Example: docker run -d -v mydata:/app ubuntu

🔹 Anonymous Volume
✔ No name (Docker auto-generates one)
✔ Hard to track and reuse
✔ Mostly used for temporary data
✔ Example: docker run -d -v /app ubuntu

⚖️ Key Difference
👉 Named volumes are persistent and reusable
👉 Anonymous volumes are temporary and harder to manage

⚠️ Interview Tip
Anonymous volumes are NOT automatically deleted by a plain docker rm (only docker rm -v, or running with the --rm flag, removes them), so they can quietly consume disk space if not cleaned up!

🧹 Cleanup command: docker volume prune

💡 Pro Tip
Use named volumes in production and anonymous volumes for quick testing.

#Docker #DevOps #CloudComputing #SRE #Containers #Learning #TechTips
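Side by side, the difference is easy to see on the command line. A sketch assuming Docker is available (mydata, app1, and app2 are made-up names):

```shell
# Named volume: explicit, reusable by name across containers
docker volume create mydata
docker run -d --name app1 -v mydata:/app ubuntu sleep infinity

# Anonymous volume: Docker generates a long random identifier for it
docker run -d --name app2 -v /app ubuntu sleep infinity

# Compare: the named volume stands out, the anonymous one does not
docker volume ls

# Remove the containers, then reclaim space from unused volumes
docker rm -f app1 app2
docker volume prune
```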
Day 6 of #DevOpsJourneyToHired

🐳 Today's Focus: Docker Fundamentals + GitOps Project Roadmap

🐳 Docker Deep Dive: Started with container basics - the backbone of modern DevOps
• What containers are and why they matter
• Docker architecture: images, containers, registries
• Writing Dockerfiles: FROM, RUN, COPY, CMD, EXPOSE
• Container lifecycle: build, run, stop, remove
• Docker networking and volumes basics

💡 Why Docker first? Before building a GitOps platform, I need to master containers. Can't orchestrate what you don't understand. ArgoCD deploys containerized apps, so Docker knowledge is non-negotiable.

📋 GitOps IDP Project Roadmap Created:

**Phase 1: Foundation (Weeks 1-2)**
→ Docker mastery + basic Kubernetes
→ Set up a local K8s cluster (Minikube/Kind)
→ Create sample microservices

**Phase 2: GitOps Core (Weeks 3-4)**
→ ArgoCD installation and configuration
→ Git repository structure for IaC
→ Automated sync and deployment workflows

**Phase 3: Developer Portal (Weeks 5-6)**
→ Backstage setup and customization
→ Service catalog with templates
→ Documentation integration

**Phase 4: Enterprise Features (Weeks 7-8)**
→ Multi-environment support (dev/staging/prod)
→ RBAC and security policies
→ Monitoring and observability dashboard

🔄 Revision Work: Reviewed Days 1-5 concepts:
• Linux fundamentals ✓
• Networking basics ✓
• AWS services ✓
• Shell scripting concepts ✓

📊 Progress Update:
Learning streak: 6 days ✅
Docker exercises completed: 5
Project roadmap: defined and documented
Applications sent: 17 total

🎯 Tomorrow: Hands-on Docker practice - building and deploying containers

What's your Docker learning journey been like?

#DevOps #Docker #Containers #GitOps #ProjectPlanning #LearningInPublic #ArgoCD
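A minimal Dockerfile using the five instructions listed above, sketched for a hypothetical Python service (python:3.12-slim, requirements.txt, and app.py are placeholder choices, and WORKDIR is one extra instruction worth knowing):

```dockerfile
# Base image (FROM)
FROM python:3.12-slim

WORKDIR /app

# Copy and install dependencies first so this layer caches between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code (COPY)
COPY . .

# Document the port the app listens on (EXPOSE)
EXPOSE 8000

# Default command when a container starts from this image (CMD)
CMD ["python", "app.py"]
```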
Episode 10 of my journey to becoming a DevOps Engineer 🚀

In this episode, I'm diving into Docker and containerization.

Before containerization, we relied heavily on virtual machines (VMs) to run multiple applications or services on a single server or PC. However, each VM requires its own operating system, which makes them heavy, slower to boot, and resource-intensive. To solve these challenges, containerization emerged:

1. In 2006, work on cgroups began (merged into the Linux kernel in 2008)
2. In 2008, LXC (Linux Containers) came along
3. In 2013, Docker was released — and it quickly became the most popular containerization platform

Containers are lightweight because they share the host OS kernel. This means:
1. Faster startup times ⚡
2. Better resource efficiency 💻
3. Reduced costs (time, infrastructure, and maintenance) 💰

🔧 Docker Runtime
The runtime responsible for creating and managing containers is called containerd. The core server-side engine of Docker is known as dockerd (the Docker daemon).

📦 Key Docker Components
1. Dockerfile – A script used to build Docker images
2. Image – A blueprint or snapshot of a container
3. Container – A running instance of an image
4. Volume – Persistent storage for containers
5. Network – Enables communication between containers

Commands for installing Docker (Ubuntu/Debian):
sudo apt update
sudo apt install docker.io
sudo usermod -aG docker $USER
sudo reboot

To download an image:
docker pull <image_name>:latest

To run a container:
docker run <image_name>:latest

To execute something inside a running container:
docker exec -it <container_id> <command>

#AWS #Python #DevOps #Debugging #Learning #Programming #PDB #VSCode #CloudEngineering #CICD #Linux #GitHub #Git #bongoDev #Networking #InfrastructureAsCode #DevOpsJourney #CloudComputing #LearningInPublic
🚀 Just finished the Docker course on Boot.dev! 🚀

I'm excited to share that I've learned the fundamentals of Docker — a key technology in modern DevOps and CI/CD pipelines.

Docker makes it simple and fast to deploy new versions of code by packaging applications and their dependencies into preconfigured environments. This not only speeds up deployment, but also reduces overhead and eliminates the "it works on my machine" problem. Docker is a core part of the CI/CD (Continuous Integration/Continuous Deployment) process, enabling teams to deliver software quickly and reliably.

Here's a high-level overview of a typical CI/CD deployment process:

The Deployment Process:
1. The developer (you) writes some new code
2. The developer commits the code to Git
3. The developer pushes a new branch to GitHub
4. The developer opens a pull request to the main branch
5. A teammate reviews the PR and approves it (if it looks good)
6. The developer merges the pull request
7. Upon merging, an automated script, perhaps a GitHub Action, is started
8. The script builds the code (if it's a compiled language)
9. The script builds a new Docker image with the latest program
10. The script pushes the new image to Docker Hub
11. The server that runs the containers, perhaps a Kubernetes cluster, is told there is a new version
12. The k8s cluster pulls down the latest image
13. The k8s cluster shuts down old containers as it spins up new containers of the latest image

This process ensures that new features and fixes can be delivered to users quickly, safely, and consistently.

Image credit: Boot.dev Docker course

#docker #cicd #devops #softwaredevelopment #bootdev #learning
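Steps 7-10 of the process above map almost directly onto a CI workflow file. Here is a hedged sketch as a GitHub Actions workflow; the image name myuser/myapp and the two secret names are placeholders, not a real repository's configuration:

```yaml
name: build-and-push
on:
  push:
    branches: [main]   # runs after the PR is merged (step 7)
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to Docker Hub
        run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USER }}" --password-stdin
      - name: Build the Docker image (step 9)
        run: docker build -t myuser/myapp:${{ github.sha }} .
      - name: Push the image to Docker Hub (step 10)
        run: docker push myuser/myapp:${{ github.sha }}
```

Tagging with the commit SHA (rather than only :latest) makes it unambiguous which build the Kubernetes cluster pulls in steps 11-13.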
📢 DevOps - Step By Step Learning: Docker Containerization & App Build Tools 📢

Do you want to learn DevOps fundamentals from a software professional's point of view? I am happy to share my writings in "DevOps - Step By Step Learning." In this post, I'd like to share Docker containerization and build tools in a nutshell.

Please check out the following blogs and feel free to share your feedback in the comments:
- Part 22 (Introduction of Containerization Tool Docker): https://lnkd.in/gu-z44bZ
- Part 23 (Docker Hands On): https://lnkd.in/g7pMpsS8
- Part 24 (Docker Compose & Docker Swarm Hands On): https://lnkd.in/gisKV2eQ
- Part 25 (Docker Networking With Help Of Linux Namespace): https://lnkd.in/gBv4qfFR
- Part 26 (Build Tools and Its Necessity): https://lnkd.in/gq-US5rn

Feel free to share with anyone who you think can benefit from it.

#learning #sharingknowledge #medium #blog #programming #software #engineering #devOps #docker #container #containerization