🚀 From “docker run” to Mastering Containers — Day 29 & 30 of My DevOps Journey

For a long time, Docker felt like magic.

Day 29: https://lnkd.in/dbz6jhcA
Day 30: https://lnkd.in/dFczzYdj

You type: docker run nginx
And suddenly… a web server is running.

But what actually happens behind the scenes? Over the last 2 days, I stopped just running containers — and started understanding how they truly work.

📦 Day 29 – Docker Fundamentals
✔ What containers really are
✔ Containers vs Virtual Machines (the real difference)
✔ Docker architecture (Client → Daemon → Image → Container → Registry)
✔ Ran Nginx in the browser
✔ Explored an Ubuntu container interactively
✔ Managed container lifecycle basics

The biggest realization? Containers don’t virtualize hardware. They virtualize the OS layer.
That’s why they’re lightweight. That’s why they’re fast. That’s why modern DevOps runs on them.

🧠 Day 30 – Images & Container Lifecycle
This is where things got serious.
✔ Pulled nginx, ubuntu, alpine
✔ Compared image sizes (Alpine ≈ 5MB 😳)
✔ Explored image layers using docker image history
✔ Understood layer caching & optimization
✔ Practiced the full container lifecycle: Create → Start → Pause → Stop → Restart → Kill → Remove
✔ Inspected containers for IP, ports, mounts
✔ Cleaned up Docker disk usage

Now I understand:
🔹 Images are layered, read-only templates
🔹 Containers are running instances
🔹 Layers make builds faster
🔹 Caching reduces CI/CD build time
🔹 Cleanup prevents disk bloat

💡 Why This Matters
Every modern system today uses:
• CI/CD pipelines
• Kubernetes
• Microservices
• Cloud-native deployments
And they all start with Docker. If you don’t understand images & lifecycle, you don’t truly understand modern deployment.

🔥 What Changed For Me
Before: “I can run Docker.”
Now: “I understand how Docker works internally.”
That shift is powerful.

Day 29 ✅ Day 30 ✅
DevOps consistency > motivation.
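The Day 30 lifecycle maps almost one-to-one onto CLI commands. A minimal walkthrough, assuming a local Docker daemon and the nginx image (the container name web is just an example, not the exact session from the post):

```shell
docker create --name web nginx     # Create: container exists but isn't running
docker start web                   # Start
docker inspect -f '{{.NetworkSettings.IPAddress}}' web   # container IP
docker port web                                          # port mappings
docker pause web                   # Pause: freeze every process in the container
docker unpause web
docker stop web                    # Stop: SIGTERM, then SIGKILL after a grace period
docker restart web
docker kill web                    # Kill: immediate SIGKILL
docker rm web                      # Remove

docker image history nginx         # the read-only layers behind the image
docker system df                   # what's eating disk
docker system prune                # clean up stopped containers, unused networks, dangling images
```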
On to Dockerfiles next 🚀 #DevOps #Docker #CloudComputing #Linux #Containers #LearningInPublic #100DaysOfDevOps #BuildInPublic #TechJourney #DevOpsKaJosh #TrainWithShubham
Ajay Guhade’s Post
More Relevant Posts
From Commands to Infrastructure: My First End-to-End Docker System

Introduction
Most tutorials stop at:
- Running a container
- Listing images
But real systems don’t stop there. So I pushed further.

What I Actually Did (End-to-End)

1. Setup & First Container
- Installed Docker
- Ran an Ubuntu container
👉 Entry into containerized environments

2. Observability & Debugging
- docker ps, docker images
- docker inspect
- docker logs
👉 Learned how to see inside systems

3. State Transformation
- Restarted containers
- Used docker commit
👉 Converted runtime → reusable image

4. Portability
- Exported an image using docker save
👉 The system became a portable artifact (.tar)

5. Deep System Visibility
- Used htop and saw:
  - dockerd
  - containerd
  - shim processes
👉 Containers = Linux processes + isolation

6. Networking (The Breakthrough)
- docker network create batch42
- docker run -d --name web1 --network batch42 nginx
- docker run -it --name client1 --network batch42 busybox sh
👉 This is where everything clicked. Now:
- Containers can talk to each other
- Systems are no longer isolated
- You’ve built a mini distributed system

7. Resource Management
- docker system df
- docker system prune
- docker image prune -a
👉 Managing the system lifecycle = real DevOps

The Real Mental Model
This is not a list of commands. This is a system:

🔁 Lifecycle:
Image
↓
Container
↓
Modified State
↓
New Image
↓
Portable Artifact
↓
Connected System (Network)
↓
Observed & Debugged
↓
Cleaned & Optimized

The Big Insight
Docker is built on 3 pillars:
1. State - Images, containers, commits
2. Communication - Networks, service interaction
3. Portability - Save, share, deploy anywhere

Final Thought
The moment you:
- Connect containers
- Inspect processes
- Export environments
…you stop learning Docker. You start understanding infrastructure.
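The networking breakthrough in step 6 can be re-run in three commands and then verified from inside the client container. A sketch assuming a local Docker daemon (the names batch42, web1, and client1 are taken from the post):

```shell
docker network create batch42                         # user-defined bridge → containers resolve each other by name
docker run -d  --name web1    --network batch42 nginx
docker run -it --name client1 --network batch42 busybox sh
# Inside the busybox shell, reach nginx by container name, not by IP:
#   wget -qO- http://web1 | head -n 4
```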
What I’ll Explore Next
- Docker Compose (multi-service systems)
- Volumes & persistence
- Deployment on cloud

If you're learning Docker:
👉 Don’t stop at docker run
👉 Build a system
That’s where real clarity begins.

#Docker #DevOps #Cloud #CloudDevopsHub #VikasRanawat
🚀 BUILDING IN PUBLIC | PART-2 | How I Built Code Quality Gates and Kubernetes into a CI/CD Pipeline — And What It Taught Me

A few months ago, the term "CI/CD pipeline" felt intimidating. Today, I can build one from scratch. Here's what I learned 👇

What is a CI/CD Pipeline?
It's the backbone of modern software delivery — automating the journey of code from a developer's laptop to a live production environment, without manual intervention.

The stages I learned to build:
🔹 Source — Code pushed to GitHub triggers everything. No push, no pipeline.
🔹 Build — The code gets compiled, dependencies installed, and a Docker image created.
🔹 Test — Automated tests run. If they fail, the pipeline stops. No broken code moves forward.
🔹 Deploy — The image gets pushed to a registry and deployed to the target environment — whether that's a cloud server or a Kubernetes cluster.

Tools I got hands-on with:
→ Git & GitHub for version control
→ Docker for containerization
→ Jenkins / GitHub Actions for automation
→ Kubernetes for orchestration
→ Linux as the foundation for everything

The biggest lesson? CI/CD isn't just a tool — it's a mindset. Ship small, ship fast, catch errors early. Every failed pipeline taught me more than a successful one ever did.

📌 This is Part 2 of my DevOps learning series. Part 3 is coming soon — Monitoring & Observability. I'll be covering Prometheus, Grafana, alerting, and how to actually know when your system is breaking before your users do.

Follow along if you're on the same journey 🙌 Drop a comment — are you also learning DevOps? Let's connect!

#DevOps #CICD #CloudComputing #Kubernetes #Docker #Linux #AWS #LearningInPublic #DevOpsEngineer #CloudEngineer
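The four stages can be sketched as a plain bash script, which is roughly what any CI runner executes underneath. This is a minimal sketch, not a real pipeline: the registry, the image name myapp, and the echoed docker/kubectl commands are placeholders so the flow is visible without live infrastructure.

```shell
#!/usr/bin/env bash
set -euo pipefail   # fail the whole pipeline on the first broken stage

# Placeholder registry/image; in CI, GIT_SHA would come from the commit that triggered the run
IMAGE="registry.example.com/myapp:${GIT_SHA:-dev}"

stage() { echo "=== [$1] ==="; }

stage "source"   # in CI this stage is the trigger itself (a push/webhook)
stage "build";   echo "docker build -t $IMAGE ."
stage "test";    echo "docker run --rm $IMAGE npm test"   # a failing test exits non-zero → set -e aborts
stage "deploy";  echo "docker push $IMAGE && kubectl set image deploy/myapp app=$IMAGE"
```

The key property is the fail-fast contract: because of `set -e`, a broken build or test stage stops the script before deploy ever runs.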
🚀 From 20-minute manual deploys to 2-minute automation: My Week 4 DevOps Journey

Four weeks ago, I was manually SSH'ing into servers, running Docker commands, and hoping nothing would break. Today, I just push code and watch it deploy automatically to production. This is the power of CI/CD, and I built it from scratch.

What I shipped: A complete CI/CD pipeline that automatically builds, tests, and deploys my portfolio site to AWS with every commit.

The stack:
- GitHub Actions (orchestration)
- Docker Buildx (optimized builds)
- AWS EC2 (production server)
- Bash + SSH (deployment automation)

The Real Impact:
⚡ Before: 15-20 minutes per deployment
⚡ After: 2 minutes, fully automated
⚡ Build time: 70% faster with caching
⚡ Manual steps: Zero

But Here's What I'm Most Proud Of:
I didn't just follow tutorials. I built everything manually first, hit every wall, and debugged every error. When variables didn't expand in SSH context? I learned about execution contexts and heredoc. When the cache didn't work? I dove deep into Docker layers and understood Buildx.

The Approach That Made the Difference:
I chose to write everything in bash instead of using pre-built GitHub Actions. Why make it harder on myself? Because when things break in production (and they will), I know EXACTLY what's happening and how to fix it. No black boxes. No magic. Just solid understanding.

Key Technical Wins:
✅ Docker Buildx with GitHub Actions cache (type=gha)
✅ SSH automation with heredoc for remote execution
✅ Multi-tag strategy for easy rollbacks
✅ Secrets management done right

Four weeks ago, I was nervous about SSH'ing into a server. Today, I have a pipeline that automates it all. The difference? Consistent daily learning and building in public.

Learning in Public: I'm documenting everything: the wins, the failures, the "why did I think that would work?" moments. Complete code and detailed notes are all on my GitHub. Want to follow along? I'm sharing every step of this journey.
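Two of those wins are easy to sketch in bash. This is a hypothetical illustration, not the author's actual scripts: the image name myapp and the server are placeholders. The multi-tag strategy derives a short immutable tag from the commit SHA so any past build can be redeployed; the heredoc keeps remote commands readable, and whether the delimiter is quoted controls where variables expand.

```shell
#!/usr/bin/env bash
set -euo pipefail

# --- Multi-tag strategy: one mutable tag, one immutable per-commit tag ---
GIT_SHA="3f9c2d1e8ab47c05d6f1e2a9b8c7d6e5f4a3b2c1"   # in CI: $(git rev-parse HEAD)
SHORT_SHA="${GIT_SHA:0:7}"
IMAGE="myapp"                                         # placeholder image name
TAGS=("$IMAGE:latest" "$IMAGE:$SHORT_SHA")            # rollback = redeploy $IMAGE:<old sha>
printf '%s\n' "${TAGS[@]}"

# --- Heredoc for remote execution (written out here, not actually run) ---
# An unquoted delimiter (EOF) expands $IMAGE/$SHORT_SHA locally, before SSH;
# quoting it ('EOF') would defer expansion to the remote shell instead.
cat <<EOF > deploy.sh
docker pull $IMAGE:$SHORT_SHA
docker stop web || true
docker run -d --name web -p 80:80 $IMAGE:$SHORT_SHA
EOF
# ssh user@server 'bash -s' < deploy.sh   # placeholder host
```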
#DevOps #CICD #Docker #AWS #GitHubActions #LearningInPublic #CloudComputing #TechCareer #100DaysOfCode #SoftwareEngineering
Day 14 – Running Your First Docker Container

Day 14 of my 30-Day DevOps learning journey. Today I focused on running my first Docker container and understanding how applications run inside containers.

What is a Docker Container?
A Docker container is a lightweight, portable environment that includes:
- Application code
- Runtime
- System libraries
- Dependencies
This ensures the application runs the same on any system.

Steps to Run Your First Container

1. Pull an Image from Docker Hub
Docker Hub is the public registry where Docker images are stored.
docker pull nginx

2. Run the Container
docker run -d -p 80:80 nginx
Explanation:
-d → run the container in the background
-p 80:80 → map the container port to the host port
Now the application runs inside a container.

3. Check Running Containers
docker ps
This command lists all running containers.

4. Stop a Container
docker stop <container-id>

Why Containers Matter in DevOps
• Faster deployments
• Consistent environments
• Easy scaling
• Works perfectly with CI/CD pipelines
Containers make it easier to move applications from development → testing → production without compatibility issues.

Tomorrow: Docker Images & Dockerfile – how containers are built.

Do follow me for more content on DevOps. Please check out my GitHub repo and share your suggestions; I have created some basic projects: https://lnkd.in/gXTYxXXm

A special thanks to Shubham Londhe & Abhishek Veeramalla for the guidance and the tutorials.

#DevOps #Docker #Containers #CICD #CloudComputing #AWS #Jenkins #Kubernetes #Linux #DevOpsEngineer #TechLearning
🚀 Day 3 of My DevOps Journey — Docker (Where Things Got Real)

After Linux and Git, today I stepped into Docker — and this is where everything started to feel like real DevOps.

🔹 What I Practiced:
👉 Running containers using docker run
👉 Understanding images vs containers
👉 Port mapping (-p 8080:80)
👉 Naming containers
👉 Viewing logs using docker logs
👉 Inspecting containers (docker inspect)

🔹 Mini Project:
I deployed an NGINX container locally:
✔ Pulled the image from Docker Hub
✔ Ran the container on a custom port
✔ Verified using browser & curl
✔ Checked logs and container status

🔹 Real Issue I Faced:
❌ “Port already allocated” error
This happened because another container was already using the same port.

🔹 How I Fixed It:
✔ Identified running containers (docker ps)
✔ Stopped the conflicting container
✔ Re-ran with a different port

💡 Key Learning:
“Docker is not just about running containers — it’s about managing environments.”

Now I understand:
➡️ Why containers are lightweight
➡️ How DevOps teams ensure consistency across environments
➡️ Why Docker is used in CI/CD pipelines

Next → Building custom images using Dockerfile 🔥

If you’re learning Docker or struggled with it, let’s connect 🤝

#DevOps #Docker #Containers #Cloud #AWS #LearningInPublic #BuildInPublic #CI_CD
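The port-conflict fix can even be automated. A small pure-bash sketch (the helper name and the port list are hypothetical; in real use the taken ports would be parsed from `docker ps --format '{{.Ports}}'`):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical: ports already claimed on the host; normally collected from docker ps
used_ports="80 8080"

# Return the first candidate port that is not in $used_ports
pick_free_port() {
  local p
  for p in "$@"; do
    case " $used_ports " in
      *" $p "*) ;;            # taken, try the next candidate
      *) echo "$p"; return 0 ;;
    esac
  done
  return 1                    # no free port among the candidates
}

PORT="$(pick_free_port 80 8080 8081 8082)"
echo "docker run -d -p $PORT:80 nginx"   # re-run nginx on the free port
```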
🚀 Learning Docker & Containerization I recently completed hands-on tutorials on Docker, exploring how containerization simplifies application development and deployment. Special thanks to Piyush Garg for the amazing tutorials that helped me understand Docker from basics to advanced concepts. 🔹 Topics I covered • Problem statement and need for containerization • Installation of Docker CLI & Docker Desktop • Understanding Images vs Containers • Running Ubuntu images inside containers • Working with multiple containers • Port mappings • Environment variables 🔹 Dockerizing a Node.js Application • Writing a Dockerfile • Understanding caching layers • Publishing images to Docker Hub 🔹 Docker Compose • Services • Port mapping • Environment variables 🔹 Advanced Concepts • Docker Networking (Bridge & Host) • Volume Mounting • Efficient caching in layers • Docker Multi-Stage Builds 🔹 Commands I practiced docker --version docker pull ubuntu docker run -it ubuntu docker ps docker images docker build -t myapp . docker compose up Excited to continue learning more about DevOps, containers, and cloud technologies. #Docker #DevOps #Containerization #NodeJS #CloudComputing #LearningJourney
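Tying together two of the advanced topics above (caching layers and multi-stage builds), here is a generic multi-stage Dockerfile sketch for a Node.js app; it is an illustration, not the tutorial's exact file. Copying package*.json before the rest of the source keeps the dependency-install layer cached until dependencies actually change:

```dockerfile
# Stage 1: install dependencies and build (this stage is discarded from the final image)
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./        # changes rarely → this layer and the npm ci below stay cached
RUN npm ci
COPY . .
RUN npm run build            # assumes a "build" script in package.json

# Stage 2: slim runtime image with only what's needed to run
FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/index.js"]
```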
Still naming files like Final_v2-lastRealUse.zip?

I did that for months. Thought I was being organized. Then I joined a team project. Three developers. One codebase. No Git.

Within a week:
→ We overwrote each other’s work
→ No idea which version actually worked
→ A “final” file that nobody trusted

That’s when Git stopped being a buzzword — and became a lifeline.

Here’s what changed everything:
Git doesn’t just save your files.
👉 It saves your entire history.
Every change. Every version. Every mistake — and how to undo it.

And GitHub takes it further: store it in the cloud → collaborate with your team → trigger CI/CD pipelines → deploy to production.

That’s not just version control.
👉 That’s the backbone of modern DevOps.

In this carousel, I’ve broken down:
• What Git & GitHub actually are (and why they’re different)
• Key differences between the two
• The 8 commands that cover 80% of daily Git usage
• How Git fits into real DevOps & Cloud workflows

If you’re learning DevOps, this is where it starts.
👉 No Git = No DevOps. Simple as that.

💬 What was the first Git command you learned — and did you actually understand it at the time?

#Git #GitHub #DevOps #VersionControl
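The carousel itself isn't reproduced here, but the daily core of Git fits in a handful of commands. A self-contained sketch (the file name, identity, and commit message are placeholders; the carousel's exact 8 commands may differ):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Throwaway repo so the walkthrough is self-contained
repo="$(mktemp -d)"; cd "$repo"
git init -q
git config user.email "demo@example.com"   # placeholder identity for the demo
git config user.name  "Demo"

echo "hello" > notes.txt
git status --short          # see what changed:  ?? notes.txt
git add notes.txt           # stage it
git commit -qm "Add notes"  # snapshot it; the whole history is saved, not just the file
git log --oneline           # one commit, with a hash you can always return to

# The rest of the daily set works against a remote:
#   git clone / git pull / git push / git branch / git merge
```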
Day 28 of #90DaysOfDevOps — Revision & Reflection

A few days ago, I dedicated time to review everything I learned in the first 27 days of my DevOps journey. Instead of learning new concepts, this day was about strengthening the fundamentals and identifying areas that need more practice.

Topics Revised:
🔹 DevOps & Cloud Basics — SDLC, DevOps culture, cloud fundamentals
🔹 Linux Fundamentals — filesystem, processes, systemd, troubleshooting
🔹 Users & Permissions — managing users, groups, and file permissions
🔹 LVM & Networking — storage management, DNS, IP, ports, connectivity checks
🔹 Shell Scripting — variables, loops, functions, automation scripts
🔹 Git & GitHub — branching, merging, rebasing, stash, reset, revert
🔹 GitHub CLI & Profile Branding

What I Focused On:
✔ Self-assessment of Linux, shell scripting, and Git skills
✔ Revisiting topics where I needed more clarity
✔ Answering quick-fire DevOps questions from memory
✔ Organizing and verifying all work from Day 1 – Day 27 in my GitHub repository

💡 Key Takeaway
In DevOps, strong fundamentals are more important than rushing into new tools. Taking time to revise and practice ensures long-term understanding.

🔗 GitHub Repository: https://lnkd.in/gn7iU4KF
📂 Revision notes: https://lnkd.in/gh6Rx-uf

The journey continues toward more automation, containers, and infrastructure tools ahead.

#DevOps #90DaysOfDevOps #Linux #Git #Automation #LearningInPublic #CloudComputing #OpenSource #DevOpsJourney #TrainwithShubham
Day 36 - Nginx Container Deployment #100DaysOfDevOps 🧑💻

Day 36 of my #100DaysOfDevOps focused on container deployment using Docker. In this task, I deployed an Nginx container on an application server using the lightweight nginx:alpine image and ensured the container was running successfully.

Working with minimal container images like Alpine is a common production best practice because it reduces the attack surface and keeps deployments efficient. Exercises like this reinforce the fundamentals of containerized application deployment and infrastructure consistency across environments.

I documented the full step-by-step solution and commands in my GitHub repository for reference and reproducibility.

Looking forward to building on this momentum and tackling Day 37 as the journey into DevOps goes deeper. 💪

Documentation: https://lnkd.in/dm5UehcZ

#DevOps #Docker #Containers #Nginx #CloudComputing #InfrastructureAsCode #PlatformEngineering #DevOpsJourney #TechLearning
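The deployment itself is only a few commands. A sketch assuming a local Docker daemon (the container name nginx-app is an assumption, not necessarily the task's required name):

```shell
docker pull nginx:alpine                       # tiny base image → small attack surface
docker run -d --name nginx-app -p 80:80 nginx:alpine
docker ps --filter name=nginx-app              # confirm STATUS shows "Up"
curl -s http://localhost | head -n 4           # the nginx welcome page proves it serves traffic
```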
Docker vs Podman (Understanding the Difference in Container Technology)

While working with containerization tools, I explored the architectural and operational differences between Docker and Podman. Both are powerful tools for running containers, but they follow different design philosophies and security approaches.

🔍 Key Takeaways:

• Architecture
Docker uses a daemon-based architecture (dockerd) to manage containers, while Podman is daemonless and follows a fork-exec model.

• Security
Docker has traditionally required root privileges for its daemon (rootless mode exists but is not the default), whereas Podman supports rootless containers by default, improving system security.

• Container Management
Docker focuses on individual containers, while Podman introduces pods, a concept aligned with Kubernetes.

• Image Building
Docker builds images natively from Dockerfiles, while Podman builds via Buildah, which is integrated under the hood and also usable as a standalone tool.

• Orchestration
Docker supports Docker Swarm, while Podman integrates better with Kubernetes workflows.

💡 Conclusion:
Docker remains widely adopted and beginner-friendly, while Podman offers enhanced security and Kubernetes-native capabilities. Learning both tools provides a deeper understanding of modern cloud-native and DevOps ecosystems.

#Docker #Podman #Containers #DevOps #Kubernetes #CloudComputing #Linux #OpenSource #Containerization #TechLearning
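Because Podman deliberately mirrors the Docker CLI, trying the comparison hands-on is mostly mechanical. A sketch assuming Podman is installed (web1 and mypod are placeholder names):

```shell
# Same UX as docker, but no daemon and no root required:
podman run -d --name web1 -p 8080:80 nginx:alpine
podman ps

# Kubernetes alignment: group containers into a pod, or export one as a K8s manifest
podman pod create --name mypod -p 8081:80
podman run -d --pod mypod nginx:alpine       # ports are published on the pod, not the container
podman generate kube web1 > web1.yaml        # a Pod manifest you can `kubectl apply`

# Many teams bridge the two with:  alias docker=podman
```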