🚀 From 1.5 GB → 50 MB Docker Image (~97% Reduction) 🐳

I recently reduced my Docker image size from 1.5 GB to just 50 MB. And honestly? This wasn't about advanced tricks… it was about doing the basics consistently.

⚠️ Why this matters — oversized images mean:
❌ Slower deployments
❌ Higher storage costs
❌ Bigger attack surface

👉 Lean containers aren't optional in DevOps — they're a discipline.

🔧 7 Practices I Follow in Every Build:
1️⃣ Use minimal base images — Alpine or slim variants cut hundreds of MB instantly.
2️⃣ Multi-stage builds are a must-have — build tools stay in one stage, the final image stays clean.
3️⃣ Install only what's needed — every extra package adds unnecessary risk and size.
4️⃣ Clean cache in the SAME layer — otherwise Docker still keeps the junk.
5️⃣ Chain RUN commands — fewer layers, smaller images.
6️⃣ Use a .dockerignore file — keep out node_modules, .git, logs, and env files.
7️⃣ Never run as root — a simple step with a big security win.

#Docker #DevOps #CloudEngineering #AWS #Containers #Linux #DevOpsJourney #90DaysOfDevOps
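Most of the practices above fit in a single Dockerfile. A minimal sketch, assuming a hypothetical Node.js app (the package names, paths, and dist/server.js entry point are placeholders, not from the original post):

```dockerfile
# Stage 1: build with the full toolchain — none of this ships to production
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: lean runtime image — copy only the final artifacts
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
# Never run as root: dedicated user with minimal privileges
RUN addgroup -S app && adduser -S app -G app
USER app
CMD ["node", "dist/server.js"]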
Reduce Docker Image Size by 95% with 7 Best Practices
🗓️ Day 39/100 — 100 Days of AWS & DevOps Challenge

Today's task: a developer made changes inside a running container and wants to preserve that work as a new image.

$ sudo docker commit ubuntu_latest beta:xfusion

docker commit takes a snapshot of the container's current filesystem state — everything installed, created, or modified inside it — and captures it as a new image layer.

The most important distinction to understand: docker commit is NOT the production way to create images. It's the pragmatic way. Here's why it matters to know both.

The right way for production: a Dockerfile. Every instruction is a layer, every layer is documented, and the whole thing lives in version control. Anyone can rebuild the image identically at any time, and docker history my-image shows exactly how it was built.

The right way for this scenario: docker commit. A developer has been working inside a container for hours — installed tools, configured things, made changes. They need a snapshot before something changes or before the container is removed. Writing a Dockerfile retroactively from memory isn't realistic. Commit captures exactly what exists right now.

What docker commit does NOT capture: mounted volumes. Any data in volume-mounted directories lives on the host, not in the container's union filesystem, and is excluded from the commit. This catches people off guard when they commit a container running a database — the data files are in a volume, not in the image.

Full breakdown + Q&A on GitHub 👇
https://lnkd.in/gPXMuD_X

#DevOps #Docker #Containers #Linux #100DaysOfDevOps #KodeKloud #LearningInPublic #CloudEngineering #Containerization #Kubernetes #CICD
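The snapshot workflow above can be sketched as a few commands. The -a and -m metadata values are illustrative, not from the task:

```shell
# Snapshot the running container's filesystem as a new image
sudo docker commit ubuntu_latest beta:xfusion

# Optionally record author/message metadata, much like a VCS commit
sudo docker commit -a "dev@example.com" -m "snapshot of manual changes" \
    ubuntu_latest beta:xfusion

# Verify the new image exists — note its committed layer in the history
sudo docker images beta
sudo docker history beta:xfusion
```

Note that the committed layer in docker history will show up as an opaque blob — unlike a Dockerfile build, there's no record of the individual commands that produced it.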
To view or add a comment, sign in
-
I reduced a Docker image from 1.5 GB to 50 MB (~97% smaller). Here's how:

Bloated images slow down deployments, eat storage, and create security risks. Keeping containers lean is one of the most practical skills in DevOps.

7 practices I follow:
1. Use small base images — Alpine or slim variants instead of full OS images. Immediately cuts hundreds of MBs.
2. Multi-stage builds — build in one stage, copy only the final artifact. Dev tools never make it into production.
3. Install only what you need — every extra package adds size and attack surface. Be strict in production.
4. Clean cache after installs — remove cache in the same RUN command so the layer stays lean.
5. Reduce Docker layers — chain commands with && so each step doesn't create a new layer.
6. Use .dockerignore — keeps node_modules, .git, logs, and local configs out of your image context.
7. Don't run as root — create a dedicated user. Minimal privileges = better security posture.

These are not advanced tricks — they're fundamentals. But most beginners skip them.

I'm actively applying these while building real Docker and DevOps projects. Every image I ship, I ask: is this as lean as it can be?

Which of these do you already use? Drop it in the comments 👇

#Docker #DevOps #Linux #Containers #CloudEngineering #AWS #DevOpsJourney #90DaysOfDevOps
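Practice 6 is the cheapest win of the seven. A typical .dockerignore sketch — the exact entries depend on your project, so treat these as illustrative defaults:

```
# .dockerignore — these never enter the build context at all,
# so they can't bloat the image or leak into layers
node_modules
.git
*.log
.env
Dockerfile
docker-compose.yml
```

Excluding the Dockerfile and compose file is optional but common: the daemon doesn't need them inside the context, and a smaller context also speeds up every build.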
This visual does a great job of communicating how to reduce Docker image size in a simple and engaging way. The before/after comparison (1.5 GB → 50 MB) is especially effective and immediately highlights the impact of optimization. The design is clean and modern, and the use of illustrations makes a technical topic more approachable. The key techniques listed (Alpine base, multi-stage builds, .dockerignore, avoiding the root user, and layer caching) are all relevant best practices, which adds real value.
👉 Using docker inspect only for containers? Explore what else you can inspect.

✨ Most engineers use docker ps… but when things break, that's not enough. 👉 That's where docker inspect becomes your superpower.

🔍 What is docker inspect?
⚡ docker inspect gives you low-level (deep internal) details of Docker objects in JSON format — ✨ not just containers 👇

⭐ You can inspect:
1. Containers
2. Images
3. Networks
4. Volumes
5. Services (Swarm)
6. Nodes

⚙️ Real commands you should know:
a) docker inspect <container_id>
b) docker inspect <image_id>
c) docker inspect <network_name>
d) docker inspect <volume_name>

💡 What you can actually see:
✔ Container IP address
✔ Environment variables
✔ Mounted volumes
✔ Network configuration
✔ Entrypoint & command
✔ Labels & metadata
✔ Container state (running, exited, etc.)

🔥 Pro tips:
✔ Use --format to extract specific values: docker inspect -f '{{.State.Status}}' my_container
✔ Pipe to jq for readability: docker inspect my_container | jq
✔ It's the best command for debugging & troubleshooting.

⚠️ Important insight:
👉 docker inspect shows raw internal data straight from the Docker Engine, not simplified output.
👉 The output can be large and complex, so filter wisely.

🎯 When to use it:
✔ Debug failing containers
✔ Check networking issues
✔ Verify volume mounts
✔ Investigate environment configs

🔥 Pro insight: if you're not using docker inspect, you're debugging blindly.

#Docker #DevOps #Kubernetes #CloudComputing #SRE #Containerization #Microservices #DevOpsEngineer #CloudNative #Linux #TechLearning #Debugging #PlatformEngineering #AWS #Azure #GCP
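A few --format templates that cover the items listed above — my_container is a placeholder name, and each template path matches the structure of the JSON that docker inspect emits:

```shell
# Container IP (iterates networks, so it works beyond the default bridge)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my_container

# Environment variables, one per line
docker inspect -f '{{range .Config.Env}}{{println .}}{{end}}' my_container

# Volume mounts: host source -> container destination
docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' my_container

# Exit code of a stopped container — handy when debugging crashes
docker inspect -f '{{.State.ExitCode}}' my_container
```

The -f templates use Go's text/template syntax, which is why jq is the friendlier option when you just want to explore the full JSON.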
🚨 Kubernetes Core Architecture — If You Don't Get This, You're Guessing 🚨

Most people say they "know" Kubernetes… but all they really do is run kubectl commands. That's not understanding — that's memorizing shortcuts. If you don't understand what's happening behind the scenes, you're just hoping things work.

Here's the ONE mental model you actually need 👇

🧠 Kubernetes = Brain vs Muscle

🔥 Control Plane (The Brain) — this is where all decisions are made:
• API Server → the front door (everything goes through this)
• Scheduler → decides which node runs your Pod
• Controller Manager → keeps fixing things until desired = actual
• etcd → stores the entire cluster state (your source of truth)
👉 If this goes down, your cluster is basically dead.

⚙️ Worker Nodes (The Muscle) — this is where your applications actually run:
• Kubelet → connects the node to the control plane
• Container Runtime → runs containers (containerd/Docker)
• Pods → the smallest unit where your app lives
👉 If these fail, apps crash — but the cluster still exists.

🌐 Networking (The Part Everyone Ignores… Until It Breaks):
• Pods communicate over the cluster network
• Services expose Pods (internally + externally)
• DNS makes everything discoverable
👉 If you don't get this, debugging will destroy you.

⚠️ Reality check — if you can't:
• Explain how a Pod is scheduled
• Trace request → Service → Pod
• Tell what happens when a node dies
then you don't understand Kubernetes. You're just using it blindly.

💡 What actually matters (focus here):
1. Pod lifecycle
2. Scheduling flow
3. Service routing
4. Node communication
5. Failure handling

🧩 Mental model: Kubernetes is just a "Desired State Engine."
You say: "I want 3 Pods running."
Kubernetes says: "Done. And I'll keep fixing it if anything breaks."

#kubernetes #devops #cloudcomputing #k8s #docker #container #backenddeveloper #softwareengineering #linux #cloudnative #aws #azure #gcp #microservices #programming #techcontent
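The "desired state engine" idea is exactly what a Deployment manifest expresses. A minimal sketch — the name, labels, and image are illustrative placeholders:

```yaml
# You declare the desired state; the controller manager reconciles
# actual state toward it, rescheduling Pods whenever one dies.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3          # "I want 3 Pods running"
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.27-alpine
```

Delete one of the three Pods by hand and the Controller Manager immediately creates a replacement — that reconciliation loop, not any kubectl command, is the core of Kubernetes.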
Bloated images slow deployments, eat storage, and quietly expand your attack surface. Keeping containers lean isn't an advanced skill — it's a fundamental one most beginners skip.

🐳 I reduced a Docker image from 1.5 GB to 72 MB — a 95% reduction. Here's exactly how. These are the 7 practices I follow on every image I ship:

1. 🏔️ Start with a small base image ➡️ Alpine or slim variants instead of full OS images. This single change can cut hundreds of MBs before you write a single line of your own code.
2. 🏗️ Use multi-stage builds ➡️ Build in one stage, copy only the final artifact to the next. Your compiler, test runner, and dev tools never touch production.
3. 🎯 Install only what you need ➡️ Every extra package adds size and attack surface. Be ruthless in production environments.
4. 🧹 Clean cache in the same RUN command ➡️ If you remove cache in a separate layer, Docker has already committed the bloat. Chain it with && or it doesn't count.
5. 🔗 Reduce layers by chaining commands ➡️ Each RUN instruction creates a new layer. Combine related commands to keep your image history clean and compact.
6. 🚫 Add a .dockerignore file ➡️ node_modules, .git, logs, local configs — none of it belongs in your build context. This file is the first thing I set up.
7. 🔒 Never run as root ➡️ Create a dedicated user with minimal privileges. It's a small change with a meaningful security payoff.

#Docker #DevOps #Linux #Containers #CloudEngineering #AWS
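Point 4 is the one that trips people up most, because the broken version looks reasonable. A sketch of both forms (curl stands in for whatever package you need):

```dockerfile
# WRONG: the apt cache is committed in the first layer;
# deleting it in a later layer hides it but doesn't shrink the image
RUN apt-get update && apt-get install -y curl
RUN rm -rf /var/lib/apt/lists/*

# RIGHT: install and clean in one RUN, so the cache never lands in any layer
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl \
 && rm -rf /var/lib/apt/lists/*
```

Layers are additive: a later RUN can only mask files from earlier layers, never reclaim their space, which is why the cleanup must share the layer with the install.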
🗓️ Day 38/100 — 100 Days of AWS & DevOps Challenge

Today: pull an image, give it a new tag. Two commands.

$ sudo docker pull busybox:musl
$ sudo docker tag busybox:musl busybox:blog

Same Image ID. That's the detail worth understanding.

docker tag doesn't copy anything. It creates a new pointer to the same underlying image layers. Both busybox:musl and busybox:blog share the same 1.41MB of storage — tagging is free in terms of disk space. You can have 50 tags on the same image and it still only occupies the storage of one image.

Why this matters in CI/CD pipelines: this is exactly how image promotion works in production. A build produces myapp:build-456. Tests pass. The pipeline re-tags it:

$ docker tag myapp:build-456 myapp:staging
$ docker tag myapp:build-456 myapp:latest

No new image is created. No layers are duplicated. The same image — tested and verified — now carries multiple tags that represent its promotion status.

When production needs a rollback:

$ docker tag myapp:build-455 myapp:latest

One command. The previous build is live again. Because tags are just labels.

One more concept worth knowing: tags are mutable. busybox:latest today might be a different image tomorrow when the maintainer updates it. If you need to pin to a specific image permanently, use the digest:

$ docker pull busybox@sha256:abc123def...

A digest is immutable — it always refers to the exact same image layers regardless of when or where it's used. For production deployments, prefer digests over tags.

Full tagging guide + Q&A on GitHub 👇
https://lnkd.in/gvUtPawg

#DevOps #Docker #Containers #CICD #Linux #100DaysOfDevOps #KodeKloud #LearningInPublic #CloudEngineering #ContainerRegistry #Kubernetes
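One way to find the digest to pin, sketched with the same busybox example (the template path queries the image's RepoDigests field; output depends on which registry you pulled from):

```shell
# After pulling, ask the local image for its registry digest
docker inspect -f '{{index .RepoDigests 0}}' busybox:musl
# → prints something like docker.io/library/busybox@sha256:<digest>

# Pin deployments to that immutable reference instead of a mutable tag
docker pull busybox@sha256:<digest>
```

The tag can move; the digest can't — so the second command is guaranteed to fetch byte-identical layers no matter when it runs.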
Below I'm sharing some practical insights from my own experience. If you're new to Docker Hardened Images (DHI) or hitting the same issue I did, this should help.

See the link below for the available Docker Hardened Images:
https://lnkd.in/gy-QATjv

DHI is used in production because it is ultra-lightweight and built on Alpine and Debian Linux. That lightweight nature means it doesn't contain sh, apt, curl, sudo, or wget. For further details, check out the link below:
https://lnkd.in/gXZ3Whk5

The command to open a terminal in a DHI container differs from the usual one: docker exec -it <container-name> /bin/bash does not work. For DHI, use docker exec -it <container-name> /bin/sh instead.

To use DHI, you first need to log in to dhi.io from your system or cloud terminal, and then run Compose, build, or pull the DHI image.

Take a look at my other posts on similar topics:
1. https://lnkd.in/gBX6cGyw
2. https://lnkd.in/gNUXSCs7

#Docker #DevOps #Containerization #LearnDevOps #coding
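The exec difference the post describes, sketched with a placeholder container name (my-dhi-container is illustrative):

```shell
# The usual debug command — typically fails on hardened images,
# because bash is not included
docker exec -it my-dhi-container /bin/bash

# What works per the post: fall back to the minimal shell
docker exec -it my-dhi-container /bin/sh
```

If an image ships no shell at all, neither command works; in that case debugging tends to happen from outside the container, e.g. with docker inspect and docker logs.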