Sharing some practical insights from my own experience. If you're new to Docker Hardened Images (DHI) or running into the same issue I did, this should help.
The catalog of available Docker Hardened Images is here: https://lnkd.in/gy-QATjv
DHI is used in production because it is ultra-lightweight and built on Alpine and Debian Linux. Being lightweight means the images don't ship with bash, apt, curl, sudo, or wget. For further details, check out: https://lnkd.in/gXZ3Whk5
Getting a terminal inside a DHI container also differs from the usual command. 'docker exec -it <container-name> /bin/bash' does not work; for DHI, use 'docker exec -it <container-name> /bin/sh' instead.
To use DHI, first log in to 'dhi.io' from your system or cloud terminal, and then run Compose, build, or pull the DHI image.
Take a look at the other parts of this series:
1. https://lnkd.in/gNUXSCs7
2. https://lnkd.in/gHTYzUZ4
#Docker #DevOps #Containerization #LearnDevOps #coding
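A minimal sketch of the commands involved (the image reference and container name are placeholders; check the DHI catalog for the exact path, as the registry layout shown here is an assumption):

docker login dhi.io                            # authenticate first, as described above
docker pull dhi.io/<namespace>/<image>         # then pull, build on top of, or reference the image in Compose
docker run -d --name my-dhi-app dhi.io/<namespace>/<image>
docker exec -it my-dhi-app /bin/bash           # fails: bash is not included in the hardened image
docker exec -it my-dhi-app /bin/sh             # works on variants that ship a minimal shell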
🚨 CI Build Success… But Snyk Scan Failed?
Faced an interesting issue today in Azure DevOps 👇
✔️ Docker image was built successfully
❌ But Snyk scan failed with: "SNYK-CLI-0000: Image does not exist for the current platform"
At first, it looked like the image wasn’t available… but that wasn’t the real problem.
💡 Root Cause:
👉 Platform mismatch (amd64 vs arm64). The image existed, but Snyk couldn’t resolve it for the current platform.
✅ Fix:
docker build --platform=linux/amd64 -t <image> .
And in the pipeline:
env:
  DOCKER_DEFAULT_PLATFORM: linux/amd64
🎯 Key Takeaway: Before debugging CI failures, always check:
- Platform compatibility
- Image tag correctness
- Registry availability
💭 Small issues like this can consume hours if you don’t spot the pattern early. Sharing this so it saves someone else’s time 🙌
#DevOps #AzureDevOps #Docker #Snyk #CICD #Debugging #LearningInPublic
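A minimal Azure Pipelines step showing where the fix lands (the image name and step layout are placeholders, not taken from the original pipeline):

steps:
  - script: |
      docker build --platform=linux/amd64 -t myapp:latest .
    displayName: Build image for linux/amd64
    env:
      DOCKER_DEFAULT_PLATFORM: linux/amd64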
🏌️♂️ 𝗪𝗲𝗹𝗰𝗼𝗺𝗲 𝘁𝗼 𝗺𝘆 𝗨𝗣𝗦𝗞𝗜𝗟𝗟 2.0 𝗷𝗼𝘂𝗿𝗻𝗲𝘆 ~ "𝗕𝘂𝗶𝗹𝗱-𝗜𝘁, 𝗕𝗿𝗲𝗮𝗸-𝗜𝘁, 𝗙𝗶𝘅-𝗜𝘁"
Turning chaos into clarity and mistakes into real-world engineering experience 💻
“𝙈𝙮 𝙘𝙤𝙙𝙚 𝙬𝙤𝙧𝙠𝙚𝙙 𝙤𝙣 𝙢𝙮 𝙢𝙖𝙘𝙝𝙞𝙣𝙚” 𝙞𝙨 𝙩𝙝𝙚 𝙘𝙤𝙧𝙥𝙤𝙧𝙖𝙩𝙚 𝙫𝙚𝙧𝙨𝙞𝙤𝙣 𝙤𝙛 “𝙏𝙝𝙚 𝙙𝙤𝙜 𝙖𝙩𝙚 𝙢𝙮 𝙝𝙤𝙢𝙚𝙬𝙤𝙧𝙠.” 🤡
Currently in my #learning phase — diving deep into Linux, Git, CI/CD, and Cloud/DevOps ☁️
And honestly… real learning starts when things break 😅
You haven’t truly started this journey until you’ve:
🔐 Broken Linux permissions and locked yourself out of a server
⚙️ Spent hours debugging a CI/CD pipeline because of a tiny YAML indentation
🐳 Faced container failures and Docker issues out of nowhere
🌐 Deployed something… and then realized it works only on localhost
Fixed an issue and thought: “Okay… now I actually understand what’s happening under the hood.”
Here’s what I’m realizing:
🏌️♂️ You don’t master DevOps by just reading documentation
🏌️♂️ You master it by troubleshooting, breaking systems, and rebuilding them better
🏌️♂️ Every failed build, misconfigured pipeline, or network issue is part of the learning curve
From understanding Linux internals to exploring cloud infrastructure, automation, and system design — this is just the beginning 🚀
#LearningJourney #DevOps #Linux #CloudEngineering #CI/CD #Upskill #KeepBuilding #GrowthMindset #TechHumor
🗓️ Day 39/100 — 100 Days of AWS & DevOps Challenge
Today's task: a developer made changes inside a running container and wants to preserve that work as a new image.
$ sudo docker commit ubuntu_latest beta:xfusion
docker commit takes a snapshot of the container's current filesystem state — everything installed, created, or modified inside it — and captures it as a new image layer.
The most important distinction to understand: docker commit is NOT the production way to create images. It's the pragmatic way. Here's why it matters to know both:
The right way for production: Dockerfile. Every instruction is a layer, every layer is documented, the whole thing lives in version control. Anyone can rebuild the image identically at any time. docker history my-image shows exactly how it was built.
The right way for this scenario: docker commit. A developer has been working inside a container for hours — installed tools, configured things, made changes. They need a snapshot before something changes or before the container is removed. Writing a Dockerfile retroactively from memory isn't realistic. Commit captures exactly what exists right now.
What docker commit does NOT capture: mounted volumes. Any data in volume-mounted directories lives on the host, not in the container's union filesystem, and is excluded from the commit. This catches people off guard when they commit a container running a database — the data files are in a volume, not in the image.
Full breakdown + Q&A on GitHub 👇 https://lnkd.in/gPXMuD_X
#DevOps #Docker #Containers #Linux #100DaysOfDevOps #KodeKloud #LearningInPublic #CloudEngineering #Containerization #Kubernetes #CICD
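A minimal sketch of that workflow, using the names from the task above (the -m message flag is just good practice, not part of the original command):

docker commit -m "snapshot after manual setup" ubuntu_latest beta:xfusion
docker history beta:xfusion        # the committed layer appears, but it isn't self-documenting the way Dockerfile steps are
docker run -it --rm beta:xfusion   # start a container from the snapshot to verify the captured state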
🚀 Day 6 of my 14-Day Docker Journey | Docker Networking (DevOps Series) 🔥
Continuing my 14-Day Docker Series, today I explored one of the most powerful concepts in containerization:
👉 Docker Networking
🧠 The Problem I Understood
In real-world applications, we don’t run just one container… We have a frontend, a backend, and a database.
💥 Question: How do these containers communicate with each other?
💡 The Solution: Docker Networks
👉 Docker allows containers to communicate using networks + internal DNS
✔ No need to remember IP addresses
✔ Just use container names
🛠️ Hands-on I Performed (minimal sketch at the end of this post 👇)
✔ Created my own custom network: docker network create mynet
✔ Ran multiple containers in the same network
✔ Connected containers using names (not IPs)
✔ Tested communication: ping mongodb
💥 Successfully connected one container to another 🔥
🧠 Extra Learning (Self Exploration)
Went deeper into:
✔ Types of Docker networks (bridge, host, none, overlay, macvlan)
✔ Difference between default vs custom bridge
✔ Internal vs external communication
🎯 Real DevOps Insight
👉 Docker Networking is the foundation of:
- Microservices architecture
- Multi-container applications
- Scalable systems
💬 If you're on a DevOps journey, let’s connect and grow together!
#Docker #DevOps #LearningInPublic #CloudComputing #AWS #Networking #Linux #Containers #TechJourney #BuildInPublic
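A minimal sketch of the hands-on steps above (the mongo and alpine images and the "app" container name are illustrative; the post itself only names the mynet network and a mongodb container):

docker network create mynet
docker run -d --name mongodb --network mynet mongo
docker run -it --rm --name app --network mynet alpine ping -c 3 mongodb   # "mongodb" resolves via Docker's embedded DNS on the user-defined network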
🚀 From 1.5 GB → 50 MB Docker Image (95% Reduction) 🐳
I recently reduced my Docker image size from 1.5 GB to just 50 MB — that’s a 95% improvement. And honestly? This wasn’t about advanced tricks… it was about doing the basics consistently.
⚠️ Why this matters: Oversized images =
❌ Slower deployments
❌ Higher storage costs
❌ Bigger attack surface
👉 Lean containers aren’t optional in DevOps — they’re a discipline.
🔧 7 Practices I Follow in Every Build (sketch at the end of this post 👇):
1️⃣ Use minimal base images — Alpine or slim variants cut hundreds of MB instantly.
2️⃣ Multi-stage builds are a must-have — build tools stay in one stage, the final image stays clean.
3️⃣ Install only what’s needed — every extra package = unnecessary risk + size.
4️⃣ Clean cache in the SAME layer — otherwise, Docker still keeps the junk.
5️⃣ Chain RUN commands — fewer layers = smaller images.
6️⃣ Use a .dockerignore file — keep out node_modules, .git, logs, env files.
7️⃣ Never run as root — simple step → big security win.
#Docker #DevOps #CloudEngineering #AWS #Containers #Linux #DevOpsJourney #90DaysOfDevOps
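A minimal multi-stage Dockerfile sketch applying most of these practices, assuming a Node.js app (base image, file paths, and commands are illustrative, not taken from the original build):

FROM node:20-alpine AS build          # small base image
WORKDIR /app
COPY package*.json ./
RUN npm ci                            # dev dependencies live only in this stage
COPY . .
RUN npm run build                     # assumes a "build" script that emits dist/

FROM node:20-alpine                   # runtime stage starts clean
WORKDIR /app
ENV NODE_ENV=production
COPY --from=build /app/package*.json ./
RUN npm ci --omit=dev && npm cache clean --force   # only prod deps; cache cleaned in the same layer
COPY --from=build /app/dist ./dist
USER node                             # don't run as root (the node image ships a non-root "node" user)
CMD ["node", "dist/index.js"]

Pair this with a .dockerignore (node_modules, .git, logs, .env) so those files never enter the build context.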
𝗜 𝗿𝗲𝗱𝘂𝗰𝗲𝗱 𝗮 𝗗𝗼𝗰𝗸𝗲𝗿 𝗶𝗺𝗮𝗴𝗲 𝗳𝗿𝗼𝗺 𝟭.𝟱 𝗚𝗕 ➡ 𝟱𝟬 𝗠𝗕 (𝟵𝟱.𝟮% 𝘀𝗺𝗮𝗹𝗹𝗲𝗿). 𝗛𝗲𝗿𝗲'𝘀 𝗵𝗼𝘄:
Bloated images slow down deployments, eat storage, and create security risks. Keeping containers lean is one of the most practical skills in DevOps.
𝟳 𝗽𝗿𝗮𝗰𝘁𝗶𝗰𝗲𝘀 𝗜 𝗳𝗼𝗹𝗹𝗼𝘄 (snippets at the end of this post 👇):
1. 𝗨𝘀𝗲 𝘀𝗺𝗮𝗹𝗹 𝗯𝗮𝘀𝗲 𝗶𝗺𝗮𝗴𝗲𝘀 — Alpine or slim variants instead of full OS images. Immediately cuts hundreds of MBs.
2. 𝗠𝘂𝗹𝘁𝗶-𝘀𝘁𝗮𝗴𝗲 𝗯𝘂𝗶𝗹𝗱𝘀 — build in one stage, copy only the final artifact. Dev tools never make it into production.
3. 𝗜𝗻𝘀𝘁𝗮𝗹𝗹 𝗼𝗻𝗹𝘆 𝘄𝗵𝗮𝘁 𝘆𝗼𝘂 𝗻𝗲𝗲𝗱 — every extra package adds size and attack surface. Be strict in production.
4. 𝗖𝗹𝗲𝗮𝗻 𝗰𝗮𝗰𝗵𝗲 𝗮𝗳𝘁𝗲𝗿 𝗶𝗻𝘀𝘁𝗮𝗹𝗹𝘀 — remove cache in the same RUN command so the layer stays lean.
5. 𝗥𝗲𝗱𝘂𝗰𝗲 𝗗𝗼𝗰𝗸𝗲𝗿 𝗹𝗮𝘆𝗲𝗿𝘀 — chain commands with && so each step doesn't create a new layer.
6. 𝗨𝘀𝗲 .𝗱𝗼𝗰𝗸𝗲𝗿𝗶𝗴𝗻𝗼𝗿𝗲 — keeps node_modules, .git, logs, and local configs out of your image context.
7. 𝗗𝗼𝗻'𝘁 𝗿𝘂𝗻 𝗮𝘀 𝗿𝗼𝗼𝘁 — create a dedicated user. Minimal privileges = better security posture.
These are not advanced tricks — they're fundamentals. But most beginners skip them.
I'm actively applying these while building real 𝗗𝗼𝗰𝗸𝗲𝗿 𝗮𝗻𝗱 𝗗𝗲𝘃𝗢𝗽𝘀 𝗽𝗿𝗼𝗷𝗲𝗰𝘁𝘀. Every image I ship, I ask: is this as lean as it can be?
Which of these do you already use? 𝗗𝗿𝗼𝗽 𝗶𝘁 𝗶𝗻 𝘁𝗵𝗲 𝗰𝗼𝗺𝗺𝗲𝗻𝘁𝘀 👇
#Docker #DevOps #Linux #Containers #CloudEngineering #AWS #DevOpsJourney #90daysofdevops
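A small sketch of practice 4 for a Debian-based image (the package is just an example): the cache has to be removed in the same RUN instruction, otherwise the layer that installed it still carries the cached files.

RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*

And a typical .dockerignore for practice 6 (entries are examples, adjust per project):

node_modules
.git
*.log
.env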
This visual does a great job of communicating how to reduce Docker image size in a simple and engaging way. The "before vs after" comparison (1.5 GB → 50 MB, 95.2% smaller) is especially effective and immediately highlights the impact of optimization. The design is clean and modern, and the use of illustrations makes a technical topic more approachable. The key techniques listed (Alpine base, multi-stage builds, .dockerignore, avoiding the root user, and layer caching) are all relevant best practices, which adds real value.
😩 I installed … and thought I had become a DevOps engineer overnight. Reality slapped me 💥
👉 Containers not running
👉 Ports not working
👉 “Why is localhost not opening???” 😭
I almost gave up. Then I realized something simple:
🚨 Docker is NOT hard. Your basics are weak.
So I stopped crying… and started fixing 👇
✔ Learned what a container actually is
✔ Understood ports (not just copy-paste)
✔ Ran simple commands again and again
And suddenly… things started working.
Lesson:
👉 Tools don’t make you skilled
👉 Understanding does
You don’t need 100 tools. You need clarity on 1 tool.
Now my rule: Learn less. Understand more. Build daily. 💻🔥
👇 If this hit you: Like 👍 | Follow 🔔 | Repost 🔁
#docker #devops #linux #learning #buildinpublic #beginners