400MB → 2GB. Six months. Zero code changes. CI: 3min → 20min. One Dockerfile. All of it.

ANSWER: (D) Multi-stage builds

Before (1.5–2GB):

```dockerfile
FROM node:22        # 1.1GB base
COPY . .
RUN npm install     # ships dev deps too
```

After (250–350MB):

```dockerfile
FROM node:22-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

FROM node:22-alpine
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/src ./src
CMD ["node", "src/app.js"]
```

75% smaller. One change.

5 FIXES RANKED BY IMPACT:
1️⃣ Multi-stage — 50–75% smaller
2️⃣ Alpine base — saves ~900MB
3️⃣ Prod deps only — saves ~300MB
4️⃣ .dockerignore — saves GBs
5️⃣ Layer ordering — faster CI

DIAGNOSE:

```bash
docker history myimage:latest
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock \
  wagoodman/dive myimage:latest
```

Ship fast. Then fix the Dockerfile.

Biggest image you inherited? 👇

#Docker #DevOps #CICD #30DaysOfDevOps
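The .dockerignore fix above depends on what you exclude. A minimal sketch; the entries are typical assumptions, adjust per project:

```
# .dockerignore — keeps the build context (and COPY . .) small
node_modules
.git
*.log
dist
coverage
.env
Dockerfile
```

Without it, `COPY . .` happily ships your local node_modules, git history, and logs into the image.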
Roba Nath Basnet’s Post
Every line in a Dockerfile is a deliberate decision. Most people write them without knowing why.

A Dockerfile is not a shell script. It is a set of immutable, cached, layered instructions that build a reproducible image. Understanding the difference changes how you write them. Let me walk through the decisions that matter most.

```dockerfile
FROM node:14
```

This is not just "I need Node." It is your entire foundation. The base image determines what OS, what shell, and what system libraries your container inherits. Choose it deliberately.

```dockerfile
ENV NODE_ENV=production
```

Bake configuration into the image at build time so the container needs no external setup at runtime. This is the opposite of configuration drift.

```dockerfile
WORKDIR /usr/src/app
```

Every subsequent instruction resolves paths relative to this. It keeps your container organized and your COPY commands predictable.

Here is the most important ordering insight most developers miss:

```dockerfile
COPY package*.json ./
RUN npm install --production
COPY . .
```

Why copy package.json first, install, then copy the rest of the code? Because of Docker's layer cache. 🧠

Docker caches each instruction as a layer. If a layer's inputs have not changed, it reuses the cache and skips execution. Dependencies (package.json) change rarely. Code changes constantly. By copying them separately, you ensure that npm install only reruns when your dependencies actually change. Swap the order and you reinstall node_modules on every single code change. On a large project, that is minutes wasted per build.

```dockerfile
HEALTHCHECK CMD curl -fs http://localhost:$PORT || exit 1
```

This is not for your benefit. It is for the orchestrator. Docker and Docker Swarm act on HEALTHCHECK status directly; Kubernetes ignores the Dockerfile HEALTHCHECK and defines its own liveness and readiness probes, but the principle is identical: orchestrators use health checks to decide whether to route traffic to a container. A container that starts but serves errors is worse than one that never starts.

```dockerfile
USER node
```

Drop root privileges before the process starts. A container running as root turns any vulnerability into a far more dangerous path toward the host. This line costs nothing. Skipping it costs potentially everything.
The Dockerfile is not boilerplate. Every line is architecture. What is the most counterintuitive Dockerfile practice you have come across? #Docker #Dockerfile #DevOps #SoftwareEngineering #Containers #BackendDevelopment #CloudNative #ContinuousDelivery #Security
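Putting the decisions above together, a minimal end-to-end sketch. The port, the server.js entry file, and curl being available in the image are assumptions here, not part of the original post:

```dockerfile
# Sketch assembling the lines discussed above. Adjust PORT,
# entry file, and base image for your app.
FROM node:14
ENV NODE_ENV=production
WORKDIR /usr/src/app

# Dependencies first: this layer is reused until package*.json changes
COPY package*.json ./
RUN npm install --production

# Code last: only this layer rebuilds on a code change
COPY . .

ENV PORT=3000
HEALTHCHECK CMD curl -fs http://localhost:$PORT || exit 1

# Drop root before the process starts
USER node
CMD ["node", "server.js"]
```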
🐳 CMD vs ENTRYPOINT in Docker — overwrite vs append.

Two instructions. Both define what runs when a container starts. But they behave very differently.

🔹 CMD — overwritten completely.
➡️ CMD defines the default command when the container starts. But pass anything at runtime and CMD is completely overwritten. Your new command takes over entirely — the original is gone. Think of it as a default setting on your phone. It works until you decide to change it.

🔹 ENTRYPOINT — appends, not overwrites.
➡️ ENTRYPOINT defines a fixed command that always runs. Whatever you pass at runtime does not overwrite it — it gets appended to it as an argument. Think of it as the application itself. You can give it different inputs, but you cannot swap the application out (short of `docker run --entrypoint`).

🔹 ENTRYPOINT + CMD together — the sweet spot.
➡️ This is where it gets powerful. ENTRYPOINT holds the fixed command. CMD holds the default arguments. At runtime you can overwrite the CMD arguments freely while ENTRYPOINT stays untouched — it only appends whatever you pass. This is the pattern you will see most in production Dockerfiles.

One-line summary:
CMD = overwritten entirely when you pass a command at runtime
ENTRYPOINT = not overwritten — runtime input is always appended to it

Huge thanks to Varun Joshi for an incredibly clear and practical explanation of this concept. The way he breaks it down makes everything click instantly. Highly recommend. 🙌

#Docker #CKA #Kubernetes #DevOps #LearningInPublic #Containers
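A tiny sketch of the combined pattern; the image name and the ping example are illustrative, not from the post:

```dockerfile
# ENTRYPOINT is the fixed program; CMD is the default argument.
FROM alpine
ENTRYPOINT ["ping", "-c", "1"]
CMD ["localhost"]
```

```bash
docker build -t pinger .
docker run pinger              # runs: ping -c 1 localhost
docker run pinger example.com  # CMD replaced: ping -c 1 example.com
```

Note the exec (JSON array) form: with the shell form, runtime arguments would not be appended to ENTRYPOINT.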
🚀 Day 2/5 of learning Docker Advanced

I used to think a Dockerfile is just a set of instructions…
👉 But it's actually a layered build system with caching. And this changed how I approach builds completely.

⸻

🧱 What happens during docker build?

Each instruction:
✔️ Creates a new layer
✔️ Gets cached (if unchanged)

So Docker doesn't rebuild everything every time.

⸻

❌ Mistake I used to make:

```dockerfile
COPY . .
RUN npm install
```

👉 Any small code change = dependencies reinstall again.

Better approach:

```dockerfile
COPY package.json .
RUN npm install
COPY . .
```

✔️ Dependency layer gets cached
✔️ Faster rebuilds
✔️ Efficient CI/CD pipelines

⸻

💡 Key realization: Docker build performance depends on layer ordering.

👉 Order your Dockerfile like:
1️⃣ Base image
2️⃣ System dependencies
3️⃣ App dependencies
4️⃣ Application code (last)

⸻

🔥 Small changes, big impact:
✔️ Use .dockerignore
✔️ Combine RUN commands
✔️ Avoid unnecessary packages
✔️ Choose lightweight base images

⸻

Now I don't just write Dockerfiles.
👉 I design them for performance.

Because: slow builds = slow pipelines = slow teams.

⸻

#Docker #DevOps #CI #Containers #LearningInPublic
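The four-step ordering above, as one sketch. The package names and entry file are assumptions for illustration:

```dockerfile
# 1️⃣ Base image (lightweight)
FROM node:22-alpine

# 2️⃣ System dependencies — rarely change, cached longest
RUN apk add --no-cache curl

WORKDIR /app

# 3️⃣ App dependencies — change occasionally
COPY package*.json ./
RUN npm ci

# 4️⃣ Application code — changes constantly, so it goes last
COPY . .

CMD ["node", "index.js"]
```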
CI/CD Pipeline Explained Simply (For Developers)

Many developers hear CI/CD all the time, but understanding it clearly makes a big difference in real projects.

🔹 What is CI/CD?
CI/CD = Continuous Integration + Continuous Deployment. It means: automatically test your code and automatically deploy it after every push.

🔹 How CI (Auto Check) Works
When you push code to GitHub, tools like GitHub Actions start working automatically. Behind the scenes, your project has a config file (.yml) with steps like:
Run tests
Check errors
Build the project

Example: php artisan test

Flow: Code Push → CI Runs → Tests Execute → Pass / Fail
✔ If tests fail → you fix the code
✔ If tests pass → move to the next step

🔹 How CD (Auto Deploy) Works
After CI passes, CD automatically pushes your code to the server. It can:
Upload files to the server
Run migrations
Restart services

Flow: CI Passed → Deploy Script Runs → Live Server Updated

🔹 Complete Flow
Code → Push → Test → Build → Deploy

🔹 Why CI/CD Matters
✔ No manual testing every time
✔ No manual deployment
✔ Faster development
✔ Fewer bugs in production
✔ Professional workflow

🔹 Simple Analogy
CI = Teacher checking your paper
CD = Submitting it automatically

Once you start using CI/CD, you can't go back to manual workflows. It saves time and makes your development process much more reliable.

What tools are you using for CI/CD? Drop your thoughts below. 👇

#CI #CD #DevOps #WebDevelopment #Laravel #GitHub #SoftwareEngineering
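A minimal sketch of the .yml config file described above, for GitHub Actions and a Laravel project (per the `php artisan test` example). The PHP version and the community setup-php action are assumptions:

```yaml
# .github/workflows/ci.yml — runs on every push
name: CI
on: push

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: shivammathur/setup-php@v2   # community action for PHP setup
        with:
          php-version: '8.2'
      - run: composer install --no-interaction
      - run: php artisan test             # the CI check from the post
```

A real pipeline would add a deploy job gated on this one (`needs: test`), which is the CD half.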
Understanding Docker Compose – Image Flow Made Simple

Ever wondered what happens behind the scenes when you run docker compose up? Here's a simplified breakdown.

🔹 1. Define Services
Everything starts with a docker-compose.yml file where you define services, images, networks, volumes, and environment variables.

🔹 2. Compose Reads Configuration
Docker Compose reads the YAML file and understands how your application is structured.

🔹 3. Pull Images
If images (from Docker Hub or other registries) are not available locally, they are pulled automatically.

🔹 4. Create Resources
Compose sets up:
Networks (for container communication)
Volumes (for persistent storage)

🔹 5. Start Containers
All defined services (like web, database, cache) are started as containers.

🔹 6. Application is Live 🎉
Containers communicate over the network, and your multi-service application runs seamlessly.

💡 Key Takeaway: With Docker + Docker Compose, you can manage complex multi-container applications with a single command — making development, testing, and deployment much easier.

#Docker #DevOps #Microservices #SoftwareEngineering #Containerization
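A minimal sketch of the compose file from step 1, covering the web/database/cache trio mentioned in step 5. Image tags, the port mapping, and the password are illustrative assumptions:

```yaml
# docker-compose.yml — three services, one default network, one volume
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    depends_on: [db, cache]
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: example   # illustrative only — use secrets in practice
    volumes:
      - dbdata:/var/lib/postgresql/data   # persistent storage (step 4)
  cache:
    image: redis:7-alpine

volumes:
  dbdata:
```

`docker compose up` then performs steps 2–6: parse, pull missing images, create the network and volume, and start all three containers.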
We had a simple problem. Or at least, it looked simple.

The code was working perfectly on my machine. I pushed it. It broke in production.

At first, we thought it was a bug. Then we checked logs. Then configs. Then dependencies. Hours passed.

The issue? Different environments.

On my machine:
• Node version was slightly different
• Some libraries were cached
• Environment variables were set locally
• OS behavior was slightly different

In production: everything was "correct." But not the same.

That's when you realize something uncomfortable:
- The problem is not your code.
- The problem is your environment.

This is the problem Docker solves.

Docker doesn't just run your application. It packages:
• Your code
• Your runtime
• Your dependencies
• Your system libraries
• Your configurations

into a container.

So instead of saying "It works on my machine," you say "It runs exactly the same everywhere."

Now development, testing, and production all use the same environment. No hidden differences. No silent mismatches.

But here's the deeper insight: Docker is not just about containers. It's about removing uncertainty.

Before Docker: environment = unpredictable variable
After Docker: environment = controlled input

That changes how systems are built. You can:
• Spin up environments instantly
• Scale services consistently
• Deploy without surprises
• Isolate services cleanly
• Reproduce bugs exactly

And most importantly: you stop debugging "why is this different?" and start focusing on actual problems.

Docker didn't just fix deployments. It fixed trust between environments. Because in real systems: consistency is more valuable than speed.

#Docker #DevOps #BackendEngineering #SystemDesign #SoftwareEngineering #AkashGautam
🐳 🐙 Docker Compose Tip #55: docker compose config advanced usage Most people use it for validation. It does much more! ```bash # List resources docker compose config --services docker compose config --volumes docker compose config --profiles # Change detection for CI caching docker compose config --hash='*' # Pin image digests for production docker compose config --resolve-image-digests > compose.resolved.yml # JSON for jq docker compose config --format json | jq '.services.web.environment' ``` Useful for: • Scripting — loop over services • CI caching — detect config changes with hashes • Reproducible deploys — pin image digests • Debugging — see raw vs interpolated config Full guide: https://lnkd.in/etRJgAkC #Docker #DockerCompose #Debugging #DevOps #BestPractices
Saga felt like the final boss of microservices. Until it turned into… chaos.

❗ The Problem
Our order flow looked simple on paper: A → B → C → ✅
Production reality: A → B → C → ❌ (the dreaded rollback loop)

What actually happened:
• If C failed, we had to "undo" A and B manually
• Compensating logic became 80% of our code
• One tiny bug → permanent data mismatch
• Adding one service = multiple new failure paths

🧩 The Root Cause
We stretched Saga beyond its limits. 5+ services whispering via events → no clear view of the system. Business logic got buried under a mountain of error handling.

🛠️ The Fix
We stopped chaining services blindly and moved to orchestration.

Before: A → B → C → D (choreography chaos)

After:
🧠 Orchestrator
├── A
├── B
├── C
└── D

The impact:
• One source of truth for the entire flow
• Built-in retries (no custom retry loops)
• Clear separation of concerns
• Services focus on logic, not failure handling

📌 Key Learning
• Saga works well for simple or well-bounded flows
• If your "undo" code is bigger than your feature code, your architecture is telling you something

⚡ Microservices don't fail because of scale. They fail because of unmanaged complexity.

💬 Are you still coding manual rollbacks… or letting an orchestrator handle it? 👇

#SystemDesign #Backend #Microservices #SoftwareArchitecture #Java #SpringBoot
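The core of the orchestrated approach can be sketched in a few lines. All names here are hypothetical, and real orchestrators (Temporal, Camunda, and the like) add persistence, retries, and timeouts on top of this idea:

```python
# Minimal orchestration-based saga: run steps in order; on failure,
# run the compensations of already-completed steps in reverse.

class Step:
    def __init__(self, name, action, compensate):
        self.name = name
        self.action = action          # does the work, may raise
        self.compensate = compensate  # undoes the work

class SagaOrchestrator:
    def __init__(self, steps):
        self.steps = steps

    def run(self):
        done = []
        for step in self.steps:
            try:
                step.action()
                done.append(step)
            except Exception:
                # Rollback lives in ONE place, in reverse order --
                # services no longer carry their own failure handling.
                for completed in reversed(done):
                    completed.compensate()
                return False
        return True

log = []

def fail():  # simulates step C failing in production
    raise RuntimeError("shipping service down")

steps = [
    Step("reserve-stock", lambda: log.append("A+"), lambda: log.append("A-")),
    Step("charge-card",   lambda: log.append("B+"), lambda: log.append("B-")),
    Step("ship-order",    fail,                     lambda: log.append("C-")),
]

ok = SagaOrchestrator(steps).run()
print(ok, log)  # False ['A+', 'B+', 'B-', 'A-']
```

When C fails, the orchestrator undoes B then A automatically; no service needs to know about any other service's compensation.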
Copilots are great at helping you write code. But most enterprise work doesn't start with: "Write me a function."

It starts with:
– "Upgrade this Java 8 service without breaking 40 dependencies"
– "Fix every high-severity vuln across 300 repos"
– "Add test coverage to a system no one understands"
– "Migrate this to the cloud and make sure nothing regresses"

That's not autocomplete. That's coordination. Context. Iteration. Validation.

It's:
→ reading entire codebases
→ understanding how systems interact
→ making changes across multiple files/services
→ running tests, fixing failures, repeating
→ integrating with CI/CD, scanners, infra

That's not a single prompt. It's a workflow. Until AI can operate across that entire loop… it's not really addressing where most engineering time goes.

We're still very early in how people think about this.