🐳 Docker doesn't have to be confusing. Here's the smart way to build Node.js images that actually work everywhere.

You know that "works on my laptop, breaks on the server" nightmare? This Dockerfile fixes it with tricks I've used in real deploys.

What makes this gold:
→ Starts super lean with node:20-alpine (way smaller than full Node images)
→ Copies package*.json first, so Docker caches your npm ci layer until deps change. Game-changer for CI/CD speed!
→ Runs as appuser (not root). Security teams love this; it limits the blast radius of container breakouts.
→ Installs only prod deps with npm ci --omit=dev (the modern replacement for the deprecated --only=production). No dev bloat in your final image.

Real talk: most people copy-paste Dockerfiles from Stack Overflow. This one's optimized: it cuts image size by 70-80% and rebuilds dramatically faster when only code changes. Perfect for your K8s journey.

DevOps folks, what's your Docker hack? Still doing docker run roulette? 😅

Quick beginner breakdown: Docker = the same app environment everywhere. Here's what this nails:
→ Image size: Alpine base + no dev deps = tiny containers
→ Build speed: layer caching (package.json first) = fast rebuilds
→ Security: non-root user = safer in production
→ Portability: one Dockerfile, zero "my machine" excuses

Try it: docker build -t my-node-app .
Pro tip: push to a registry, deploy to K8s, sleep easy. 🚀

#Docker #DevOps #NodeJS #Containers #CI_CD
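Putting the post's bullet points together, a sketch of such a Dockerfile might look like this (the entry point server.js, port 3000, and the appuser/appgroup names are illustrative assumptions, not from the post):

```dockerfile
# Lean base image
FROM node:20-alpine

WORKDIR /app

# Copy manifests first so the install layer is cached until deps change
COPY package*.json ./
RUN npm ci --omit=dev

# Copy application code after dependencies to keep the cache warm
COPY . .

# Create and switch to a non-root user (names are illustrative)
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

EXPOSE 3000
CMD ["node", "server.js"]
```

Build it with `docker build -t my-node-app .`; changing application code will reuse the cached `npm ci` layer.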
Optimize Node.js Docker Images with Lean Alpine Base and Layer Caching
Docker containers are isolated by default. So how do 3 services in the same app actually talk to each other? 🤔

When I Dockerized Pet Monitor, I had 3 services that all needed to communicate:
→ Frontend (React) calling the backend
→ Backend calling the notification microservice
→ Everything spinning up in the right order

The answer? A custom Docker network. 🌐

In my docker-compose.yml I defined:

networks:
  pet-monitor-network:
    driver: bridge

And added every service to it. Here's what that actually gives you:

1️⃣ Services can find each other by name
Inside Docker, instead of calling http://localhost:8081, the backend calls http://notification-service:8081. Docker handles the DNS automatically. No hardcoded IPs. No config headaches.

2️⃣ Port mapping controls what's exposed
"8080:8080" means host:container. My browser hits localhost:8080 → Docker forwards it into the backend container. The other ports stay internal unless I expose them.

3️⃣ depends_on controls startup order
Frontend and notification-service both wait for the backend container to start first. (Note: plain depends_on only orders startup; pair it with a healthcheck and condition: service_healthy if you need true readiness.) Because what's the point of a UI with no API behind it?

One network. Three services. Zero confusion about who's talking to whom. ✅

This is what makes Docker Compose so powerful: you're not just running containers, you're defining how an entire system is wired together.

What part of Docker networking tripped you up the most? 👇

#Docker #DevOps #Microservices #SpringBoot #CSUN #LearningInPublic
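A minimal docker-compose.yml sketch of the setup described above. The build paths and most port numbers are illustrative guesses; only the network name, the bridge driver, and ports 8080/8081 come from the post:

```yaml
services:
  backend:
    build: ./backend
    ports:
      - "8080:8080"           # host:container; the only backend port exposed to the host
    networks:
      - pet-monitor-network

  notification-service:
    build: ./notification-service
    networks:
      - pet-monitor-network   # reachable as http://notification-service:8081 inside the network
    depends_on:
      - backend

  frontend:
    build: ./frontend
    ports:
      - "3000:80"             # illustrative mapping for the React app
    networks:
      - pet-monitor-network
    depends_on:
      - backend

networks:
  pet-monitor-network:
    driver: bridge
```

Because every service joins pet-monitor-network, Compose's built-in DNS resolves each service name to its container, so no IPs ever appear in application config.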
My Docker image was 1.2GB. Then I learned this.

Most fresh developers build Docker images the wrong way, and don't even know it. You're shipping your entire toolbox just to deliver one tool.

I ran into this exact problem while containerizing my Flask app (QuickStay). The image was bloated, slow to push, and full of build tools that had zero business being in production.

The fix? Multi-stage builds. Here's the difference, dead simple:

Single-stage Dockerfile:
→ One FROM, one environment
→ Build tools + runtime all crammed together
→ Result: heavy image, larger attack surface, slower deployments

Multi-stage Dockerfile:
→ Stage 1 (Builder): install dependencies, compile, build
→ Stage 2 (Runner): copy ONLY what you need to run
→ Result: lean image, fewer vulnerabilities, faster CI/CD

After switching QuickStay to a multi-stage build, my image size dropped by over 60%. Same app. Cleaner container. Small change. Massive impact.

If you're serious about DevOps, multi-stage builds aren't optional. They're the standard.

Found this useful? Save it for your next Dockerfile. Drop your experience in the comments. Follow me for more DevOps content that actually comes from building real things.

#mananurrehman #DevOps #Docker #Dockerfile #Containerization #MultiStageBuild #CI #CICD #SoftwareEngineering #Backend #CloudComputing #AWS #GitHub #OpenToWork #DevOpsEngineer #LearningInPublic
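A sketch of what a two-stage Dockerfile for a Flask app like QuickStay could look like. The file layout, gunicorn entry point, and port are assumptions for illustration, not details from the post:

```dockerfile
# Stage 1 (Builder): full image with pip, compilers, and headers
FROM python:3.11 AS builder
WORKDIR /app
COPY requirements.txt .
# Install into an isolated prefix we can copy wholesale into the runner
RUN pip install --prefix=/install --no-cache-dir -r requirements.txt

# Stage 2 (Runner): slim runtime with no build tools
FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .
EXPOSE 5000
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]
```

Only the installed packages and application code reach the final image; compilers and pip caches stay behind in the builder stage.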
🚨 My Kubernetes pod kept restarting every few hours… and I had no clue why.

No errors in the logs. No crash messages. Everything looked normal. Still… the pod kept disappearing.

Out of curiosity, I ran:

kubectl describe pod <pod-name>

And found the real reason:
💥 OOMKilled (Exit Code 137)

That's when it hit me: the application wasn't crashing… Kubernetes was killing it due to memory exhaustion.

Here's what I identified 👇

1️⃣ No memory limits defined
The pod was allowed to consume unlimited memory. Eventually, it exhausted the node's memory and got terminated.
👉 Fix: always define resource requests and limits

resources:
  requests:
    memory: "256Mi"
  limits:
    memory: "512Mi"

2️⃣ JVM was not container-aware
The Java application calculated heap size based on the node's total memory, not the container limit.
👉 Fix: tune the JVM for container environments

-XX:+UseContainerSupport
-XX:MaxRAMPercentage=75.0

3️⃣ Memory leak in the application
Even after setting limits, memory usage kept increasing over time. Root cause: a background process was holding objects and not releasing them.
👉 Fix: monitor memory trends using Prometheus and Grafana. If memory steadily increases and doesn't drop, it's likely a memory leak.

💡 Key takeaways:
• Always define memory requests and limits
• Make your application container-aware
• Monitor trends, not just logs
• OOMKilled = container terminated by the system, not an app crash

This is one of the most common (and confusing) issues in Kubernetes. Have you faced something similar? Would love to hear how you debugged it 👇

#Kubernetes #DevOps #K8s #CloudNative #SRE #PlatformEngineering
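Combining fixes 1 and 2 above into one manifest might look like this sketch. The pod name, image, and CPU values are illustrative; the memory values and JVM flags come from the post:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: java-app          # illustrative name
spec:
  containers:
    - name: app
      image: registry.example.com/java-app:1.0.0   # placeholder image
      env:
        - name: JAVA_TOOL_OPTIONS
          # Size the heap from the container limit, not node memory
          value: "-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0"
      resources:
        requests:
          memory: "256Mi"
          cpu: "250m"
        limits:
          memory: "512Mi"
```

With a 512Mi limit and MaxRAMPercentage=75, the JVM caps its heap around 384Mi, leaving headroom for metaspace, threads, and native memory before the kernel's OOM killer steps in.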
Stop mixing up Docker Images and Containers. 🛑

I used to get these two confused all the time when I started. People use the terms interchangeably, but in the world of DevOps, that's a quick way to cause a headache.

The easiest way to wrap your head around it? The Cake Analogy. 🍰

1. The Image is your Recipe. It's just a file. It's a blueprint. It has your code, your OS, and your libraries sitting there quietly. You can't "run" a recipe, but you need it to build anything.

2. The Container is the Cake. When you actually run that Image, you get a Container. This is the "living" version of your app.

Here's why this matters for us in DevOps: once you have one solid "Recipe" (Image), you can bake 10, 50, or 100 identical "Cakes" (Containers) across any server in the world. They will all taste exactly the same. No more "but it worked on my machine" because the recipe never changes.

If you're just starting with Docker, which one did you find harder to grasp: the concept of the image, or the runtime container?

#DevOps #Docker #TechSimplified #LearningInPublic #CloudNative #DevOpsEngineer #DockerSeries
Stop getting stuck with "stale" code in Kubernetes! 🐳 ⛴️

One of the most common "why isn't my code updating?" bugs in K8s comes down to a simple setting: imagePullPolicy: IfNotPresent.

If you're using mutable tags (like :latest or :dev), here's what happens:
- You push a new image to the registry.
- You restart your Pod.
- Kubernetes sees the tag already exists on the node.
- It skips the pull and runs your old code. 🤦‍♂️

Here is the quick fix guide:

✅ Use imagePullPolicy: Always for development. It doesn't actually download the whole image every time; it just checks the registry for a new digest. If nothing changed, it uses the cache.

✅ Use immutable digests in production. Instead of my-app:v1, use my-app@sha256:[hash]. This ensures every single node is running the exact same bits, regardless of the pull policy.

✅ Use versioned tags. Avoid :latest. Use unique tags like :v1.0.1 or the Git commit hash. When the tag changes, IfNotPresent works perfectly because the new tag won't be on the node yet.

Don't let a cached image trick you into thinking your bug fix didn't work!

#Kubernetes #DevOps #CloudNative #Docker #SoftwareEngineering #K8sTips
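The three options above, sketched as a container spec fragment (image names are placeholders, and the digest is left as a placeholder exactly as in the post):

```yaml
spec:
  containers:
    - name: my-app
      # Development: mutable tag, but force a registry digest check on every start
      image: my-app:dev
      imagePullPolicy: Always

      # Production alternative 1: pin an immutable digest
      #   image: my-app@sha256:[hash]
      # Production alternative 2: unique versioned tag, safe with IfNotPresent
      #   image: my-app:v1.0.1
      #   imagePullPolicy: IfNotPresent
```

With a versioned tag or digest, the default IfNotPresent policy is both correct and efficient, since a new release always means a tag the node has never seen.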
🏗️ Dockerfile Anatomy: The 60-Second Guide

Think of a Dockerfile as a recipe. It's a simple text file that tells Docker how to package your app into a portable container.

Here are the essential "Ingredients":
FROM: The Base. (e.g., node:18) Starts your environment.
WORKDIR: The Kitchen. Sets the folder where everything happens.
COPY: The Delivery. Moves your code from your PC into the image.
RUN: The Prep. Installs your libraries (e.g., npm install).
EXPOSE: The Window. Documents which port your app listens on (it doesn't publish the port by itself).
CMD: The "Go!" Button. The command that starts your app.

💡 Why does this matter?
Immutability: It works the same on my machine and your server.
Layers: Docker caches each step, making builds lightning-fast.
Automation: No more "How do I set this up?" manuals.

🐳 The Dockerfile Evolution
Most developers start with a "Normal" image. It's easy, but it's heavy. By using Multi-Stage Builds, you can separate your build environment from your runtime environment.

🏗️ The "Normal" Approach (The Heavyweight)
What it includes: SDKs, compilers, build caches, and source code.
The Result: A bloated image (e.g., 800MB+) with a larger attack surface.

⚡ The "Multi-Stage" Approach (The Lightweight)
What it includes: Only the compiled binary and the bare-minimum runtime.
The Result: A slim, production-ready image (e.g., 50MB) that's faster to pull and more secure.

💻 Pro-Tip: The Dockerfile Secret Sauce
Check out this structure:
Stage 1 (Build): Use a heavy image like golang:1.21 to compile your app.
Stage 2 (Final): Copy only the compiled artifact into a tiny image like alpine or scratch.

Stop shipping your compilers to production! Your infrastructure (and your security team) will thank you. 🛠️

#Docker #DevOps #CloudNative #SoftwareEngineering #Containerization #Microservices #Efficiency
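A sketch of the Go two-stage pattern described in the pro-tip. The source path, binary name, and port are illustrative assumptions:

```dockerfile
# Stage 1 (Build): full Go toolchain for compilation
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
# Build a static binary so it runs on a minimal base like alpine or scratch
RUN CGO_ENABLED=0 go build -o /bin/app .

# Stage 2 (Final): ship only the compiled artifact
FROM alpine:3.19
COPY --from=build /bin/app /bin/app
EXPOSE 8080
CMD ["/bin/app"]
```

Swapping `alpine:3.19` for `scratch` shrinks the image further, at the cost of losing a shell and CA certificates, which you would then need to copy in explicitly.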
Built and published my SimpleTimeService – End-to-End DevOps Challenge 🚀

This project is more than a simple web app: it's a complete DevOps workflow built to simulate production-style delivery.

What's included:
🔹 Minimal Python web application (FastAPI)
🔹 Secure Docker containerization (non-root, read-only filesystem)
🔹 Kubernetes deployment with probes, limits, and service exposure
🔹 Infrastructure provisioning with Terraform (AWS VPC + EKS)
🔹 CI/CD automation using GitHub Actions
🔹 Security scanning using Trivy
🔹 Automated Kubernetes manifest updates with immutable image tags

Tech stack used:
🐳 Docker ☸️ Kubernetes (EKS) 🏗 Terraform ⚙️ GitHub Actions ☁️ AWS 🐍 FastAPI

Pipeline flow:
Code Push → Lint → Build → Test → Security Scan → Push Image → Update Manifest → Deploy

Production practices implemented:
✅ Non-root container execution
✅ Read-only filesystem
✅ Resource requests & limits
✅ Liveness & readiness probes
✅ Vulnerability scanning
✅ Immutable image versioning
✅ Infrastructure as Code

GitHub repo live now 💻 https://lnkd.in/gANtZR8V

#DevOps #AWS #Kubernetes #Terraform #Docker #GitHubActions #EKS #CloudNative #PlatformEngineering #DevSecOps
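The production practices in the checklist above translate into a container spec roughly like this sketch. Every name, port, probe path, and resource value here is an illustrative assumption; see the linked repo for the actual manifests:

```yaml
spec:
  containers:
    - name: simpletimeservice
      image: registry.example.com/simpletimeservice:v1.0.0   # immutable, versioned tag
      securityContext:
        runAsNonRoot: true              # non-root container execution
        readOnlyRootFilesystem: true    # read-only filesystem
      resources:
        requests:
          cpu: "100m"
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"
      livenessProbe:
        httpGet:
          path: /health
          port: 8000
        initialDelaySeconds: 5
      readinessProbe:
        httpGet:
          path: /ready
          port: 8000
```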
Earlier today, we had a pre-demo session with Samkeliso Dube and co-mentor Onuche Paul, where Samkeliso Dube walked us through a clean, production-focused implementation of a Multi-Stage Docker Build for a React App (Builder → Nginx Runtime).

Here's a quick look at what we built and validated:

Topic: Multi-Stage Docker Build for React Application
Focus: Optimizing container images for production

This wasn't just theory. We worked through a practical, real-world containerization workflow:
- Built a single-stage Docker image as a baseline
- Designed a multi-stage Dockerfile: Node.js as the builder stage, NGINX as a lightweight runtime
- Served the React app as static files in a production-ready container
- Verified application access via browser (:3000 / :80)
- Compared image sizes and observed a clear reduction in footprint

More importantly, we broke down the why behind this approach:
- Eliminating build tools from the final image reduces the attack surface.
- Smaller images improve deployment speed in CI/CD pipelines.
- A layer caching strategy improves build efficiency.
- Using Docker properly aligns with production best practices.

Thanks to Pravin Mishra and our lead mentor Praveen Pandey for the continuous guidance and support.

If you're working in DevOps, cloud engineering, or modern application deployment, this is a foundational pattern you need to master.

Let's build. Let's ship. Let's optimize.

#DevOps #Docker #NGINX #CloudEngineering #CI_CD #ReactJS #Containerization #TechLearning #DMI
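A sketch of the Builder → NGINX pattern from the session. Image versions and the build output directory are assumptions (Create React App emits `build/`, Vite emits `dist/`; adjust to your tool):

```dockerfile
# Stage 1: build the React app with Node.js
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: serve the static bundle with NGINX
FROM nginx:alpine
# CRA outputs to /app/build; use /app/dist for Vite
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

The Node toolchain, node_modules, and source files never reach the runtime image; only the compiled static files are served.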
I used to think Docker was just about getting an app to run anywhere. If it builds and starts, job done… right?

Not really. Once you start working in real production environments, you realize small Docker habits make a huge difference in security, performance, and reliability. Here are some of the practices I've picked up (with simple examples):

First: stop using `latest`. It feels convenient, but it can break things without warning.
Instead of:
FROM node:latest
Do this:
FROM node:18.17.1-alpine
Now your builds are predictable and consistent.

Second: always prefer official and minimal images. Smaller images = faster deployments + fewer vulnerabilities.
Example:
FROM python:3.11-alpine

Third: order your Dockerfile to use caching properly. Put things that change less often at the top.
Bad:
COPY . .
RUN npm install
Better:
COPY package.json package-lock.json ./
RUN npm install
COPY . .
Now dependencies don't reinstall every time you change a file.

Fourth: use multi-stage builds. Don't ship your entire development environment to production.
Example:
# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY . .
RUN npm install && npm run build
# Production stage
FROM node:18-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/index.js"]

Fifth: don't run containers as root. It's risky and unnecessary.
Example:
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

Sixth: scan your images for vulnerabilities. You don't want hidden issues going to production.
Example:
snyk container test my-app:1.0.0

Seventh: keep your images clean. Avoid unnecessary files, caches, and tools.
Example:
RUN npm ci --omit=dev

These aren't "advanced tricks." They're small habits. But together, they make your containers faster, safer, and much easier to manage in the long run.

What's one Docker practice you follow that others often ignore?

#Docker #DevOps #CloudComputing #Backend #SoftwareEngineering
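Here is a sketch that folds the habits above into a single Dockerfile (the `dist/index.js` entry point and user/group names are illustrative, following the post's own examples):

```dockerfile
# Pinned, minimal base (habits 1 and 2)
FROM node:18.17.1-alpine AS builder
WORKDIR /app
# Dependencies before code for cache hits (habit 3)
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage: runtime only (habit 4)
FROM node:18.17.1-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY package.json package-lock.json ./
# Production deps only, keeping the image clean (habit 7)
RUN npm ci --omit=dev
COPY --from=builder /app/dist ./dist
# Drop root before the process starts (habit 5)
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
CMD ["node", "dist/index.js"]
```

Habit 6 (scanning) happens outside the Dockerfile, in CI, against the built image tag.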
Every line in a Dockerfile is a deliberate decision. Most people write them without knowing why.

A Dockerfile is not a shell script. It is a set of immutable, cached, layered instructions that build a reproducible image. Understanding the difference changes how you write them. Let me walk through the decisions that matter most.

FROM node:14
This is not just "I need Node." It is your entire foundation. The base image determines what OS, what shell, what system libraries your container inherits. Choose it deliberately.

ENV NODE_ENV=production
Bake configuration into the image at build time so the container needs no external setup at runtime. This is the opposite of configuration drift.

WORKDIR /usr/src/app
Every subsequent instruction resolves paths relative to this. It keeps your container organized and your COPY commands predictable.

Here is the most important ordering insight most developers miss:

COPY package*.json ./
RUN npm install --production
COPY . .

Why copy package.json first, install, then copy the rest of the code? Because of Docker's layer cache. 🧠

Docker caches each instruction as a layer. If a layer's inputs have not changed, it reuses the cache and skips execution. Dependencies (package.json) change rarely. Code changes constantly. By copying them separately, you ensure that npm install only reruns when your dependencies actually change. Swap the order and you reinstall node_modules on every single code change. On a large project, that is minutes wasted per build.

HEALTHCHECK CMD curl -fs http://localhost:$PORT || exit 1
This is not for your benefit. It is for the orchestrator. Docker, Swarm, and Compose use it to decide whether a container is healthy and should receive traffic. (Kubernetes ignores Docker HEALTHCHECK and uses its own liveness and readiness probes instead.) A container that starts but serves errors is worse than one that never starts.

USER node
Drop root privileges before the process starts. A container running as root with a vulnerability can escape to the host. This line costs nothing. Skipping it costs potentially everything.

The Dockerfile is not boilerplate. Every line is architecture.

What is the most counterintuitive Dockerfile practice you have come across?

#Docker #Dockerfile #DevOps #SoftwareEngineering #Containers #BackendDevelopment #CloudNative #ContinuousDelivery #Security
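The lines discussed above, assembled into one Dockerfile. The PORT value, EXPOSE line, and the `server.js` start command are illustrative assumptions; the rest comes directly from the post:

```dockerfile
FROM node:14
ENV NODE_ENV=production
ENV PORT=3000

WORKDIR /usr/src/app

# Dependencies first: this layer is cached until package*.json changes
COPY package*.json ./
RUN npm install --production

# Code last: changing it invalidates only the layers below this point
COPY . .

EXPOSE 3000
# Used by Docker/Swarm/Compose to gate traffic (Kubernetes uses its own probes)
HEALTHCHECK CMD curl -fs http://localhost:$PORT || exit 1

# The node:14 image ships with a built-in non-root "node" user
USER node
CMD ["node", "server.js"]
```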