Docker containers are isolated by default. So how do 3 services in the same app actually talk to each other? 🤔

When I Dockerized Pet Monitor, I had 3 services that all needed to communicate:
→ Frontend (React) calling the backend
→ Backend calling the notification microservice
→ Everything spinning up in the right order

The answer? A custom Docker network. 🌐

In my docker-compose.yml I defined:

networks:
  pet-monitor-network:
    driver: bridge

And added every service to it. Here's what that actually gives you:

1️⃣ Services can find each other by name
Inside Docker, instead of calling http://localhost:8081, the backend calls http://notification-service:8081. Docker handles the DNS automatically. No hardcoded IPs. No config headaches.

2️⃣ Port mapping controls what's exposed
"8080:8080" means host:container. My browser hits localhost:8080 → Docker forwards it into the backend container. The other ports stay internal unless I expose them.

3️⃣ depends_on controls startup order
Frontend and notification-service both wait for the backend to be ready before starting. Because what's the point of a UI with no API behind it?

One network. Three services. Zero confusion about who's talking to whom. ✅

This is what makes Docker Compose so powerful: you're not just running containers, you're defining how an entire system is wired together.

What part of Docker networking tripped you up the most? 👇

#Docker #DevOps #Microservices #SpringBoot #CSUN #LearningInPublic
Docker Networking for Microservices with Docker Compose
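For reference, here is a minimal docker-compose.yml sketch of the wiring the post above describes. Only the network definition, the 8080/8081 ports, and depends_on come from the post; the service names, image names, and the frontend's port mapping are assumptions.

```yaml
# Hypothetical sketch of the setup described in the post; image names and the frontend port are assumptions.
services:
  backend:
    image: pet-monitor-backend          # assumed image name
    ports:
      - "8080:8080"                     # host:container, the only port exposed to the host
    networks:
      - pet-monitor-network

  notification-service:
    image: pet-monitor-notifications    # assumed image name
    expose:
      - "8081"                          # reachable inside the network as http://notification-service:8081
    networks:
      - pet-monitor-network
    depends_on:
      - backend

  frontend:
    image: pet-monitor-frontend         # assumed image name
    ports:
      - "3000:80"                       # assumed mapping
    networks:
      - pet-monitor-network
    depends_on:
      - backend

networks:
  pet-monitor-network:
    driver: bridge
```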
🐳 Docker doesn't have to be confusing. Here's the smart way to build Node.js images that actually work everywhere.

You know that "works on my laptop, breaks on server" nightmare? This Dockerfile fixes it with pro tricks I've used in real deploys.

What makes this gold:
• Starts super lean with node:20-alpine (way smaller than full Node images)
• Copies package*.json first—Docker caches your npm ci layer until deps change. Game-changer for CI/CD speed!
• Runs as appuser (not root)—security teams love this, it blocks container breakouts
• Only prod deps with --only=production. No dev bloat in your final image.

Real talk: most people copy-paste Dockerfiles from StackOverflow. This one's optimized—cuts image size 70-80%, builds 3x faster. Perfect for your K8s journey.

DevOps folks, what's your Docker hack? Still doing docker run roulette? 😅

"Quick Beginner Breakdown"
Docker = same app environment everywhere. Here's what this nails:
• Image size: Alpine base + no dev deps = tiny containers
• Build speed: layer caching (package.json first) = fast rebuilds
• Security: non-root user = safer in production
• Portability: one Dockerfile, zero "my machine" excuses

Try it: docker build -t my-node-app .
Pro tip—push to a registry, deploy to K8s, sleep easy. 🚀

#Docker #DevOps #NodeJS #Containers #CI_CD
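The post describes the Dockerfile but doesn't show it, so here is a hedged sketch of what such a file typically looks like; the user/group names, exposed port, and entrypoint file are assumptions, not the author's exact file.

```dockerfile
# Hypothetical reconstruction of the Dockerfile described above.
FROM node:20-alpine

WORKDIR /app

# Copy manifests first so the dependency layer stays cached until deps change
COPY package*.json ./
RUN npm ci --only=production

# Copy the rest of the source after dependencies are installed
COPY . .

# Run as a non-root user (user/group names are assumptions)
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

# Port and entrypoint file are assumptions
EXPOSE 3000
CMD ["node", "server.js"]
```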
Earlier today, we had a pre-demo session with Samkeliso Dube and co-mentor Onuche Paul, where Samkeliso Dube walked us through a clean and production-focused implementation of a Multi-Stage Docker Build for a React App (Builder → Nginx Runtime).

Here's a quick look at what we built and validated:
Topic: Multi-Stage Docker Build for React Application
Focus: Optimizing container images for production

This wasn't just theory. We worked through a practical, real-world containerization workflow:
• Built a single-stage Docker image as a baseline
• Designed a multi-stage Dockerfile: Node.js as the builder stage, NGINX as a lightweight runtime
• Served the React app as static files in a production-ready container
• Verified application access via browser (:3000 / :80)
• Compared image sizes and observed a clear reduction in footprint

More importantly, we broke down the why behind this approach:
• Eliminating build tools from the final image reduces attack surface
• Smaller images improve deployment speed in CI/CD pipelines
• Layer caching strategy improves build efficiency
• Using Docker properly aligns with production best practices

Thanks to Pravin Mishra and our lead mentor Praveen Pandey for the continuous guidance and support.

If you're working in DevOps, cloud engineering, or modern application deployment, this is a foundational pattern you need to master.

Let's build. Let's ship. Let's optimize.

#DevOps #Docker #NGINX #CloudEngineering #CI_CD #ReactJS #Containerization #TechLearning #DMI
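For anyone who wants to see the pattern in code, here is a minimal sketch of the builder → NGINX multi-stage Dockerfile described above; it is not the session's actual file, and the build output path is an assumption.

```dockerfile
# Hypothetical sketch of the builder -> NGINX pattern from the session.

# Stage 1: build the React app with Node.js
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# Assumes the production bundle lands in /app/build
RUN npm run build

# Stage 2: serve the static files with NGINX (no build tools in the final image)
FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```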
Stop mixing up Docker Images and Containers. 🛑

I used to get these two confused all the time when I started. People use the terms interchangeably, but in the world of DevOps, that's a quick way to cause a headache.

The easiest way to wrap your head around it? The Cake Analogy. 🍰

1. The Image is your Recipe.
It's just a file. It's a blueprint. It has your code, your OS, and your libraries sitting there quietly. You can't "run" a recipe, but you need it to build anything.

2. The Container is the Cake.
When you actually run that Image, you get a Container. This is the "living" version of your app.

Here's why this matters for us in DevOps: Once you have one solid "Recipe" (Image), you can bake 10, 50, or 100 identical "Cakes" (Containers) across any server in the world. They will all taste exactly the same. No more "but it worked on my machine" because the recipe never changes.

If you're just starting with Docker, which one did you find harder to grasp—the concept of the image or the runtime container?

#DevOps #Docker #TechSimplified #LearningInPublic #CloudNative #DevOpsEngineer #DockerSeries
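A quick shell sketch makes the analogy concrete: build the image ("recipe") once, then run several identical containers ("cakes") from it. The image and container names here are made up.

```sh
# One "recipe": build the image once (name is hypothetical)
docker build -t my-app:1.0 .

# Many "cakes": run several identical containers from that one image
docker run -d --name cake-1 my-app:1.0
docker run -d --name cake-2 my-app:1.0
docker run -d --name cake-3 my-app:1.0

# The image is still just a file on disk; the containers are the running instances
docker images my-app
docker ps --filter "ancestor=my-app:1.0"
```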
I broke my app 3 times while moving it from localhost:3000 to Kubernetes… and each failure taught me something critical about production systems.

Going from a local development setup to a fully containerized, production-ready AKS (Azure Kubernetes Service) architecture is rarely a straight line. I just wrapped up the Kubernetes deployment for SkyPredict (my ML-powered flight delay prediction application), and it turned into a real masterclass in cloud-native problem solving.

Building the ML models was the fun part—but engineering the infrastructure to serve them securely and reliably is where things got real.

Here are the major architectural challenges I solved:

Next.js Runtime Puzzle
Next.js statically embeds environment variables at build time, but Kubernetes requires runtime injection for secrets and configs.
Fix: Built a custom entrypoint.sh for the frontend Docker container. It injects Kubernetes Secrets at runtime into a global window object before React hydrates—bridging the build vs. runtime gap.

Closing the Backend Doors
Exposing backend services publicly is a major security risk, especially when serving ML models.
Fix: Moved the FastAPI backend (serving Classifier + Regressor models) to internal ClusterIP services, making it completely inaccessible from outside the cluster. Only the frontend communicates with it internally.

Unified Routing & HTTPS
A single domain was needed to serve both frontend and APIs securely.
Fix: Deployed an NGINX Ingress Controller with path-based routing (/ and /api). Combined with cert-manager, enabling auto-renewing Let's Encrypt SSL certificates with zero manual overhead.

It took a lot of YAML debugging, deployment retries, and "why is this pod not starting" moments—but seeing everything finally communicate flawlessly in production made it worth it.

Next up: diving deep into how I set up Prometheus and Grafana to monitor all of this infrastructure.

Take a look at the system architecture diagram below 👇

Would love feedback or to hear how others have solved similar Kubernetes challenges.

#Kubernetes #DevOps #SystemArchitecture #Nextjs #FastAPI #CloudNative #Azure #AKS #MachineLearning
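To illustrate the ClusterIP and Ingress pieces above, here is a hedged YAML sketch of an internal backend service plus path-based routing with a cert-manager annotation. All names, ports, hostnames, and the issuer are assumptions, not SkyPredict's real manifests.

```yaml
# Hypothetical sketch; service/host names, ports, and issuer are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: skypredict-backend
spec:
  type: ClusterIP            # internal only, no public exposure
  selector:
    app: skypredict-backend
  ports:
    - port: 8000
      targetPort: 8000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: skypredict
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod   # assumed issuer name
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - skypredict.example.com
      secretName: skypredict-tls
  rules:
    - host: skypredict.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: skypredict-backend
                port:
                  number: 8000
          - path: /
            pathType: Prefix
            backend:
              service:
                name: skypredict-frontend
                port:
                  number: 80
```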
Your Docker container works perfectly locally. You push it to Kubernetes. It breaks.

No crash. No clear error. Just wrong behavior, CrashLoopBackOff, or a service that returns garbage in production.

Every time, the root cause is the same — assumptions the container was making that Docker satisfied quietly, and Kubernetes simply doesn't. Docker locally is a generous host. Kubernetes in production is a strict one.

5 gaps that burn you most:

1. Environment variables that exist on your laptop but not in the cluster
Your app inherits from your shell, .env, docker-compose. In Kubernetes, the pod gets exactly what's in the manifest. AWS SDK calls fail silently. DB connections refuse. Pod is Running — but broken.

2. Resource limits causing OOMKill with zero logs
Pod runs for 2 minutes then disappears. No logs — the process was killed before it could write one. kubectl describe shows OOMKilled — but only if you check before the pod restarts. Miss the window and you're debugging a ghost.

3. localhost networking that works in Docker Compose, breaks in Kubernetes
In Kubernetes, localhost is the pod itself — not other services. An app hitting localhost:5432 fails immediately. The error looks like a DB problem, not a networking one.

4. Liveness probes killing healthy pods before they finish starting
Probe fires at second 10. App needs 25 seconds to init. Kubernetes marks it unhealthy, kills it. CrashLoopBackOff. The app never had a chance.

5. File permissions root ignores locally but non-root can't write in prod
Managed clusters enforce runAsNonRoot. Your container writes to /app/logs as root. Permission denied — buried in logs that look like an app crash.

The fix isn't making Kubernetes more permissive. It's making your container honest about what it needs.

Full breakdown with real code examples for EKS and GKE on Dev.to. Link in the comments 👇

What's the most confusing Docker-to-Kubernetes failure you've hit? Drop it below.

#Kubernetes #Docker #DevOps #EKS #GKE #Containers #SRE #CloudComputing #CareerOpportunities #DevOpsJobs
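As a rough illustration of what "honest about what it needs" can look like for gaps 1, 2, and 4 above, here is a hedged pod-spec fragment: env declared in the manifest, explicit memory limits, and probes that wait for startup. Names, ports, and sizes are assumptions.

```yaml
# Hypothetical pod spec fragment; all names and numbers are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: demo-app:1.0            # assumed image
      envFrom:
        - secretRef:
            name: demo-app-secrets   # gap 1: env must be declared in the manifest, not inherited
      resources:
        requests:
          memory: "256Mi"
        limits:
          memory: "512Mi"            # gap 2: size limits deliberately; a too-small limit means OOMKilled with no logs
      readinessProbe:
        httpGet:
          path: /healthz             # assumed health endpoint
          port: 8080
        initialDelaySeconds: 30      # gap 4: give the app time to finish starting
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 30
        periodSeconds: 10
```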
Most beginners think Docker just "runs code in a box." But the deeper you go, the more you realize how surgical the isolation actually is.

Here's something that surprised me early on:
Spin up 2 containers from the exact same image. One creates a file. The other can't see it at all.

Same image. Completely separate worlds. That's not a setting you enable. That's the default.

Each container gets its own isolated:
→ Filesystem: changes stay inside, forever
→ Process space: no cross-container visibility
→ Network stack: separate interfaces by default

Why does this matter in the real world?
→ Run 10 different app versions on one machine, no conflicts
→ Reproduce bugs in a clean env every single time
→ Kill a container, spin up a new one, zero side effects

This is the foundation of modern microservices. Not Kubernetes. Not CI/CD pipelines. Just: one container = one isolated world.

Have you ever faced a "conflict" issue that Docker solved for you? Let's talk in the comments! 👇

#Docker #DevOps #Backend #SoftwareEngineering #LearningInPublic #CloudComputing #DotNet #SystemDesign
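You can see the filesystem isolation for yourself in a few commands; a minimal sketch using the stock alpine image (container names are arbitrary):

```sh
# Two containers from the same image, each with its own writable layer
docker run -d --name one alpine sleep 300
docker run -d --name two alpine sleep 300

# Create a file in the first container
docker exec one sh -c 'echo hello > /tmp/demo.txt'

# The second container cannot see it: separate filesystems
docker exec two cat /tmp/demo.txt   # fails, the file does not exist here
```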
𝗘𝘃𝗲𝗿𝘆 𝗗𝗲𝘃𝗢𝗽𝘀 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿 𝗵𝗮𝘀 𝘀𝗲𝗲𝗻 𝘁𝗵𝗶𝘀.
```
STATUS: CrashLoopBackOff
```
And the first reaction is always the same —
🔄 Restart the pod
🔄 Restart again
🔄 Delete and recreate
🔄 Still crashing

Sound familiar? Here's what's actually happening and how to fix it in under 5 minutes 👇

What CrashLoopBackOff actually means:
Your container starts → crashes → Kubernetes restarts it → crashes again. Kubernetes keeps retrying with increasing delays. It's not a Kubernetes bug. Your application is broken at startup.

Step 1 — Read the logs BEFORE it dies:
```
kubectl logs <pod-name> -n <namespace> --previous
```
The `--previous` flag is the key most people miss. It shows logs from the last crashed container — not the current one.

Step 2 — Describe the pod:
```
kubectl describe pod <pod-name> -n <namespace>
```
Check the Events section at the bottom. That's where Kubernetes tells you exactly what went wrong.

The Most Common Root Causes:
❌ Wrong environment variable — app can't connect to DB at startup
❌ Missing secret or ConfigMap — app throws error and exits
❌ Wrong image tag — pulled a broken build
❌ Application port mismatch — liveness probe failing immediately
❌ Insufficient permissions — app can't read mounted volume

The Fix Checklist:
✅ Verify all env variables and secrets are correctly mounted
✅ Cross-check your image tag — never use `latest` in production
✅ Match your liveness probe port with your actual app port
✅ Check ConfigMap keys match what your app expects

The Real Lesson:
CrashLoopBackOff is never a Kubernetes problem. Kubernetes is just the messenger. Always read the message before blaming the platform.

#Kubernetes #DevOps #K8s #EKS #AWS #SRE #PlatformEngineering #CloudEngineering #DevOpsEngineer #Containers
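As a companion to the fix checklist, here is a hedged manifest fragment showing three of those fixes in one place: a pinned image tag, env sourced from a ConfigMap whose keys match what the app reads, and a liveness probe aimed at the port the app actually listens on. All names, keys, and values are assumptions.

```yaml
# Hypothetical fragment; names, keys, ports, and tags are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-config
data:
  DB_HOST: postgres          # keys must match what the app reads at startup
  DB_PORT: "5432"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: myrepo/api:1.4.2     # pinned tag, never :latest in production
          envFrom:
            - configMapRef:
                name: api-config
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /healthz          # assumed health endpoint
              port: 8080              # must match the port the app actually listens on
```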
DevOps Journey Started.
Step 1: Containerization with Docker

Links:
• GitHub: https://lnkd.in/gwfXsp_3
• App: https://lnkd.in/d5JnhrGz

• Containerized frontend & backend using Dockerfiles
• Optimized images using .dockerignore, multi-stage builds, and layer caching
• Structured the project with a clean multi-service setup via docker-compose
• Enabled service-to-service communication using Docker networking
• Used volumes for persistence and an improved development workflow
• Managed environment variables using .env (development & production)

Next step: CI/CD with GitHub Actions.

#docker #development #fullstack #devops
Stop saying "I know Docker" if you don't understand the difference between a Dockerfile and Docker Compose.

Here's the reality 👇

Most developers learn Docker by building a single container… But real-world applications are NEVER just one container. You don't deploy an app alone. You deploy a system.
🔹 A backend
🔹 A database
🔹 Maybe a frontend
🔹 Sometimes a cache (Redis)
🔹 And networking between all of them

This is where people get it wrong:

➡️ Dockerfile = how you BUILD your app
It's the blueprint. The recipe. The image.

➡️ Docker Compose = how you RUN your system
It orchestrates multiple containers, connects them, and makes them work together.

Think of it like this:
Dockerfile = cooking a dish 🍳
Docker Compose = organizing the whole restaurant 🍽️

If you only know the Dockerfile, you're still thinking small. If you master Docker Compose, you start thinking in systems.

And that's the difference between a developer… and a DevOps mindset.

💬 So tell me: Are you still building containers… or orchestrating systems?

#Docker #DevOps #CloudComputing #Microservices #Kubernetes #SoftwareEngineering
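A minimal sketch of how the two fit together: the Dockerfile builds your app's image (build: .), while Compose wires that image up with the database, the cache, and the networking between them. The service names and the Postgres/Redis choices below are illustrative assumptions.

```yaml
# Hypothetical docker-compose.yml showing the Dockerfile / Compose split.
services:
  backend:
    build: .                 # "how you BUILD": uses the Dockerfile in this directory
    ports:
      - "8080:8080"
    depends_on:
      - db
      - cache

  db:
    image: postgres:16       # off-the-shelf images need no Dockerfile at all
    environment:
      POSTGRES_PASSWORD: example

  cache:
    image: redis:7

# "how you RUN": one command (docker compose up) starts and connects the whole system
```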
🚨 𝐌𝐲 𝐊𝐮𝐛𝐞𝐫𝐧𝐞𝐭𝐞𝐬 𝐩𝐨𝐝 𝐤𝐞𝐩𝐭 𝐫𝐞𝐬𝐭𝐚𝐫𝐭𝐢𝐧𝐠 𝐞𝐯𝐞𝐫𝐲 𝐟𝐞𝐰 𝐡𝐨𝐮𝐫𝐬… 𝐚𝐧𝐝 𝐈 𝐡𝐚𝐝 𝐧𝐨 𝐜𝐥𝐮𝐞 𝐰𝐡𝐲.

No errors in the logs. No crash messages. Everything looked normal. Still… the pod kept disappearing.

𝐎𝐮𝐭 𝐨𝐟 𝐜𝐮𝐫𝐢𝐨𝐬𝐢𝐭𝐲, 𝐈 𝐫𝐚𝐧:
kubectl describe pod <pod-name>

And found the real reason:
💥 𝐎𝐎𝐌𝐊𝐢𝐥𝐥𝐞𝐝 (𝐄𝐱𝐢𝐭 𝐂𝐨𝐝𝐞 137)

That's when it hit me: the application wasn't crashing… Kubernetes was killing it due to memory exhaustion.

𝐇𝐞𝐫𝐞’𝐬 𝐰𝐡𝐚𝐭 𝐈 𝐢𝐝𝐞𝐧𝐭𝐢𝐟𝐢𝐞𝐝 👇

1️⃣ 𝐍𝐨 𝐦𝐞𝐦𝐨𝐫𝐲 𝐥𝐢𝐦𝐢𝐭𝐬 𝐝𝐞𝐟𝐢𝐧𝐞𝐝
The pod was allowed to consume unlimited memory. Eventually, it exhausted the node's memory and got terminated.
👉 𝐅𝐢𝐱: 𝐀𝐥𝐰𝐚𝐲𝐬 𝐝𝐞𝐟𝐢𝐧𝐞 𝐫𝐞𝐬𝐨𝐮𝐫𝐜𝐞 𝐫𝐞𝐪𝐮𝐞𝐬𝐭𝐬 𝐚𝐧𝐝 𝐥𝐢𝐦𝐢𝐭𝐬
resources:
  requests:
    memory: "256Mi"
  limits:
    memory: "512Mi"

2️⃣ 𝐉𝐕𝐌 𝐰𝐚𝐬 𝐧𝐨𝐭 𝐜𝐨𝐧𝐭𝐚𝐢𝐧𝐞𝐫-𝐚𝐰𝐚𝐫𝐞
The Java application calculated heap size based on the node's total memory, not the container limit.
👉 𝐅𝐢𝐱: 𝐓𝐮𝐧𝐞 𝐉𝐕𝐌 𝐟𝐨𝐫 𝐜𝐨𝐧𝐭𝐚𝐢𝐧𝐞𝐫 𝐞𝐧𝐯𝐢𝐫𝐨𝐧𝐦𝐞𝐧𝐭𝐬
-XX:+UseContainerSupport
-XX:MaxRAMPercentage=75.0

3️⃣ 𝐌𝐞𝐦𝐨𝐫𝐲 𝐥𝐞𝐚𝐤 𝐢𝐧 𝐭𝐡𝐞 𝐚𝐩𝐩𝐥𝐢𝐜𝐚𝐭𝐢𝐨𝐧
Even after setting limits, memory usage kept increasing over time.
Root cause: A background process was holding objects and not releasing them.
👉 Fix: Monitor memory trends using Prometheus and Grafana. If memory steadily increases and doesn't drop, it's likely a memory leak.

💡 𝑲𝒆𝒚 𝒕𝒂𝒌𝒆𝒂𝒘𝒂𝒚𝒔:
• Always define memory requests and limits
• Make your application container-aware
• Monitor trends, not just logs
• OOMKilled = container terminated by the system, not an app crash

This is one of the most common (and confusing) issues in Kubernetes. Have you faced something similar?

𝑾𝒐𝒖𝒍𝒅 𝒍𝒐𝒗𝒆 𝒕𝒐 𝒉𝒆𝒂𝒓 𝒉𝒐𝒘 𝒚𝒐𝒖 𝒅𝒆𝒃𝒖𝒈𝒈𝒆𝒅 𝒊𝒕 👇

#Kubernetes #DevOps #K8s #CloudNative #SRE #PlatformEngineering
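Pulling fixes 1️⃣ and 2️⃣ together, here is a hedged Deployment fragment showing the memory limits and the container-aware JVM flags applied in one place; the image name, label names, and sizes are assumptions.

```yaml
# Hypothetical fragment combining fixes 1 and 2; names and sizes are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: java-api
  template:
    metadata:
      labels:
        app: java-api
    spec:
      containers:
        - name: app
          image: myrepo/java-api:1.0        # assumed image
          env:
            - name: JAVA_TOOL_OPTIONS       # picked up automatically by the JVM at startup
              value: "-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0"
          resources:
            requests:
              memory: "256Mi"
            limits:
              memory: "512Mi"
```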
All your services are on the same Docker network, so the frontend can call the backend directly. Is there a specific reason you exposed the backend port to the host, or was it just for testing?