Docker Best Practices to Avoid Common Mistakes

🐳 Most Docker mistakes are invisible - until production breaks. Here are the Docker best practices I wish someone had told me earlier. Save this before your next deployment. 👇

━━━━━━━━━━━━━━━━━━━━
📦 1. STOP pulling bloated base images

The #1 mistake: pulling a full OS image just to run a script.

❌ Bad - 1.2 GB image
FROM python:3.11        # Full Debian OS + Python

✅ Good - under 60 MB
FROM python:3.11-slim   # Minimal Debian, no extras
FROM python:3.11-alpine # Musl-based, ~22 MB

Smaller image = faster pulls, faster CI, smaller attack surface. Always start slim. 🏎️

━━━━━━━━━━━━━━━━━━━━
🔐 2. NEVER bake secrets into your image

❌ Dangerous
ENV DB_PASSWORD=supersecret123
COPY .env /app/.env

✅ Safe - inject at runtime
services:
  app:
    env_file: .env
    environment:
      - DB_PASSWORD=${DB_PASSWORD}

Your .env should always be in .dockerignore AND .gitignore. 🔒

━━━━━━━━━━━━━━━━━━━━
🏗️ 3. Multi-stage builds - ship only what you need

FROM python:3.11-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

FROM python:3.11-slim
COPY --from=builder /install /usr/local
COPY . .
CMD ["python", "main.py"]

Build tools never reach production. 🎯

━━━━━━━━━━━━━━━━━━━━
⚡ 4. Layer ordering - cache is your best friend

❌ Cache-busting every build
COPY . .
RUN pip install -r requirements.txt

✅ Dependencies cached separately
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

━━━━━━━━━━━━━━━━━━━━
🛡️ 5. NEVER run as root in production

RUN adduser --disabled-password --gecos "" appuser
USER appuser
CMD ["python", "main.py"]

━━━━━━━━━━━━━━━━━━━━
🧹 6. USE .dockerignore - always

.git
.env
__pycache__
*.pyc
node_modules

━━━━━━━━━━━━━━━━━━━━
🎯 Quick wins checklist:
✅ Use slim or alpine base images
✅ Inject secrets at runtime, never bake in
✅ Multi-stage builds for heavy apps
✅ Layer ordering = cache hits = fast CI
✅ Non-root user in production
✅ Always maintain .dockerignore

💡 One rule: if you wouldn't commit it to GitHub, don't let it touch your Docker image.

What's the Docker mistake you see most often? Drop it below 👇

#Docker #DevOps #Backend #Python #SoftwareEngineering #Containers #CloudNative
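Putting the six tips together, here is a minimal sketch of a single Dockerfile that applies all of them at once. The filename `main.py` and user name `appuser` come from the post's own snippets; everything else about the layout is an illustrative assumption, not a definitive template.

```dockerfile
# Sketch: all six practices combined (layout is illustrative).

# --- Stage 1: build dependencies in a throwaway layer ---
FROM python:3.11-slim AS builder          # 1. slim base image
WORKDIR /app
COPY requirements.txt .                   # 4. copy deps first for cache hits
RUN pip install --prefix=/install -r requirements.txt

# --- Stage 2: lean production image ---
FROM python:3.11-slim                     # 3. multi-stage: no build tools shipped
COPY --from=builder /install /usr/local
RUN adduser --disabled-password --gecos "" appuser
WORKDIR /app
COPY . .                                  # 6. .dockerignore keeps .git/.env out
USER appuser                              # 5. never run as root
# 2. no secrets baked in - inject DB_PASSWORD etc. at runtime
CMD ["python", "main.py"]
```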
Docker isn't just for DevOps and Platform Engineers. Every Python developer should know how to properly containerize their own code. 🐳

I've noticed that while many jump straight into Kubernetes or complex CI/CD pipelines, the everyday fundamentals of Docker are often misunderstood. What exactly is the difference between an Image and a Container? How does port mapping work? Why did the container exit immediately?

I've put together a 1-page "Docker Developer Essentials" cheat sheet. It cuts out the noise and focuses purely on what a Software Engineer needs to know on a daily basis. 👇

Here's a quick look at what's covered:
✅ The 4 Primitives: The breakdown between Dockerfile, Image, Container, and Registry.
📂 Anatomy of a Dockerfile: We break down a perfect Python Dockerfile line-by-line, explaining why we copy `requirements.txt` before `COPY . .` (hint: caching!).
⚡ Essential CLI: The 6 commands you actually need (`build`, `run`, `ps`, `stop`, `logs`, `exec`).
💾 Data Persistence: The core difference between Named Volumes (for your database) vs Bind Mounts (for hot-reloading your code).
🚢 Docker Compose: A practical multi-container `docker-compose.yml` snippet combining an API and a Postgres DB (a sketch of what that can look like is below).
🛑 Common Pitfalls & Q&A: Quick fixes for daemon connection issues, port allocations, and whether you really need to EXPOSE ports or use `.dockerignore`.

Containers are meant to be ephemeral (disposable). If you are SSHing into your container to install updates, you need this cheat sheet! 🚀

#Docker #Python #SoftwareEngineering #BackendDev #Programming #DevOps #Containers #Coding
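The cheat sheet itself isn't quoted in the post, but a minimal API + Postgres `docker-compose.yml` along the lines it describes might look like this. All service names, ports, and credentials are illustrative assumptions:

```yaml
# Sketch of a two-container setup: an API plus a Postgres DB.
# Names, ports, and credentials are illustrative.
services:
  api:
    build: .                      # built from the app's own Dockerfile
    ports:
      - "8000:8000"               # host:container port mapping
    environment:
      - DATABASE_URL=postgresql://app:app@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      - POSTGRES_USER=app
      - POSTGRES_PASSWORD=app
      - POSTGRES_DB=app
    volumes:
      - pgdata:/var/lib/postgresql/data   # named volume: data survives restarts

volumes:
  pgdata:
```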
🚀 Implemented a basic CI pipeline using GitHub Actions.

On every push, the workflow:
- Triggers via event-based execution
- Runs on an ubuntu-latest runner
- Uses a matrix strategy to test across Python 3.8 & 3.9
- Checks out the repo using actions/checkout@v3
- Sets up Python via actions/setup-python@v2
- Installs dependencies (pip, pytest)
- Executes tests using python -m pytest

📁 .github/workflows/first-actions.yaml

🎯 Ensures consistent builds, multi-version compatibility, and automated test validation on every commit (CI).

🔗 Repo: https://lnkd.in/gDpgTwG2
📝 Article: https://lnkd.in/gZwinuZ4

#GitHubActions #CI #DevOps #Python #Automation
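Reconstructing from the steps listed above, the workflow file probably looks roughly like this. The job name and install details are assumptions; the runner, matrix, and action versions are the ones the post names:

```yaml
# .github/workflows/first-actions.yaml - sketch based on the steps above
name: first-actions
on: [push]                               # event-based trigger

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.8", "3.9"]   # multi-version compatibility
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pytest
      - name: Run tests
        run: python -m pytest
```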
Nothing teaches you OpenShift better than a broken tutorial...

I was recently going through the "Foundations of OpenShift" tutorial by Red Hat, expecting a smooth "Import from Git" experience. Instead, I got a CrashLoopBackOff and a bunch of cryptic logs.

It turns out the sample code was quite outdated. It relied on a deprecated tool called powershift-cli that just doesn't work with modern Python S2I images anymore.

What I did to fix it:
- Dug into the logs: Found that the container was trying to run a command that no longer exists (powershift image).
- Forked the repo: Rewrote the run script to use a standard Django startup (a sketch of that is below).
- Fixed the "hidden" bugs: Found some syntax errors in the background scheduler and added automatic migrations (no more 500 errors!).
- Handled the DB: Realized why the blog was empty (ephemeral SQLite storage) and documented it for anyone else trying this tutorial.

It took a bit of debugging, but I honestly learned more about OpenShift's build process and S2I than I would have if the "Start" button had just worked.

If you're stuck on Tutorial 4, feel free to use my updated repo: https://lnkd.in/dNfjEnGS

#OpenShift #DevOps #Django #LearningByDoing #Kubernetes
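The post doesn't show the rewritten run script, but a "standard Django startup" with automatic migrations for a Python S2I image typically looks something like this. The project name, script shape, and port are all assumptions, not the author's actual code:

```bash
#!/bin/bash
# Sketch of an S2I run script doing a standard Django startup.
# Project name and port are illustrative assumptions.
set -e

# Run migrations automatically so the schema exists before traffic arrives
python manage.py migrate --noinput

# Serve the app with gunicorn on the port the route points at
exec gunicorn myproject.wsgi:application --bind 0.0.0.0:8080
```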
I recently built a Python-based tool that scans public GitHub repositories to analyze Dockerfile sources and extract base images used across projects.

🔍 What it does:
• Parses multiple repositories from a given input source
• Locates all Dockerfiles within each repo
• Extracts image names from FROM statements
• Aggregates everything into a structured JSON output

💡 Why I built this: I wanted to explore how container security and compliance can be improved by tracking trusted base images. This project helped me dive deeper into real-world challenges around scalability, fault tolerance, and clean code design.

⚙️ Tech highlights:
• Python for core logic
• GitHub repo parsing & file traversal
• JSON data structuring
• Focus on production-grade practices (error handling, extensibility, maintainability)

This was a great hands-on way to strengthen my understanding of containers, automation, and backend design.

🔗 Check it out here: https://lnkd.in/dE5kVceV

Would love to hear your thoughts or feedback!

#Python #Docker #DevOps #BackendDevelopment #OpenSource #LearningByBuilding
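The repo itself isn't quoted in the post, so here is an independent minimal sketch of the core idea: walk a checked-out repo, find Dockerfiles, and collect the FROM images into JSON. The function names and the multi-stage handling are my assumptions, not the author's code:

```python
import json
import re
from pathlib import Path

# Matches "FROM [--platform=...] <image> [AS <stage>]", case-insensitively.
FROM_RE = re.compile(
    r"^\s*FROM\s+(?:--platform=\S+\s+)?(\S+)(?:\s+AS\s+(\S+))?", re.IGNORECASE
)

def base_images(repo_dir: str) -> list[str]:
    """Collect images referenced by FROM across all Dockerfiles in a repo,
    skipping references to earlier build stages (multi-stage builds)."""
    images = []
    for dockerfile in Path(repo_dir).rglob("Dockerfile*"):
        stages = set()
        for line in dockerfile.read_text(errors="ignore").splitlines():
            match = FROM_RE.match(line)
            if not match:
                continue
            image, stage = match.group(1), match.group(2)
            if image.lower() not in stages:   # "FROM builder" refers to a stage
                images.append(image)
            if stage:
                stages.add(stage.lower())
    return images

if __name__ == "__main__":
    report = {"repo": "example-repo", "base_images": base_images(".")}
    print(json.dumps(report, indent=2))
```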
Docker Basic Info: Part 1

🚀 Understanding Docker: Container vs Image (Simple but Powerful Concept)

When I first started learning Docker, one idea made everything much clearer: 👉 the difference between an Image and a Container. Let me break it down in a clean and practical way 👇

🔹 Container vs Image
Image → A blueprint (read-only template)
Container → A running instance (actual execution environment)

💡 Think in programming terms:
Image = Class
Container = Object (instance of that class)

One image can create multiple containers. For example, a single Ubuntu image can run 10 containers at the same time — all isolated from each other.

🔹 Why is it called an "Image"?
The term comes from system snapshots (disk images). An image contains everything needed to run an application:
- OS layer (minimal)
- Application code
- Dependencies & libraries
- Runtime & configurations
📸 It's like a frozen snapshot of a complete system.

🔹 What happens when you run a container?
When you execute:
docker run ubuntu:20.04
Internally, Docker:
1. Reads the image layers
2. Adds a writable layer on top
3. Creates an isolated environment (namespaces)
4. Applies resource limits (cgroups)
5. Starts the main process

🔹 Copy-on-Write (Why Docker is Efficient)
Images are read-only layers; containers add a thin writable layer on top.
✔ Multiple containers share the same base image
✔ Saves disk space
✔ Each container remains independent
Example: If Container A modifies a file → Container B will NOT see that change (a quick demo is below).

🔹 How Isolation Works
Docker uses Linux kernel features:
Namespaces (Isolation): separate processes, network, filesystem, hostname
Cgroups (Control): limit memory (e.g., 512MB), limit CPU usage, control disk I/O

🔹 Key Insight
❗ A container is NOT a virtual machine. It's just a process running on the host OS — but with restricted visibility and resources. That's why containers are: Lightweight ⚡ Fast 🚀 Scalable 📈

🔹 Simple Example
docker run -d --name web1 nginx
docker run -d --name web2 nginx
What happens: one image (nginx), two containers, two isolated processes, separate storage & network.

🔹 Docker Ecosystem (Big Picture)
Docker is not just one tool — it's a platform:
- Docker CLI → user interaction
- Docker Engine → core system
- Docker Daemon → manages everything
- containerd → manages container lifecycle
- runc → low-level runtime (talks to kernel)
- Docker Hub → image repository
- Docker Compose → multi-container orchestration
🧠 Think of it as a chain where each component has a role.

🔹 Final Thought
Understanding this one concept:
👉 Image = Blueprint
👉 Container = Running Instance
…makes the entire Docker ecosystem much easier to grasp.
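You can see the copy-on-write behaviour directly with the post's own web1/web2 example plus two extra commands. A minimal sketch; the file modified is just nginx's default index page:

```bash
# Two containers from the same image
docker run -d --name web1 nginx
docker run -d --name web2 nginx

# Modify a file in web1's thin writable layer
docker exec web1 sh -c 'echo "changed in web1" > /usr/share/nginx/html/index.html'

# web2 still has the original file - its layer is untouched
docker exec web2 cat /usr/share/nginx/html/index.html
```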
Hi Everyone,

3+ years in and I'm still finding gaps in my Docker fundamentals. Not because I didn't use Docker — I use it every day. But there's a difference between using a tool and actually understanding it. Preparing for my CKA pushed me to go back to basics and fill those gaps. Here's what's worth revisiting even if you've been doing this for a while 👇

🐳 Dockerfile instructions you think you know

Most engineers use these daily but never think twice about the implications:

```dockerfile
FROM python:3.9   # ← every MB here ends up in prod. use slim.
WORKDIR /app
ADD . .           # ← ADD also auto-extracts tars and fetches URLs; prefer COPY, and use .dockerignore to keep .git, caches, and secrets out
RUN pip install flask
EXPOSE 5000
CMD ["python", "app.py"]
```

Small choices at build time = big consequences at scale.

⚙️ CMD vs ENTRYPOINT — deceptively simple, easy to get wrong

Everyone knows both "run something at container start." But the real distinction:

```dockerfile
ENTRYPOINT ["python"]  # defines what the container IS
CMD ["app.py"]         # defines the default behaviour — overridable
```

Where this bites you in production: Kubernetes maps `command:` to ENTRYPOINT and `args:` to CMD. Misconfigure this in a pod spec and you're in for a fun debugging session at 2am 🙃 (a pod spec sketch showing the mapping follows this post).

📦 Multi-stage builds — still underused in most teams I've seen

The concept is simple. The execution is cleaner than most people realise (note: the dependencies have to be copied into the final stage too, not just the app code, or the imports fail at runtime):

```dockerfile
# Stage 1 — build environment
FROM python:3.9 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt
COPY . .

# Stage 2 — production image
FROM python:3.9-slim
COPY --from=builder /install /usr/local
COPY --from=builder /app /app
WORKDIR /app
CMD ["python", "app.py"]
```

Your build tools, test dependencies, and intermediate files never make it to the final image. 900MB → 120MB is not unusual. At scale, that's real money, real security improvement, and real deployment speed.

Going through Varun Joshi's CKA Certification Course 2025 right now and documenting what I learn. Even as someone who's worked with containers for years, structured learning always surfaces things you glossed over.

If you're in a similar spot, experienced but preparing for CKA, what's the area you've found most worth revisiting? 👇

🔗 Course: Cloud With VarJosh – CKA Certification Course 2025
Youtube: https://lnkd.in/esk3khMB
Github: https://lnkd.in/e8wQ7Fk9

#Kubernetes #Docker #CKA #DevOps #CloudNative #LearningInPublic #Containers
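To make the Kubernetes mapping above concrete, here is a minimal pod spec sketch (the image and names are illustrative assumptions): `command:` replaces the image's ENTRYPOINT, `args:` replaces its CMD.

```yaml
# Sketch: overriding ENTRYPOINT/CMD from a pod spec.
# Image and names are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: my-python-app:latest
      command: ["python"]   # overrides the image's ENTRYPOINT
      args: ["worker.py"]   # overrides the image's CMD
```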
🎉 I Just Built & Ran My First Docker Image – Here's What I Learned 🐳

Hey everyone,

After learning the basics of Docker containers in my previous posts, today I took the next big step. I moved from just using other people's containers to building and running my own — and it feels amazing! As a Full Stack Developer learning DevOps, this was a real milestone for me.

What I Built
I created a simple Python Flask web application and packaged it into my very first custom Docker image. Here's the flow I followed:
1. Created a small Flask app (app.py) that shows a welcome message.
2. Added a requirements.txt file.
3. Wrote my first Dockerfile (using the 80/20 rule – only the important commands; a sketch of what that can look like is below).
4. Built the image with: docker build -t python-app-img .
5. Ran the container with: docker run -d -p 5000:5000 python-app-img
6. Opened http://localhost:5000 in my browser — and it worked! ✅

Real-World Value (Why This Matters)
In real companies, you can't keep installing dependencies and configuring servers manually on every machine. With one well-written Dockerfile:
- Every developer gets the exact same environment
- No more "It works on my machine" problems
- Faster onboarding for new team members
- Consistent and reliable deployments

This small Python app I built today is exactly the kind of practical exercise that helps you understand how production applications are containerized.

My Key Takeaway
Building your first Docker image is the moment you stop being just a user of technology and start becoming a creator of reliable systems. It's not complicated once you do it step by step.

If you're also learning Docker or DevOps, tell me — what was your first Docker project? Or what's the biggest challenge you're facing right now? I read and reply to every comment. Let's grow together! 👇

#Docker #Dockerfile #FirstDockerImage #DevOps #LearningInPublic #DockerBeginner #FullStackDeveloper #TechJourney #SystemEngineering #CloudComputing #80_20Rule
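The post doesn't include the Dockerfile itself. A minimal version matching the described flow (Flask app in app.py, port 5000) might look like this; the filenames are the ones the post mentions, everything else is an assumption:

```dockerfile
# Sketch of a minimal Flask Dockerfile for the flow described above.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY app.py .

EXPOSE 5000

# Runs app.py, which should call app.run(host="0.0.0.0", port=5000)
# so that -p 5000:5000 reaches it from the host
CMD ["python", "app.py"]
```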
2 days of debugging. Fixed by adding 1 line. 😅

If you use FastAPI + GitHub Actions, read this.

I was building a Tier 2 Event Pipeline with FastAPI microservices. Everything worked perfectly on my machine — 8/8 tests passing, 96% coverage, Docker builds clean.

Pushed to GitHub Actions. Two jobs failed instantly.
❌ Python Checks (ingestion-service) — FAILED
❌ Python Checks (fusion-engine-service) — FAILED

The error:
RuntimeError: The starlette.testclient module requires the httpx package.

Both services used TestClient from FastAPI in their tests. TestClient has a hard runtime dependency on httpx. But httpx was nowhere in my pyproject.toml.

So why did it work locally? My Windows machine already had httpx installed — silently pulled in as a transitive dependency from earlier work. I never noticed because it was never declared. GitHub Actions runners are clean Ubuntu containers. They install only what you declare. Nothing more. Classic "works on my machine" — and I had no idea.

─────────────────────
The fix? One line in each pyproject.toml:

[project.optional-dependencies]
dev = [
    "pytest>=8.3,<10.0",
    "pytest-cov>=5.0,<6.0",
+   "httpx>=0.27,<1.0",   ← this
]
─────────────────────

All jobs green. ✅

The real lessons I'm keeping with me:
① If your test imports it → it's a dev dependency. No exceptions.
② Coverage at 2% and 39% wasn't the bug. It was a symptom. The tests couldn't even be collected. Always read the FIRST error, not the summary.
③ Before every push, test in a clean venv:

python -m venv .venv-clean
source .venv-clean/bin/activate
pip install -e ".[dev]"
pytest tests

This single habit would have saved me 2 days.

If you're using FastAPI + TestClient + GitHub Actions — double-check your pyproject.toml right now. You might have the same silent bomb waiting.

Ever lost hours to a "works on my machine" bug? Drop it in the comments 👇

#Python #FastAPI #GitHubActions #CI #DevOps #SoftwareEngineering #LessonsLearned #OpenSource #Testing
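For context, this is the kind of test that triggers the dependency: a minimal sketch, assuming a FastAPI app exposed in `app.main` (module and route names are illustrative). Importing `TestClient` is what pulls in httpx at runtime.

```python
# Minimal sketch: this import chain is what requires httpx at test time.
# Module and route names are illustrative assumptions.
from fastapi.testclient import TestClient  # re-exports starlette.testclient -> needs httpx

from app.main import app

client = TestClient(app)

def test_health():
    response = client.get("/health")
    assert response.status_code == 200
```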
This is my second hands-on project. It demonstrates a complete Continuous Integration (CI) pipeline using GitHub Actions for a Python Flask application. The pipeline automates every step from code push to Docker image delivery, ensuring fast, reliable, and consistent builds.

Key Highlights:
Trigger: On every push to the main branch, GitHub Actions automatically starts the workflow.

Steps:
1. Checkout Code – Clones the repository.
2. Setup Python 3.9 – Configures the runtime environment.
3. Install Dependencies – Installs Flask and pytest.
4. Run Tests – Executes unit tests to validate the app.
5. Build Docker Image – Packages the app using Docker.
6. Push to Docker Hub – Publishes versioned images tagged with the commit SHA.

(A workflow sketch matching these steps is below.)

Outcome: Every commit produces a tested, containerized, and registry-ready image.

Optional Deployment: The Docker image can be deployed to Kubernetes, scaling to multiple replicas for high availability.

Tools Used: GitHub Actions • Python 3.9 • Flask • pytest • Docker • Docker Hub • Kubernetes

The diagram visually shows the CI/CD flow:
- Developer pushes code → GitHub detects change → Actions workflow triggers.
- Sequential steps: Checkout → Setup Python → Install → Test → Build → Push.
- Final output: Docker image pushed to Docker Hub, ready for Kubernetes deployment.

#LinkedIn YouTube LearningMate Amazon Web Services (AWS) #Project #handson
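A workflow matching these six steps might look roughly like this. The image name and secret names are assumptions; the post doesn't include the actual YAML:

```yaml
# Sketch of the six-step pipeline described above.
# Image name and secret names are illustrative assumptions.
name: ci
on:
  push:
    branches: [main]

jobs:
  build-test-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4                 # 1. Checkout code
      - uses: actions/setup-python@v5             # 2. Setup Python 3.9
        with:
          python-version: "3.9"
      - name: Install dependencies                # 3. Flask + pytest
        run: pip install flask pytest
      - name: Run tests                           # 4. Validate the app
        run: pytest
      - name: Build Docker image                  # 5. Package the app
        run: docker build -t myuser/flask-app:${{ github.sha }} .
      - name: Push to Docker Hub                  # 6. Versioned by commit SHA
        run: |
          echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin
          docker push myuser/flask-app:${{ github.sha }}
```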
What does a multi-agent comic builder look like when deployed across clouds? This dev uses Google's ADK and Gemini LLM to build a low-code Python app, deploy it to AWS Lambda, and run a full agent pipeline that outputs comic book HTML. { author: William McLean + Google Developer Experts } https://lnkd.in/eKaD2nt6