Day 9 - "It works on my machine." Docker fixes that sentence permanently. Here's how — with a real 3-service app, not a toy tutorial.

🚀 TechFromZero Series - DockerFromZero

This isn't a Hello World. It's a real multi-container application:

📐 Client → FastAPI (Python) → MongoDB (Database) → Redis (Cache) — all orchestrated with Docker Compose

🔗 The full code (with step-by-step commits you can follow): https://lnkd.in/dtnykq35

🧱 What I built (step by step):
1️⃣ Project scaffold — FastAPI app with async endpoints and health check
2️⃣ Dockerfile — FROM, COPY, RUN, EXPOSE, CMD with layer caching explained
3️⃣ .dockerignore — keep secrets and junk out of images
4️⃣ MongoDB connection — async Motor driver, Docker DNS (service names, not localhost)
5️⃣ Weather API client — httpx async calls to OpenWeatherMap from inside a container
6️⃣ Full CRUD endpoints — log, list, stats, filter, delete weather data
7️⃣ Docker Compose — 3 services, health checks, depends_on, named volumes, custom network
8️⃣ Redis caching — 5-minute TTL, sub-millisecond cache hits vs 300ms API calls
9️⃣ README — architecture diagram, Docker cheat sheet, step-by-step guide

💡 Every file has detailed comments explaining WHY, not just what. Written for any beginner who wants to learn Docker by reading real code — with full clarity on each step.

👉 If you're a beginner learning Docker, clone it and read the commits one by one. Each commit = one concept. Each file = one lesson. Built from scratch, so nothing is hidden.

🔥 This is Day 9 of a 50-day series. A new technology every day. Follow along!
🌐 See all days: https://lnkd.in/dhDN6Z3F

#TechFromZero #Day9 #Docker #DockerCompose #FastAPI #MongoDB #Redis #Python #Containers #DevOps #LearnByDoing #OpenSource #BeginnerGuide #100DaysOfCode #CodingFromScratch
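The Compose layout described in step 7 can be sketched roughly like this. Service names, image tags, and health-check commands are illustrative, not copied from the repo:

```yaml
services:
  api:
    build: .
    ports:
      - "8000:8000"
    environment:
      MONGO_URL: mongodb://mongo:27017   # Docker DNS: service name, not localhost
      REDIS_URL: redis://redis:6379
    depends_on:                          # don't start until dependencies are healthy
      mongo:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks: [appnet]

  mongo:
    image: mongo:7
    volumes:
      - mongo-data:/data/db              # named volume: data survives container removal
    healthcheck:
      test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
      interval: 5s
      retries: 5
    networks: [appnet]

  redis:
    image: redis:7
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      retries: 5
    networks: [appnet]

volumes:
  mongo-data:

networks:
  appnet:
```

The key detail is `depends_on` with `condition: service_healthy`: plain `depends_on` only orders container *startup*, not readiness.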
I used to just "make things work"… until I didn't.

Early in my journey, my goal was simple: build APIs. Make them run. Done.

But then I started asking different questions:
-> What happens when 1,000 users hit this API at once?
-> Why does the system slow down?
-> How do real-world systems handle scale?

That's when things changed. I moved from just writing code to actually designing systems.

I started working on:
- Building REST APIs with proper routing, validation, and async handling using FastAPI, Flask, and Django
- Implementing JWT-based authentication (access/refresh tokens, middleware, protected routes)
- Containerizing services with Docker and managing PostgreSQL with optimized queries, indexing, and connection pooling
- Handling real-world issues like API latency, database bottlenecks, and service reliability

And more importantly…
⚡ I began understanding performance, scalability, and real-world challenges.

Now, I don't just build APIs. I think about how they behave under pressure.

Still learning. Still improving. But the mindset shift made all the difference.

#BackendDevelopment #SystemDesign #FastAPI #Docker #LearningJourney #python #django #redis #postgresql #backend #developer
I reduced my API response time from 2.3s to 140ms. No Redis. No CDN. No caching layer. Just 4 changes to my Django REST Framework setup that most tutorials never mention.

N+1 queries everywhere. My serializer accessed post.author.name on every row. 100 posts = 101 database queries. One select_related('author') brought it down to 1. Response time: 2.3s to 800ms instantly.

Using ModelSerializer for read endpoints. ModelSerializer builds fields dynamically on every request. It's up to 377x slower than raw Python dicts. Switched read-only endpoints to serializers.Serializer with explicit fields. Another 40% gone.

No pagination on list endpoints. Returning the entire table. 10,000 rows. Every request. Added CursorPagination: constant-time queries regardless of dataset size. OFFSET-based pagination breaks at high page numbers. Cursor doesn't.

Fetching fields I never used. Serializer returned 15 fields. Frontend used 6. Added .only() and trimmed the serializer.

2.3s to 140ms. Same server. Same database. Same $12/month VPS. The bottleneck was never my infrastructure. It was my code.

Run queryset.explain(analyze=True) on your slowest endpoint. You'll probably find the same mistakes.

Which of these have you tried?

#Django #Python #API #WebPerformance #BuildInPublic
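The N+1 pattern is easy to reproduce outside Django. Here is a toy sketch in plain Python — a counter stands in for the database, and all names are illustrative — showing why a select_related-style join collapses 101 queries into 1:

```python
# Toy illustration of the N+1 query problem: a fake "database" that
# counts how many times it gets queried.

class FakeDB:
    def __init__(self):
        self.queries = 0
        self.authors = {i: f"author_{i}" for i in range(100)}
        self.posts = [{"id": i, "author_id": i} for i in range(100)]

    def fetch_posts(self):
        self.queries += 1                  # 1 query for the post list
        return list(self.posts)

    def fetch_author(self, author_id):
        self.queries += 1                  # 1 query per row: the N in N+1
        return self.authors[author_id]

    def fetch_posts_with_authors(self):
        self.queries += 1                  # one joined query, like select_related
        return [{**p, "author": self.authors[p["author_id"]]} for p in self.posts]

# Naive serializer: touches the "author" relation on every row.
db = FakeDB()
naive = [{"id": p["id"], "author": db.fetch_author(p["author_id"])}
         for p in db.fetch_posts()]
print(db.queries)   # 101 queries: 1 for posts + 100 for authors

# Joined version: same output, one round trip.
db = FakeDB()
joined = db.fetch_posts_with_authors()
print(db.queries)   # 1 query
```

The ORM hides the loop, which is why the fix is invisible until you count queries (or run `explain`).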
Been a bit offline, but for a good reason. I've been building something I'm really proud of: "Visit Where It Counts", a microservices-based Visitor Analytics System.

Real-time visitor tracking on the surface, but the real focus was under the hood:
- Scalable microservices architecture
- Redis with persistence
- Nginx load balancing
- Docker-based IaC

🏗 Highlights
🐍 Flask backend + Google Maps API
🔴 Persistent Redis (survives restarts)
⚖️ Nginx round-robin traffic
🛡️ Slim images for better security
⚡ Multi-stage builds for fast, lightweight deployments
⚙️ Config management with .env

🌐 Tech Stack
Python Flask • Redis • Nginx • Tailwind CSS • Docker • Google Maps API

Check out my code and more details of the features on my GitHub: https://lnkd.in/dRzVdZnz

CoderCo #DevOps #Docker #CloudInfrastructure #Python #Redis #Nginx #WebDevelopment #ITCareer
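For reference, nginx round-robin balancing of the kind described here needs only an `upstream` block. This is a generic sketch — the service names and ports are assumptions, not taken from the repo:

```nginx
# Two Flask replicas behind one upstream; round-robin is nginx's
# default balancing method, so no extra directive is needed.
upstream flask_app {
    server app1:5000;
    server app2:5000;
}

server {
    listen 80;

    location / {
        proxy_pass http://flask_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Inside Docker Compose, `app1` and `app2` resolve via the network's DNS, so the same config works unchanged as you scale.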
🔔 I built a production push notification system in Flask — and hit every possible wall doing it on Windows Server 2022. Here's what actually works.

Stack: Flask → Celery → Redis (Docker) → Firebase FCM

━━━━━━━━━━━━━━

The idea is simple: your API should never wait for a notification to send. Flask returns 200 immediately. The job goes to a background worker.

Flask → queues task in Redis → Celery worker → Firebase FCM 🔔

Each layer has one job. If FCM is slow or needs retries, your users never feel it.

━━━━━━━━━━━━━━

🚨 The 5 mistakes that cost me hours:

❌ broker="redis://redis:6379" — works inside Docker, fails from your machine. Use localhost.
❌ @shared_task — binds to a default Celery instance with no broker. Always use @celery.task.
❌ Firebase init only in Flask — Celery is a separate process. It won't see Firebase unless you init it in extensions.py too.
❌ Missing include=["routes.module"] in Celery config — worker starts with an empty task list and silently drops everything.
❌ .delay() inside an on_success callback — exceptions get swallowed. The task appears to queue but never does.

━━━━━━━━━━━━━━

⚙️ The fix that ties it all together:

One extensions.py as the single source of truth — Firebase init + Celery instance both live there. Import it everywhere, never re-create either.

▶️ Run 3 things simultaneously:
• python main.py
• celery -A extensions.celery worker --loglevel=info --pool=solo
• Redis Docker container

--pool=solo is required on Windows. The default prefork pool crashes silently.

━━━━━━━━━━━━━━

Full step-by-step guide (WSL2 setup, Docker config, task definition, route patterns) in the first comment 👇

#Python #Flask #Celery #Redis #Firebase #Docker #BackendDevelopment
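The core pattern — return immediately, hand the slow work to a background worker — can be sketched with nothing but the standard library. This is a toy stand-in for Flask + Celery + Redis, not the actual setup; every name here is illustrative:

```python
# Toy fire-and-forget queue: the "API" returns instantly while a
# background worker drains the queue (stand-in for Celery + Redis).
import queue
import threading

task_queue = queue.Queue()
sent = []   # record of "delivered" notifications, for the demo

def worker():
    while True:
        token = task_queue.get()
        if token is None:                     # sentinel: shut the worker down
            break
        sent.append(f"pushed to {token}")     # stand-in for the slow FCM call
        task_queue.task_done()

def api_send_notification(device_token):
    task_queue.put(device_token)   # the .delay() equivalent: enqueue and return
    return {"status": 200}         # the caller never waits on delivery

t = threading.Thread(target=worker, daemon=True)
t.start()

responses = [api_send_notification(f"device_{i}") for i in range(3)]
task_queue.join()                  # wait for the worker (only for the demo)
task_queue.put(None)
t.join()
```

Celery replaces the thread with a separate *process* — which is exactly why, as the post notes, anything initialized only in the Flask process (like the Firebase SDK) is invisible to the worker.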
Built a production-ready messaging system in Django designed to handle both private chats and group conversations with a clean, scalable architecture.

Features include:
• Direct messages + group chats
• Message reactions and read receipts
• File, image, and video attachments
• Admin/owner roles for moderation
• Edit/delete permissions
• Online presence + last-seen support
• Redis-ready for real-time messaging and WebSockets

What I enjoyed most was designing the data models to keep things flexible, fast, and maintainable as the product grows.

Building systems like this reminds me that great software starts with thoughtful architecture.

Always learning, always building.

#Django #Python #BackendDevelopment #SoftwareEngineering #WebDevelopment #Redis #PostgreSQL #Tech #founder #startup
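One way such a data model could be shaped — plain dataclasses standing in for Django models, with entirely hypothetical field names — is a conversation/membership/message split, where roles and read receipts live on the membership and reactions on the message:

```python
# Hypothetical chat data-model sketch (not the author's actual schema):
# conversations, memberships with roles, messages with reactions.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List, Optional

@dataclass
class Conversation:
    id: int
    is_group: bool                          # False = direct message, True = group chat
    name: Optional[str] = None              # groups have names; DMs usually don't

@dataclass
class Membership:
    user_id: int
    conversation_id: int
    role: str = "member"                    # "owner" / "admin" enable moderation
    last_read_at: Optional[datetime] = None  # drives read receipts

@dataclass
class Message:
    id: int
    conversation_id: int
    sender_id: int
    body: str
    attachment_url: Optional[str] = None    # file / image / video
    edited: bool = False
    deleted: bool = False                   # soft delete keeps history intact
    reactions: Dict[str, List[int]] = field(default_factory=dict)  # emoji -> user ids

def react(message: Message, user_id: int, emoji: str) -> None:
    """Add a reaction once per user (idempotent)."""
    message.reactions.setdefault(emoji, [])
    if user_id not in message.reactions[emoji]:
        message.reactions[emoji].append(user_id)

room = Conversation(id=1, is_group=True, name="launch-team")
msg = Message(id=10, conversation_id=room.id, sender_id=42, body="shipped!")
react(msg, user_id=7, emoji="🎉")
```

Putting `role` and `last_read_at` on the membership (rather than the user or conversation) is what keeps DMs and group chats on one code path.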
📦 Day 36 #90DaysOfDevOps

🚨 From "It works on my machine" → to a fully Dockerized 2-tier app

Today's learning hit different. I built and containerized a Flask + MySQL application, and what looked simple at first quickly turned into a deep dive into how things actually work behind the scenes.

💥 It started with a simple goal: "Run my Flask app inside Docker." But then…
❌ My app couldn't connect to MySQL → turns out, localhost inside a container ≠ my machine
❌ The build kept failing with pkg-config not found → learned that some Python packages (like mysqlclient) need system-level dependencies
❌ Even after fixing everything, the app still crashed → MySQL wasn't "ready" when Flask started

🔍 Here's what I implemented to fix it:
✅ Created a custom Docker network for container communication
✅ Replaced localhost with the service name (db)
✅ Installed required system packages (gcc, libmysqlclient-dev, pkg-config)
✅ Added health checks using mysqladmin ping
✅ Used depends_on with service_healthy to ensure proper startup order
✅ Secured the container by using a non-root user
✅ Managed configs using environment variables

⚙️ Final setup:
- Flask app running in one container
- MySQL running in another
- Both connected via a Docker network
- Fully reproducible with Docker

📦 Docker Hub (pull & run): https://lnkd.in/gt3749CC
📁 GitHub: https://lnkd.in/gZ5g623i

💡 Biggest takeaway: containerization is not just "docker build & run" — it's about understanding networking, dependencies, startup timing, and debugging real failures.

This project felt like a real DevOps scenario rather than just a tutorial. If you've faced similar issues while working with Docker, I'd love to hear your experience 👇

#Docker #DevOps #Flask #MySQL #LearningInPublic #BuildInPublic #OpenToWork #dockerproject #TrainWithShubham
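The "pkg-config not found" and non-root-user fixes typically land in the Dockerfile. A rough sketch (assuming a Debian-based Python image, where the MySQL client headers ship as `default-libmysqlclient-dev`; paths and names are illustrative):

```dockerfile
FROM python:3.12-slim

# mysqlclient compiles a C extension, so it needs system-level build
# deps — this is the "pkg-config not found" fix.
RUN apt-get update && apt-get install -y --no-install-recommends \
        gcc pkg-config default-libmysqlclient-dev \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Run as a non-root user instead of root.
RUN useradd --create-home appuser
USER appuser

CMD ["python", "app.py"]
```

Copying `requirements.txt` and installing before copying the rest of the source keeps the pip layer cached across code-only rebuilds.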
cache.get('user_123') never touches Redis directly. Something else runs first.

The assumption: Django's cache API is a thin wrapper. Call get, retrieve from storage, done.

The reality: every cache call passes through a pipeline before a single byte touches the backend.

Here's what happens:

1. Every cache backend in Django inherits from BaseCache. BaseCache owns the entire caching contract: get, set, delete, incr, get_or_set.
2. The concrete backend (RedisCache, MemcachedCache) implements only the storage-specific parts. The abstraction layer runs first. Always!
3. The first thing BaseCache does is transform the key.
4. Every key passes through make_key(), which prepends KEY_PREFIX and VERSION from settings. The key 'user_123' becomes ':1:user_123' in storage by default.

Cache miss on a key that exists → check the actual key in storage first!

Django's cache versioning lets the entire cache be invalidated by bumping VERSION in settings. Old keys still exist in storage until eviction clears them.

The cache API feels simple because BaseCache is doing the hard work invisibly.

Have you ever debugged a cache miss only to find the key was there, just under a different name?

#Python #Django #BackendDevelopment #SoftwareEngineering
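The key transformation in step 4 is tiny. Here is a re-implementation of what Django's default key function does (it lives in django.core.cache.backends.base), shown standalone so the ':1:user_123' result is visible:

```python
# Re-implementation of Django's default cache key function, to show
# why 'user_123' is stored as ':1:user_123'.
def default_key_func(key, key_prefix, version):
    # Django's default: "<KEY_PREFIX>:<VERSION>:<key>"
    return "%s:%s:%s" % (key_prefix, version, key)

# Django defaults: KEY_PREFIX = "" and VERSION = 1.
stored_key = default_key_func("user_123", "", 1)
print(stored_key)   # ':1:user_123'

# Bumping VERSION in settings makes every lookup target new keys;
# the old ':1:...' entries sit in storage until eviction clears them.
bumped = default_key_func("user_123", "", 2)
print(bumped)       # ':2:user_123'
```

So a "missing" key is often present in Redis — just under a prefixed/versioned name you never searched for.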
Recently, while setting up a Python-based auth service using FastAPI and PostgreSQL, I ran into an issue that many of us have probably faced but don't always talk about.

The application was failing with a database connection error, even though everything "looked" correct. The root cause turned out to be something simple but important: mixing Docker-based configuration with a local development setup. Using postgres as a hostname works perfectly inside Docker networks, but when running the app locally with uvicorn, the correct host is localhost. Small detail, but it completely breaks the connection if overlooked.

Another issue I encountered was with the SQLAlchemy setup. My models were importing Base, but it wasn't defined properly in the database module. This led to an import error during application startup. Fixing it required properly initializing declarative_base() and ensuring models were correctly registered.

A couple of key takeaways from this experience:
> Environment-specific configurations matter more than we think
> Avoid hardcoding values — always rely on environment variables
> Don't connect to the database during module import
> Ensure the ORM base and models are structured cleanly

What I appreciated most was how these small fixes significantly improved the overall architecture. Moving toward a cleaner separation of config, database, repositories, and services makes the system more scalable and production-ready.

These are the kinds of practical issues that don't always show up in tutorials but are very real in day-to-day development. If you're working with FastAPI, SQLAlchemy, or setting up microservices, I'd be curious to know what common pitfalls you've run into.

#Python #FastAPI #PostgreSQL #SQLAlchemy #BackendDevelopment #Microservices #SoftwareEngineering #Debugging #LearningJourney
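The environment-variable takeaway can be sketched in a few lines. Variable names and defaults here are illustrative: inside Docker you set DB_HOST to the service name ("postgres"), while a bare local run falls back to localhost — and nothing connects at import time:

```python
# Sketch of environment-driven DB config. In docker-compose you would
# set DB_HOST=postgres; running locally with uvicorn, the default
# "localhost" applies. All names/defaults are hypothetical.
import os

def database_url() -> str:
    host = os.getenv("DB_HOST", "localhost")   # "postgres" inside Docker
    port = os.getenv("DB_PORT", "5432")
    user = os.getenv("DB_USER", "app")
    password = os.getenv("DB_PASSWORD", "app")
    name = os.getenv("DB_NAME", "auth")
    return f"postgresql+asyncpg://{user}:{password}@{host}:{port}/{name}"

# Note: the engine/session should be created lazily (e.g. in a FastAPI
# lifespan handler), never as a side effect of importing this module.
if __name__ == "__main__":
    print(database_url())
```

Because the URL is computed from the environment rather than hardcoded, the same image runs unmodified in both setups.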
Day 13 - Your server costs $0 when nobody's using it. That's not a bug — that's serverless.

🚀 TechFromZero Series - LambdaFromZero

This isn't a Hello World. It's a real serverless image processing pipeline:

📐 Client → API Gateway → Lambda handler(event, context) → Pillow → base64 response

🔗 The full code (with step-by-step commits you can follow): https://lnkd.in/dTwhP4Ty

🧱 What I built (step by step):
1️⃣ Project scaffold — Python venv, Pillow, Flask, pytest
2️⃣ Sample image — generated programmatically with Pillow (no downloads)
3️⃣ First Lambda handler — the handler(event, context) contract
4️⃣ Image processor — resize with base64 encoding/decoding
5️⃣ More operations — thumbnail, grayscale, rotate, and blur
6️⃣ Wire handler to processor — event parsing, validation, error responses
7️⃣ Test events — JSON files that simulate real API Gateway requests
8️⃣ Local server — Flask app that does exactly what API Gateway does
9️⃣ S3 trigger handler — auto-process images on upload
🔟 Unit tests — 32 pytest tests for handler and processor
1️⃣1️⃣ SAM template — real AWS deployment config (no account needed to learn)
1️⃣2️⃣ Documentation — README with architecture, quick start, and step guide

💡 Every file has detailed comments explaining WHY, not just what. Written for any beginner who wants to learn AWS Lambda by reading real code — with full clarity on each step.

No AWS account needed. Everything runs locally. The Flask server simulates API Gateway, test events simulate S3 triggers, and pytest verifies it all works.

👉 If you're a beginner learning serverless, clone it and read the commits one by one. Each commit = one concept. Each file = one lesson. Built from scratch, so nothing is hidden.

🔥 This is Day 13 of a 50-day series. A new technology every day. Follow along!
🌐 See all days: https://lnkd.in/dhDN6Z3F

#TechFromZero #Day13 #AWSLambda #Serverless #Python #Pillow #LearnByDoing #OpenSource #BeginnerGuide #100DaysOfCode #CodingFromScratch #AWS #CloudComputing #FaaS
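The handler(event, context) contract from step 3 can be sketched like this. The response shape follows the AWS API Gateway proxy format (statusCode / headers / body); the image-processing step is a placeholder, not the repo's actual Pillow code:

```python
# Minimal sketch of the Lambda handler contract for an API Gateway
# proxy request. The "processing" is a pass-through placeholder.
import base64
import json

def handler(event, context):
    try:
        body = json.loads(event.get("body") or "{}")
        operation = body.get("operation", "resize")
        image_bytes = base64.b64decode(body["image"])   # base64-encoded input
        # ... real code would hand image_bytes to Pillow here ...
        result = base64.b64encode(image_bytes).decode()
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"operation": operation, "image": result}),
        }
    except KeyError as exc:
        # Malformed request: respond with 400 instead of crashing.
        return {"statusCode": 400,
                "body": json.dumps({"error": f"missing field: {exc}"})}

# A test event shaped like what API Gateway sends:
event = {"body": json.dumps({"operation": "grayscale",
                             "image": base64.b64encode(b"fake-bytes").decode()})}
response = handler(event, None)
```

This is why the local Flask simulator works: anything that builds an event dict in this shape and reads back the statusCode/body pair behaves like API Gateway as far as the handler is concerned.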
Here's an MCP server for CadQuery I vibe-coded with Anthropic Claude Code. CadQuery is a parametric CAD library for Python.

AI models are spatial-reasoning blind: they write the code to generate Parts A and B, but they don't know if Part A is actually next to Part B as it's supposed to be (or if it's even the shape they think it's going to be).

That's where this MCP server comes in: an AI model/agent sends it the CadQuery code it has written, then the server interprets that code and generates PNG renderings or 3D model files (3MF and GLB). The model, if it's multimodal, can then use its computer-vision capabilities to evaluate whether the thing looks right, and if it doesn't, where the problem is, which it then uses to iterate on its design.

This is a thin wrapper, but it kinda works. I have one running on Google Cloud Run. Here you go: https://lnkd.in/g6XREpn4