Dockerizing Flask + MySQL App with Custom Network and Dependencies

📦 Day 36 #90DaysOfDevOps
🚨 From “It works on my machine” → to a fully Dockerized 2-tier app

Today’s learning hit different. I built and containerized a Flask + MySQL application, and what looked simple at first quickly turned into a deep dive into how things actually work behind the scenes.

💥 It started with a simple goal: “Run my Flask app inside Docker.” But then…
❌ My app couldn’t connect to MySQL → turns out, localhost inside a container ≠ my machine
❌ The build kept failing with “pkg-config not found” → learned that some Python packages (like mysqlclient) need system-level dependencies
❌ Even after fixing everything, the app still crashed → MySQL wasn’t “ready” when Flask started

🔍 Here’s what I implemented to fix it (see the compose sketch after this post):
✅ Created a custom Docker network for container communication
✅ Replaced localhost with the service name (db)
✅ Installed the required system packages (gcc, libmysqlclient-dev, pkg-config)
✅ Added healthchecks using mysqladmin ping
✅ Used depends_on with service_healthy to ensure the proper startup order
✅ Secured the container by running as a non-root user
✅ Managed config through environment variables

⚙️ Final setup:
- Flask app running in one container
- MySQL running in another
- Both connected via a Docker network
- Fully reproducible with Docker

📦 Docker Hub (pull & run): https://lnkd.in/gt3749CC
📁 GitHub: https://lnkd.in/gZ5g623i

💡 Biggest takeaway: containerization is not just about “docker build & run” — it’s about understanding networking, dependencies, startup timing, and debugging real failures.

This project felt like a real DevOps scenario rather than just a tutorial. If you’ve faced similar issues while working with Docker, I’d love to hear your experience 👇

#Docker #DevOps #Flask #MySQL #LearningInPublic #BuildInPublic #OpenToWork #dockerproject #TrainWithShubham
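A minimal docker-compose sketch of that setup: the db service name, the mysqladmin ping healthcheck, the service_healthy condition, and the non-root user come from the post; image tags, ports, credentials, and env var names are illustrative placeholders, not the project's actual values.

```yaml
# docker-compose.yml (sketch): image tags, ports, and credentials are placeholders
services:
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: flaskapp
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 10
    networks: [appnet]

  web:
    build: .
    user: "1000:1000"              # run as a non-root user
    environment:
      DB_HOST: db                  # the service name, not localhost
      DB_NAME: flaskapp
    depends_on:
      db:
        condition: service_healthy # wait until mysqladmin ping succeeds
    ports:
      - "5000:5000"
    networks: [appnet]

networks:
  appnet: {}                       # custom network for container-to-container DNS
```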
More Relevant Posts
Recently, while setting up a Python-based auth service using FastAPI and PostgreSQL, I ran into an issue that many of us have probably faced but don’t always talk about.

The application was failing with a database connection error, even though everything “looked” correct. The root cause turned out to be something simple but important: mixing Docker-based configuration with a local development setup. Using postgres as a hostname works perfectly inside Docker networks, but when running the app locally with uvicorn, the correct host should be localhost. Small detail, but it completely breaks the connection if overlooked.

Another issue I encountered was with the SQLAlchemy setup. My models were importing Base, but it wasn’t defined properly in the database module. This led to an import error during application startup. Fixing it required properly initializing declarative_base() and ensuring models were correctly registered.

A couple of key takeaways from this experience:
> Environment-specific configurations matter more than we think
> Avoid hardcoding values — always rely on environment variables
> Don’t connect to the database during module import
> Ensure the ORM base and models are structured cleanly

What I appreciated most was how these small fixes significantly improved the overall architecture. Moving toward a cleaner separation of config, database, repositories, and services makes the system more scalable and production-ready.

These are the kinds of practical issues that don’t always show up in tutorials but are very real in day-to-day development. If you’re working with FastAPI, SQLAlchemy, or setting up microservices, I’d be curious to know what common pitfalls you’ve run into.

#Python #FastAPI #PostgreSQL #SQLAlchemy #BackendDevelopment #Microservices #SoftwareEngineering #Debugging #LearningJourney
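A minimal sketch of such a database module, assuming SQLAlchemy 1.4+; the env var names and defaults are illustrative. The host comes from the environment, Base is defined once via declarative_base(), and nothing connects at import time (create_engine() is lazy).

```python
# database.py (sketch): env var names and defaults are assumptions
import os

from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

# "postgres" works inside a Docker network; "localhost" for local uvicorn runs.
DB_HOST = os.getenv("DB_HOST", "localhost")
DB_USER = os.getenv("DB_USER", "auth")
DB_PASSWORD = os.getenv("DB_PASSWORD", "")
DB_NAME = os.getenv("DB_NAME", "auth")

DATABASE_URL = f"postgresql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:5432/{DB_NAME}"

# create_engine() connects lazily, so importing this module never touches the DB.
engine = create_engine(DATABASE_URL)
SessionLocal = sessionmaker(bind=engine, autoflush=False)

# Define Base once here; model modules import it from this module.
Base = declarative_base()
```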
Sunday ship: @perryts/postgres is out. A pure-TypeScript Postgres driver that speaks the wire protocol directly. No libpq. No native addons. No FFI.

Why another one?
→ Every Node Postgres driver worth using wraps libpq or ships a platform-specific .node addon. That’s a dead end for AOT: Perry compiles TypeScript to a statically-linked native binary via LLVM, and there’s no V8 at execution time to host a C addon.
→ The Perry-native Postgres GUI this drives (Tusk) needed things most drivers throw away for ergonomics: exact numeric (not float), full column metadata (attnum, tableOid, typmod), structured errors with every documented ErrorResponse field, and raw row bytes on demand.

So: one TypeScript source, three runtime targets.
→ Node.js 22+
→ Bun 1.3+
→ Perry AOT → 4.6 MB static binary, 1.8 MB RSS, single-digit-ms cold start

Honest performance story: V8’s JIT beats Perry-native on per-query wall time in a warm long-running process. Perry wins everywhere else — cold start, memory footprint, deploy size, and the platforms Node and Bun can’t reach at all (CLIs, serverless cold paths, mobile, embedded Linux).

What’s in the box: SCRAM-SHA-256 / MD5 / cleartext auth, TLS with mid-stream upgrade, simple + extended query, 20 type codecs, exact numeric via a Decimal wrapper, structured PgError, the cancel protocol, LISTEN/NOTIFY, a connection pool, transactions, libpq URLs, PG* env vars.

MIT. Feedback welcome. https://lnkd.in/dyeDTJG7
Day 9 - "It works on my machine." Docker fixes that sentence permanently. Here's how — with a real 3-service app, not a toy tutorial. 🚀TechFromZero Series - DockerFromZero This isn't a Hello World. It's a real multi-container application: 📐 Client → FastAPI (Python) → MongoDB (Database) → Redis (Cache) — all orchestrated with Docker Compose 🔗 The full code (with step-by-step commits you can follow): https://lnkd.in/dtnykq35 🧱 What I built (step by step): 1️⃣ Project scaffold — FastAPI app with async endpoints and health check 2️⃣ Dockerfile — FROM, COPY, RUN, EXPOSE, CMD with layer caching explained 3️⃣ .dockerignore — keep secrets and junk out of images 4️⃣ MongoDB connection — async motor driver, Docker DNS (service names, not localhost) 5️⃣ Weather API client — httpx async calls to OpenWeatherMap from inside a container 6️⃣ Full CRUD endpoints — log, list, stats, filter, delete weather data 7️⃣ Docker Compose — 3 services, health checks, depends_on, named volumes, custom network 8️⃣ Redis caching — 5-minute TTL, sub-millisecond cache hits vs 300ms API calls 9️⃣ README — architecture diagram, Docker cheat sheet, step-by-step guide 💡 Every file has detailed comments explaining WHY, not just what. Written for any beginner who wants to learn Docker by reading real code — with full clarity on each step. 👉 If you're a beginner learning Docker, clone it and read the commits one by one. Each commit = one concept. Each file = one lesson. Built from scratch, so nothing is hidden. 🔥 This is Day 9 of a 50-day series. A new technology every day. Follow along! 🌐 See all days: https://lnkd.in/dhDN6Z3F #TechFromZero #Day9 #Docker #DockerCompose #FastAPI #MongoDB #Redis #Python #Containers #DevOps #LearnByDoing #OpenSource #BeginnerGuide #100DaysOfCode #CodingFromScratch
🚀 Just shipped my second backend project — a production-grade Task Management API!

🔗 Live: https://lnkd.in/g_MYFbxs
🐙 GitHub: https://lnkd.in/gppGbTyC

⚙️ What I built:
→ JWT authentication (signup + login)
→ Full CRUD on tasks
→ Role-based access control (user / admin)
→ Paginated task listing
→ Each user sees only their own data
→ Dockerized for local + production
→ Deployed on Render with Supabase PostgreSQL

🛠️ Tech stack: FastAPI · PostgreSQL · SQLAlchemy · Pydantic v2 · JWT · Bcrypt · Docker · Render · Supabase

This project taught me how real backend systems are structured — not just "make it work" but make it secure, scalable, and deployable.

#FastAPI #Python #Backend #Docker #PostgreSQL #JWT #OpenToWork #BuildInPublic #100DaysOfCode
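A minimal sketch of the pagination-plus-ownership pattern those endpoints imply; Task, get_db, get_current_user, and the module paths are assumed helpers for illustration, not the repo's actual names.

```python
# routes/tasks.py (sketch): Task, get_db, get_current_user are assumed helpers
from fastapi import APIRouter, Depends, Query

from app.deps import get_db, get_current_user  # hypothetical module path
from app.models import Task                    # hypothetical module path

router = APIRouter()

@router.get("/tasks")
def list_tasks(
    skip: int = Query(0, ge=0),
    limit: int = Query(20, ge=1, le=100),
    db=Depends(get_db),
    user=Depends(get_current_user),  # resolved from the JWT
):
    # Filtering on owner_id is what keeps each user's data private.
    return (
        db.query(Task)
        .filter(Task.owner_id == user.id)
        .offset(skip)
        .limit(limit)
        .all()
    )
```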
I spent 10 days building an AI agent from scratch. It just triaged a real GitHub issue in 3.88 seconds. Label-prediction F1: 0.967. Here's how 10 days changed everything I thought I knew.

It started with a problem I kept seeing. Open-source maintainers get buried in issues. Half are duplicates. Most are mislabeled. Nobody knows which file to look at. And the person who could fix it fastest doesn't even know the issue exists.

I thought: what if a bot could read your entire codebase and triage every issue the moment it lands?

Day 1, I didn't even understand how GitHub talks to external apps. Spent hours learning what webhooks are, what HMAC signatures do, why smee.io exists. Got my first real payload at midnight. Stared at the terminal like I'd won something.

Day 3, I learned that splitting code by character count is useless. A function sliced in half means nothing to a retrieval system. Built three separate chunkers — tree-sitter for code, heading-aware for docs, thread-aware for issue conversations.

Day 5, the graph layer clicked. Issues link to PRs. PRs link to files. Files link to other issues. Standard RAG ignores all of that. I wrote a 1-hop expander that follows those edges after retrieval. Two SQL queries. Max 20 neighbor chunks. No N+1.

Day 6, the bot posted its first real GitHub comment. Watched the Celery logs. Saw the webhook arrive. Saw retrieval run. Saw the comment appear. I didn't touch anything. It just worked.

Day 8, production broke me. Supabase needed three separate fixes to connect: statement cache size, the transaction-pooler URL, IPv6. "Just use Postgres" is the biggest lie in deployment.

Then came issue #28. I opened a real issue on my own repo, a question about missing authentication docs. 60 seconds later the bot replied. It cited the exact file — backend/app/api/search.py. It pulled the real env variable names from my codebase: GITHUB_APP_ID, VOYAGE_API_KEY. Not hallucinated. Retrieved from the actual code. Read it. Reasoned from it. Cited it.

That's the moment it stopped being a project and became something that actually knows your code.

The final numbers:
→ Label F1: 0.967
→ Precision: 0.950 / Recall: 1.000
→ Latency: 3.88 s avg, 4.24 s p95
→ 150 tests. 0 failing. 102 commits.
→ Deployed: Render + Supabase + Upstash + Qdrant Cloud + Vercel

10 days ago I was mass-applying to internships hoping someone would notice my resume. Today I have a deployed AI agent, real eval metrics, and a build-in-public series that taught me more than any course ever did. The resume got better. But that's not the point. The point is I can build things now.

If you want the full architecture breakdown:
→ Comment "SHIPPED"
→ Connect with me
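The Day 1 webhook plumbing is a well-documented GitHub pattern; a minimal sketch of the HMAC check (the env var name and surrounding framework wiring are illustrative):

```python
# webhook_verify.py (sketch): how GitHub webhook signatures are checked
import hashlib
import hmac
import os

SECRET = os.environ["GITHUB_WEBHOOK_SECRET"].encode()  # env var name is an assumption

def verify_signature(body: bytes, signature_header: str) -> bool:
    # GitHub sends X-Hub-Signature-256: "sha256=<hex HMAC of the raw request body>"
    expected = "sha256=" + hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through comparison timing
    return hmac.compare_digest(expected, signature_header)
```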
Introducing mcp-assert: deterministic testing for MCP servers.

Most MCP tools return structured data: file contents, query results, code locations. The correct output is knowable in advance. You don't need an LLM to grade it: you need assert.Equal.

mcp-assert is a single binary that connects to any MCP server (Go, TypeScript, Python, Rust, Java), calls your tools, and asserts on the results. Define assertions in YAML, run them in CI. No SDK, no LLM, no API costs.

Zero to full coverage in one command:

mcp-assert init evals --server "my-server"

Connects to your server, discovers every tool, generates assertions, captures baselines. Edit the YAMLs to taste, then run them forever.

What it covers:
▫️ 15 deterministic assertion types (contains, json_path, regex, file_unchanged, net_delta, etc.)
▫️ Trajectory assertions: validate that agents call tools in the correct order, with safety gates and absence checks. No server needed.
▫️ Bidirectional MCP: test client capabilities (roots, sampling, elicitation), not just server tools
▫️ Reliability metrics (pass@k / pass^k), regression detection, snapshot testing
▫️ Docker isolation for write-tests
▫️ Same YAML, different servers: test that your Go and Python implementations produce identical results

One-line CI:

- uses: blackwell-systems/mcp-assert-action@v1
  with:
    suite: evals/

We've tested it against 18 server suites across 3 languages with 174 assertions and found real bugs in real servers along the way.

Install however you want:

npx @blackwell-systems/mcp-assert
pip install mcp-assert
brew install blackwell-systems/tap/mcp-assert

Open source, MIT licensed.
GitHub: https://lnkd.in/geE_Fhck
Docs: https://lnkd.in/gw69j42G

If you're building MCP servers, I'd love to hear what you think.

#MCP #ModelContextProtocol #OpenSource #AIAgents #Testing #DevTools
5 weeks ago, I couldn't find a task manager I liked. So I built one. Today — I'm shipping it.

This is Part 5 of my FastAPI series. The final one. And the one with the GitHub link.

Here's everything I built over 5 weeks:
> Week 1 → Set up FastAPI, built the first endpoints, discovered the auto-generated Swagger docs
> Week 2 → Connected MySQL via SQLAlchemy — data finally persisted after a server restart
> Week 3 → Added user registration + JWT authentication — passwords hashed with bcrypt (see the sketch after this post)
> Week 4 → Linked tasks to users — every endpoint protected, every task owned
> Week 5 → Cleaned up secrets with .env, wrote the README, shipped to GitHub

What the final project includes:
1. User registration & login
2. JWT token authentication
3. Full CRUD — create, read, update, delete tasks
4. Tasks tied to authenticated users only
5. MySQL database with SQLAlchemy ORM
6. Auto-generated Swagger UI docs
7. Clean project structure — production ready

The biggest lesson from this series? Consistency beats perfection. I hit bugs every single week: bcrypt version conflicts, Swagger auth issues, a .env file that wouldn't load. Every bug taught me more than any tutorial ever could.

GitHub repo link in the comments. Clone it. Fork it. Break it. Make it yours.

What should I build and document next? Drop your suggestions below 👇

#FastAPI #Python #MySQL #JWT #BackendDevelopment #BuildInPublic #OpenSource #GitHub #APIs #100DaysOfCode
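A minimal sketch of the Week 3 password-hashing step, assuming passlib's bcrypt backend (the function names are illustrative):

```python
# security.py (sketch): passlib with the bcrypt scheme
from passlib.context import CryptContext

pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")

def hash_password(plain: str) -> str:
    # bcrypt salts automatically; the salt is embedded in the returned hash
    return pwd_context.hash(plain)

def verify_password(plain: str, hashed: str) -> bool:
    return pwd_context.verify(plain, hashed)
```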
Built a recommendation engine for a large-scale Odoo setup recently, and the hardest part wasn't the math, it was making it survive production.

A few choices that mattered (two of them sketched below):
* Used PostgreSQL advisory locks so heavy ALS training jobs don't collide with each other. Because nothing says "fun" like two background workers trying to do the same expensive thing at once.
* Used ALS collaborative filtering with sparse CSR matrices so the model could scale without treating memory like an unlimited resource.
* Skipped slow ORM writes in favor of bulk upserts with execute_values, because millions of rows and create() are not friends.
* Added an embeddings + cosine-similarity fallback for new products, so the system can still recommend items even when sales history is basically nonexistent.

The model matters, but production-readiness mattered more here: concurrency control, fast writes, low memory usage, and a fallback for cold-start cases.

#Odoo #PostgreSQL #Python #RecommendationSystems #BackendEngineering
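A rough sketch of the advisory-lock and execute_values choices, assuming psycopg2; the lock key, table, and column names are illustrative, not the project's actual schema.

```python
# train_job.py (sketch): lock key and table/column names are assumptions
import psycopg2
from psycopg2.extras import execute_values

ALS_TRAIN_LOCK = 421337  # arbitrary application-wide advisory lock key

conn = psycopg2.connect("dbname=odoo")

with conn.cursor() as cur:
    # Non-blocking session lock: a second worker gets False and bails out
    cur.execute("SELECT pg_try_advisory_lock(%s)", (ALS_TRAIN_LOCK,))
    if not cur.fetchone()[0]:
        raise SystemExit("another worker is already training")

    try:
        # (user_id, product_id, score) rows produced by the trained ALS model
        rows = [(1, 101, 0.92), (1, 102, 0.71), (2, 101, 0.33)]
        # One round trip per page instead of one INSERT per row via the ORM
        execute_values(
            cur,
            """INSERT INTO product_recommendation (user_id, product_id, score)
               VALUES %s
               ON CONFLICT (user_id, product_id) DO UPDATE SET score = EXCLUDED.score""",
            rows,
            page_size=10000,
        )
        conn.commit()
    finally:
        cur.execute("SELECT pg_advisory_unlock(%s)", (ALS_TRAIN_LOCK,))
        conn.commit()
```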
🚀 Day 4 of "Trying to Become a Backend Developer Without Breaking My Laptop"

Today's episode: The Day I Finally Understood Where My Data Lives 🧠💀

Before today: 👉 "Made the API, bro 😎"
After today: 👉 "But… where is the data being stored?? Who's managing it?? Why is it not showing???" 😵💫

So I officially entered the chaos arena of: Flask + SQLAlchemy + PostgreSQL + DBeaver

And honestly… it started with "this seems easy" and quickly became "why does my database hate me 😭".

Here's what went down 👇
🔹 Met SQLAlchemy (ORM) — basically a translator between Python & SQL (but sometimes even the translator gets confused 🤡)
🔹 Connected Flask to PostgreSQL (wrote the DB URI multiple times… still double-checking it like it's an exam 👀) — see the sketch after this post
🔹 Used DBeaver because I needed visual proof that my tables actually exist 😤
🔹 Created tables → suddenly felt like I'm building something real 🏗️
🔹 Built APIs + performed CRUD operations
→ Create ✔️
→ Read ✔️
→ Update ✔️
→ Delete ✔️
→ Debug… still in progress 🐛😭

💡 Biggest realization today: backend development is not just coding… it's literally convincing your API, your database, and your brain to agree at the same time 🤯

Also, that one moment when your API finally hits the database and returns the correct data? 🎮 Boss level cleared ✨ Instant happiness unlocked 📈

Slowly upgrading from "I made a Flask app" to "I actually understand how data is stored & managed".

Day 4 done ✅ Confidence +1 📈 Errors +10 🐛😂

Let's see what Day 5 brings… hopefully fewer bugs… but let's be honest 😅

#LearningInPublic #BackendJourney #Flask #SQLAlchemy #PostgreSQL #DBeaver #100DaysOfCode #DeveloperLife
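For anyone on the same day: the connection step boils down to a few lines. A minimal sketch assuming Flask-SQLAlchemy, with placeholder credentials, table, and database names.

```python
# app.py (sketch): credentials and names are placeholders
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# URI shape: postgresql://<user>:<password>@<host>:<port>/<database>
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql://user:secret@localhost:5432/mydb"
db = SQLAlchemy(app)

class Task(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(120), nullable=False)

with app.app_context():
    db.create_all()  # this is the moment the tables show up in DBeaver
```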
The common approach to background tasks in Django involves Redis and Celery. However, it's important to remember that defaults are habits, not strict requirements.

In a recent Django API project, a different solution was implemented: using Postgres itself as the task queue, via a concurrency primitive many developers overlook, SELECT ... FOR UPDATE SKIP LOCKED.

This approach features:
- A single table for the queue
- Atomic job claims by workers
- Built-in retries, scheduling, and concurrency control

As a result, the docker-compose setup was simplified from four services to just two.

Is this method suitable for every project? Not necessarily. However, for many Django applications focused on I/O-bound background tasks, it proves to be more than adequate.

The entire journey was documented, detailing the reasons, the methods, and the scenarios where this approach may not be ideal. Read more here: https://lnkd.in/dyhwBQaU
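In the Django ORM, the atomic claim step looks roughly like this; a minimal sketch assuming a simple Job model (the model and field names are illustrative, not the article's actual code).

```python
# worker.py (sketch): Job is an assumed model with status/run_at fields
from django.db import transaction

from myapp.models import Job  # hypothetical app and model

def claim_next_job():
    with transaction.atomic():
        job = (
            Job.objects
            .select_for_update(skip_locked=True)  # skip rows locked by other workers
            .filter(status="pending")
            .order_by("run_at")
            .first()
        )
        if job is None:
            return None  # queue empty, or every pending row is claimed
        job.status = "running"
        job.save(update_fields=["status"])
    return job
```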