Marcos Vinicius Thibes Kemer’s Post

🐛 #PythonJourney | Day 148 — Debugging SQLAlchemy Models & Type Compatibility

Today was about learning through debugging. I encountered multiple SQLAlchemy type compatibility issues and learned valuable lessons about database design.

Key accomplishments:

✅ Fixed critical SQLAlchemy issues:
• JSONB and INET are PostgreSQL-specific types
• They must be imported from sqlalchemy.dialects.postgresql
• Resolved a naming conflict (metadata is reserved on declarative models)

✅ Solved type mismatches:
• User.id must be UUID(as_uuid=True)
• URL.user_id must match User.id's type exactly
• Foreign key constraints require compatible types
• PostgreSQL is strict about type casting

✅ Debugged relationship definitions:
• back_populates must name the matching attribute on the other model
• Cascade deletes prevent orphaned data
• Bidirectional relationships need consistent naming on both sides

✅ Created a test user script:
• Generates database tables automatically
• Creates a sample user with an API key
• Tests database connectivity

✅ All 5 SQLAlchemy models are now production-ready:
• User (authentication)
• URL (shortened URLs)
• Click (event tracking)
• ClickAggregate (analytics summaries)
• AuditLog (compliance)

What I learned today:
→ Database type safety is critical
→ PostgreSQL has its own type system (JSONB, UUID, INET)
→ SQLAlchemy type imports matter: core vs. dialect-specific
→ Error messages contain the actual problem - read them carefully
→ Foreign key constraints are strict about type compatibility

The lesson: sometimes the best learning comes from fixing errors. Each error message was an opportunity to understand the framework better.

#Python #SQLAlchemy #PostgreSQL #DatabaseDesign #Backend #Debugging #SoftwareDevelopment #TechLearning
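The fixes described in the post can be sketched like this (a minimal illustration; the model and column names here are my own, not the author's actual code): the PostgreSQL-only types come from sqlalchemy.dialects.postgresql, the foreign key repeats the exact UUID(as_uuid=True) type of the primary key it references, and the reserved metadata attribute is mapped under a different Python name.

```python
import uuid

from sqlalchemy import Column, ForeignKey, String
from sqlalchemy.dialects.postgresql import INET, JSONB, UUID  # dialect-specific, not core
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    # as_uuid=True: Python code sees uuid.UUID objects, not strings
    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    api_key = Column(String, unique=True)
    # "metadata" is reserved on declarative classes; keep the DB column name
    # but map the attribute under a different Python name
    meta = Column("metadata", JSONB)
    urls = relationship("URL", back_populates="user",
                        cascade="all, delete-orphan")  # no orphaned rows

class URL(Base):
    __tablename__ = "urls"
    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    # The FK type must match users.id exactly: also UUID(as_uuid=True)
    user_id = Column(UUID(as_uuid=True), ForeignKey("users.id", ondelete="CASCADE"))
    last_ip = Column(INET)  # PostgreSQL-only network-address type
    # back_populates names the attribute on the other class, not the class itself
    user = relationship("User", back_populates="urls")
```

Nothing here connects to a database; the type declarations alone are enough to show why a String or plain-UUID user_id would fail the foreign key constraint.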
More Relevant Posts
my FastAPI Journey — Database Integration with SQLAlchemy!

Until now I was storing data in a Python list — which got wiped every time the server restarted! 😅 Today I connected FastAPI to a real database using SQLAlchemy + SQLite — and data is now PERMANENT! 🎉

🔥 What I built today:
A fully functional Notes API connected to a real database with complete CRUD operations!

✅ What I implemented:
📌 SQLite Database — a simple file-based database (notes.db) that stores data permanently on disk
📌 SQLAlchemy ORM — connects Python to the database without writing any raw SQL!
📌 Database Models — defined a Note table with id, title, content, priority and tag columns
📌 Pydantic Schemas — separate models for input validation and response formatting
📌 Dependency Injection — used FastAPI's Depends() to manage database sessions cleanly
📌 Full CRUD API:
→ POST /notes — create and save a note to the database
→ GET /notes — fetch all notes from the database
→ GET /notes/{id} — fetch a specific note by ID
→ DELETE /notes/{id} — delete a note from the database

💡 Key concepts I learned:
🔹 SQLAlchemy Engine — creates the connection to the database file
🔹 SessionLocal — manages database transactions safely
🔹 Base.metadata.create_all() — auto-creates tables, no migrations needed!
🔹 db.add() → db.commit() → db.refresh() — the 3 steps to save data
🔹 response_model — controls exactly what data is returned to the user
🔹 from_attributes = True — lets Pydantic read SQLAlchemy objects directly

⚡ Best part? Restarted the server → the data was still there! 🎉 This is the magic of a real database vs a Python list!

Compare with Django:
→ Django makemigrations → FastAPI create_all() is automatic!
→ Django objects.create() → FastAPI db.add() + db.commit()
→ Django serializers → FastAPI Pydantic schemas

Next up:
🔐 Module 5 — JWT Authentication & Security
🚀 Module 6 — Advanced FastAPI Features
🌍 Module 8 — Deploying to Production

Learning in public — follow along for daily updates! 💪

#FastAPI #Python #SQLAlchemy #Database #SQLite #BackendDevelopment #Django #LearningInPublic #100DaysOfCode #WebDevelopment #API #SoftwareEngineering
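The SQLAlchemy side of the pattern above condenses to a few lines. This is a hedged sketch, not the author's code: it uses an in-memory SQLite URL to stay self-contained (the post's file database would be "sqlite:///notes.db"), and in FastAPI the get_db generator would be injected per request with Depends(get_db).

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

engine = create_engine("sqlite:///:memory:")  # post uses sqlite:///notes.db
SessionLocal = sessionmaker(bind=engine, autoflush=False)
Base = declarative_base()

class Note(Base):
    __tablename__ = "notes"
    id = Column(Integer, primary_key=True)
    title = Column(String, nullable=False)
    content = Column(String)
    priority = Column(Integer, default=1)
    tag = Column(String)

Base.metadata.create_all(bind=engine)  # auto-creates tables, no migrations

def get_db():
    # FastAPI would inject this per request: db: Session = Depends(get_db)
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

# The three-step save pattern from the post
db = next(get_db())
note = Note(title="First note", content="Survives a restart", tag="demo")
db.add(note)
db.commit()
db.refresh(note)  # re-reads DB-generated fields such as the autoincrement id
```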
🚀 Just shipped my first open-source Python package: pg-advisor v0.1.1

A rule-based PostgreSQL advisor that connects to your database and tells you exactly what's wrong, with ready-to-run SQL fixes.

🔍 What it detects:
• Missing primary keys & indexes on foreign keys
• FLOAT used for money columns (precision errors waiting to happen)
• Duplicate & unused indexes
• Slow queries via pg_stat_statements
• Missing created_at / updated_at timestamps
• SELECT * usage, and much more

✨ What makes it different:
→ No AI. No magic. A fully deterministic rule engine.
→ Scans the live DB AND your model files (SQLAlchemy, Django ORM, plain SQL)
→ Generates a timestamped Markdown report automatically
→ Works in CI/CD pipelines out of the box

📦 Install in one line:
pip install pg-advisor

⚡ Run it:
pg-advisor analyze postgresql://user:pass@localhost/mydb

Would love feedback from the community 🙏

🔗 PyPI: https://lnkd.in/dNAn4Ptv

#Python #PostgreSQL #OpenSource #DevTools #Backend #Database #CLI
Recently, while setting up a Python-based auth service using FastAPI and PostgreSQL, I ran into an issue that many of us have probably faced but don't always talk about.

The application was failing with a database connection error, even though everything "looked" correct. The root cause turned out to be something simple but important: mixing Docker-based configuration with a local development setup. Using postgres as a hostname works perfectly inside Docker networks, but when running the app locally with uvicorn, the correct host is localhost. A small detail, but it completely breaks the connection if overlooked.

Another issue I encountered was with the SQLAlchemy setup. My models were importing Base, but it wasn't defined properly in the database module, which led to an import error during application startup. Fixing it required properly initializing declarative_base() and ensuring models were correctly registered.

A couple of key takeaways from this experience:
> Environment-specific configurations matter more than we think
> Avoid hardcoding values — always rely on environment variables
> Don't connect to the database during module import
> Ensure the ORM base and models are structured cleanly

What I appreciated most was how these small fixes significantly improved the overall architecture. Moving toward a cleaner separation of config, database, repositories, and services makes the system more scalable and production-ready.

These are the kinds of practical issues that don't always show up in tutorials but are very real in day-to-day development. If you're working with FastAPI, SQLAlchemy, or setting up microservices, I'd be curious to know what common pitfalls you've run into.

#Python #FastAPI #PostgreSQL #SQLAlchemy #BackendDevelopment #Microservices #SoftwareEngineering #Debugging #LearningJourney
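The takeaways above (environment variables, no hardcoding, no import-time connections, one clean Base) can be sketched as a small database module. Names and the fallback URL are illustrative, not the author's actual service:

```python
import os

from sqlalchemy import create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

# "postgres" resolves only inside the Docker network; "localhost" is the right
# host when running uvicorn directly on the machine. Read the URL from the
# environment rather than hardcoding either.
DATABASE_URL = os.getenv(
    "DATABASE_URL",
    "postgresql://app:secret@localhost:5432/authdb",  # local-dev fallback
)

Base = declarative_base()  # defined once here; every models module imports it

def make_engine(url: str = DATABASE_URL):
    # Built on demand, not at module import, so importing this module
    # never constructs an engine or touches the database
    return create_engine(url, pool_pre_ping=True)

def make_session_factory(engine):
    return sessionmaker(bind=engine, autoflush=False)
```

Keeping engine construction inside a function is what makes "don't connect during module import" enforceable: tests and scripts can pass a different URL without patching globals.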
Day 15 of 30 -- SQLAlchemy ORM Internals
Phase 3 -- Backend and APIs

An API endpoint was making 401 database queries. Every. Single. Request. Nobody noticed for weeks. The dashboard just felt slow.

The fix: 4 lines of code. One selectinload. One joinedload. Down to 3 queries. 3.2 seconds to 31 milliseconds.

The culprit was the N+1 problem. When you fetch 50 orders and access order.customer in a loop, SQLAlchemy fires a separate SELECT for each customer. 50 orders = 51 queries. Add items and products and you get 401 queries per page load.

Today's topic covers:
• Identity Map and Unit of Work -- the two patterns that explain 90% of SQLAlchemy behavior
• The 4 session states -- Transient, Pending, Persistent, Detached -- and what each means
• The N+1 problem with a visual query trace -- 401 queries vs 3
• 6 loading strategies -- lazy, joinedload, selectinload, subqueryload, raiseload, noload
• When to use joinedload vs selectinload -- the to-one vs to-many rule
• Full annotated syntax -- model definition, eager loading chains, unit-of-work commit
• A real order dashboard -- 401 queries reduced to 3 with 4 lines of options()
• 5 mistakes, including DetachedInstanceError and joinedload on to-many
• 5 best practices, including lazy="raise" as the production default

Key insight: every ORM hides SQL from you. The good engineers know exactly what SQL their ORM is generating -- and why.

#Python #SQLAlchemy #ORM #BackendDevelopment #Database #100DaysOfCode #FastAPI #TechContent #BuildInPublic #TechIndia #SoftwareEngineering #LinkedInCreator #LearnPython #OpenToWork #DataEngineering #PythonTutorial
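The N+1 effect described above is easy to reproduce and count. A minimal sketch with invented Order/Customer models, using SQLite and a cursor-execute event listener to count the SQL statements each loading strategy emits:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine, event
from sqlalchemy.orm import (Session, declarative_base, joinedload,
                            relationship, selectinload)

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customers"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    customer_id = Column(ForeignKey("customers.id"))
    customer = relationship(Customer)  # lazy="select" by default

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add_all(Order(customer=Customer(name=f"c{i}")) for i in range(5))
    s.commit()

statements = []
event.listen(engine, "before_cursor_execute",
             lambda conn, cur, stmt, *args: statements.append(stmt))

def count_queries(load_option):
    statements.clear()
    with Session(engine) as s:
        q = s.query(Order)
        if load_option is not None:
            q = q.options(load_option)
        [o.customer.name for o in q.all()]  # touching the relationship
    return len(statements)

lazy = count_queries(None)                              # 1 + one SELECT per customer
selectin = count_queries(selectinload(Order.customer))  # 2: orders, then one IN query
joined = count_queries(joinedload(Order.customer))      # 1: single LEFT JOIN
```

With 5 orders the lazy path fires 6 queries, selectinload drops that to 2, and joinedload (the to-one case) does it in 1, which is exactly the 401-to-3 mechanism from the post scaled down.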
A few months ago, I barely knew what an ORM was. Today I'm designing relational databases from scratch and querying them with raw SQL like it's second nature. Here's what I've been building 👇

🛠️ The Project
A full data modelling and SQL project built in Python — designing schemas, seeding realistic test data, and running analytical queries against a live PostgreSQL database.

📐 The Stack
→ SQLAlchemy ORM to define clean, Pythonic relational models
→ PostgreSQL as the database engine (running locally via Docker)
→ pgcli for a smoother terminal querying experience with syntax highlighting and autocomplete
→ Claude Code inside VS Code as my AI pair programmer

🗂️ The Schema
I modelled four core entities and their relationships:
• Users → with emails, names, and timestamps
• Addresses → linked to users via foreign key, with a default flag
• Products → with categories, pricing, stock, and unique SKUs
• Orders → tying it all together

The thing nobody tells you about data engineering: the modelling decisions you make early ripple through everything downstream. Get the foreign keys wrong and your joins become a nightmare. I learned that the hard way — which is honestly the best way.

💡 Key Takeaways
→ SQLAlchemy keeps your schema readable and maintainable without writing raw DDL
→ pgcli makes working in the terminal genuinely enjoyable
→ Thinking carefully about entity relationships before writing a single line of code saves you hours of refactoring later
→ Seeding realistic synthetic data early forces you to stress-test your schema assumptions

📍 What's Next
Layering in complex analytical queries, exploring how this data model feeds into a broader pipeline, and eventually connecting it to a transformation layer with dbt.

Always building. The fundamentals matter more than the frameworks. 🚀

#DataEngineering #SQL #Python #SQLAlchemy #PostgreSQL #LearningInPublic #BuildInPublic #MachineLearning #DataScience #CareerJourney
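The four-entity schema described above could look roughly like this in SQLAlchemy. A sketch only: the exact columns are my guesses from the post's descriptions, and SQLite stands in for the Dockerised PostgreSQL so the DDL is checkable without a server.

```python
from sqlalchemy import (Boolean, Column, DateTime, ForeignKey, Integer,
                        Numeric, String, create_engine, func)
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    email = Column(String, unique=True, nullable=False)
    name = Column(String)
    created_at = Column(DateTime, server_default=func.now())
    addresses = relationship("Address", back_populates="user")

class Address(Base):
    __tablename__ = "addresses"
    id = Column(Integer, primary_key=True)
    user_id = Column(ForeignKey("users.id"), nullable=False)
    is_default = Column(Boolean, default=False)
    user = relationship("User", back_populates="addresses")

class Product(Base):
    __tablename__ = "products"
    id = Column(Integer, primary_key=True)
    sku = Column(String, unique=True, nullable=False)
    category = Column(String)
    price = Column(Numeric(10, 2))  # exact decimal for money, never FLOAT
    stock = Column(Integer, default=0)

class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    user_id = Column(ForeignKey("users.id"), nullable=False)
    product_id = Column(ForeignKey("products.id"), nullable=False)
    quantity = Column(Integer, default=1)

engine = create_engine("sqlite://")  # PostgreSQL stand-in for the sketch
Base.metadata.create_all(engine)
```

Getting the ForeignKey targets right here is the "decisions ripple downstream" point: every later join follows these declarations.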
🚀 Built Lightweight Async ORMs for FastAPI (Inspired by LoopBack)

While working on FastAPI projects, I got an idea based on my previous experience with LoopBack — what if we could have a simpler ORM with:
• minimal boilerplate
• built-in relation loading
• straightforward query syntax

So I built two async ORMs:
• oceanic-mysql-orm — built on aiomysql
• oceanic-postgres-orm — built on asyncpg

💡 Key Features
• Async-first (no session management)
• Automatic relation loading (no N+1 issues)
• Auto-migrate (additive only — never drops columns)
• Simple dict-based query system
• SQL echo mode for debugging

⚡ Example

users = await connector.find(User, {
    "where": {"status": "active"},
    "include": ["posts"],
    "limit": 20
})

🔥 PostgreSQL Extras
• Soft deletes (deleted_at handled automatically)
• Raw SQL support when needed
• Nested includes (orders.items.product)
• Advanced filters (ilike, regexp, between)

⚠️ Scope
This is intentionally designed for simplicity:
• No complex JOIN builder
• No multi-database abstraction

SQLAlchemy is still a great choice for large, complex systems. This is aimed at the 80% use case where you want to build and ship quickly.

📦 Installation
pip install oceanic-mysql-orm
pip install oceanic-postgres-orm

🚧 Status
Both packages are v1 (early stage). They're functional, but I'd really value feedback from developers working with FastAPI.

🔗 Full Guide
Complete usage guide here: https://lnkd.in/dj5eY4aN
⚡ Connection Pooling in FastAPI with PostgreSQL (Why it matters)

When I started building APIs with FastAPI + PostgreSQL, I made a common mistake 👇
👉 Opening a new database connection for every request

It worked… until traffic increased 😅

❌ Problem:
• Too many open connections
• Slower response times
• Database overload

💡 Solution: Connection Pooling
Instead of creating a new connection every time, we reuse a pool of existing connections.

✅ Benefits:
✔ Faster API responses
✔ Better resource management
✔ Handles high traffic efficiently

🔧 Example (SQLAlchemy):

from sqlalchemy import create_engine

engine = create_engine(
    "postgresql://user:password@localhost/db",
    pool_size=10,
    max_overflow=20,
    pool_timeout=30,
)

💡 What I learned: if you're building production APIs with FastAPI, connection pooling is not optional — it's essential.

🚀 Next step: combining this with async DB handling for even better performance

#FastAPI #PostgreSQL #Backend #Python #APIs #WebDevelopment
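The pool parameters in the post's example can be watched in action without a PostgreSQL server. A sketch under one assumption: a file-based SQLite database stands in for Postgres, with QueuePool (the pool SQLAlchemy uses for PostgreSQL URLs by default) requested explicitly.

```python
import os
import tempfile

from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

# SQLite file DB standing in for PostgreSQL so the pool is observable locally
url = "sqlite:///" + os.path.join(tempfile.gettempdir(), "pool_demo.db")

engine = create_engine(
    url,
    poolclass=QueuePool,
    pool_size=2,      # connections kept open in the pool
    max_overflow=1,   # extra connections allowed under burst load
    pool_timeout=5,   # seconds to wait for a free connection before erroring
)

c1, c2 = engine.connect(), engine.connect()  # fills pool_size
c3 = engine.connect()                        # takes the single overflow slot
peak = engine.pool.checkedout()              # 3 connections in use

c1.close()                                   # returned to the pool, not torn down
c4 = engine.connect()                        # reuses the returned connection
after_reuse = engine.pool.checkedout()       # back to 3, no new socket opened
```

Closing a pooled connection returns it for reuse rather than closing the socket, which is exactly why pooling beats a connection-per-request.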
🚀 Day 4 of "Trying to Become a Backend Developer Without Breaking My Laptop"

Today's episode: The Day I Finally Understood Where My Data Lives 🧠💀

Before today:
👉 "Made the API, bro 😎"
After today:
👉 "But… where is the data actually stored?? who is managing it?? why is it not showing???" 😵💫

So I officially entered the chaos arena of: Flask + SQLAlchemy + PostgreSQL + DBeaver

And honestly… it started with "this seems easy" and quickly became "why does my database hate me 😭"

Here's what went down 👇

🔹 Met SQLAlchemy (ORM) — basically a translator between Python & SQL (but sometimes even the translator gets confused 🤡)
🔹 Connected Flask to PostgreSQL (wrote the DB URI multiple times… still double-checking it like it's an exam 👀)
🔹 Used DBeaver because I needed visual proof that my tables actually exist 😤
🔹 Created tables → suddenly felt like I'm building something real 🏗️
🔹 Built APIs + performed CRUD operations
→ Create ✔️
→ Read ✔️
→ Update ✔️
→ Delete ✔️
→ Debug… still in progress 🐛😭

💡 Biggest realization today: backend development is not just coding… it's literally convincing your API, your database, and your brain to agree at the same time 🤯

Also, that one moment when your API finally hits the database and returns the correct data? 🎮 Boss level cleared ✨ Instant happiness unlocked 📈

Slowly upgrading from "I made a Flask app" to "I actually understand how data is stored & managed".

Day 4 done ✅
Confidence +1 📈
Errors +10 🐛😂

Let's see what Day 5 brings… hopefully fewer bugs… but let's be honest 😅

#LearningInPublic #BackendJourney #Flask #SQLAlchemy #PostgreSQL #DBeaver #100DaysOfCode #DeveloperLife
Our Prisma backend started throwing `P2037 — Too many database connections`. `pg_stat_activity` showed 96 idle sessions and 1 active query. The DB was holding connections nobody was using.

Our stack: a Next.js/Prisma app (boring, correct, `connection_limit=5`) and a Python service running FastAPI + Celery prefork workers on the same Postgres. The Railway plan caps `max_connections=100`. We had four slots of headroom — and then we didn't.

Three mistakes were compounding on the Python side, and each one multiplied the next:

1. Two SQLAlchemy engines for one database. `db_service.py` created one. `prompt_loader.py`, added months later for LLM prompt storage, created a second pointing at the same URL. Two engines = two independent pools. Every LLM call touched both.

2. Default pool settings. Neither engine set `pool_size` or `max_overflow`. SQLAlchemy defaults to `pool_size=5, max_overflow=10`. That's 15 per engine per process. Two engines × 15 = 30 per Python process.

3. Fork without `engine.dispose()`. Celery prefork imports your code in the master (engines created at import), then forks N children. Children inherit the master's open sockets, and Postgres sees them as live sessions. SQLAlchemy in the child doesn't know they were inherited, so on first query it opens a fresh one — leaving the inherited socket as an idle zombie.

4 workers × 30 connections × fork leak ≈ we blew past `max_connections=100` in minutes.

The fix was three-part: a single engine module for the whole service, explicit `pool_size=3, max_overflow=2`, and a `worker_process_init` signal handler that calls `engine.dispose()` in every forked child so each starts clean.

The deeper takeaway: fork-safety isn't a Celery-specific concern. Anything that creates pooled network resources at import time — ORMs, Redis clients, gRPC channels — needs a post-fork reset if you're running under preforking servers like Celery, Gunicorn, or uWSGI. Import-time globals are a landmine the moment a `fork()` is involved.
Full write-up + code: https://lnkd.in/gKKwgfVN #engineering #backend #postgres #python #softwareengineering
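The dispose-after-fork fix described in the post can be sketched with the standard library's fork support standing in for Celery's prefork model (POSIX only; SQLite stands in for Postgres, and the engine/path names are illustrative). In the real service the same `engine.dispose()` call would be wired to Celery's `worker_process_init` signal.

```python
import multiprocessing as mp
import os
import tempfile

from sqlalchemy import create_engine, text
from sqlalchemy.pool import QueuePool

# Module-level engine: exactly the pattern that leaks under prefork servers
db_path = os.path.join(tempfile.gettempdir(), "fork_demo.db")
engine = create_engine(f"sqlite:///{db_path}", poolclass=QueuePool,
                       pool_size=3, max_overflow=2)

def init_worker():
    # In Celery this would run from the worker_process_init signal:
    #   from celery.signals import worker_process_init
    #   worker_process_init.connect(lambda **kw: engine.dispose())
    engine.dispose()  # drop inherited pooled connections; the child starts clean

def worker(q):
    init_worker()
    with engine.connect() as conn:
        q.put(conn.execute(text("SELECT 1")).scalar())

# Prime the parent's pool so there is a connection for the child to inherit
with engine.connect() as conn:
    conn.execute(text("SELECT 1"))

ctx = mp.get_context("fork")  # mirrors Celery's prefork model; POSIX only
q = ctx.Queue()
p = ctx.Process(target=worker, args=(q,))
p.start()
result = q.get()
p.join()
```

One refinement worth noting: recent SQLAlchemy versions also offer `engine.dispose(close=False)` for exactly this post-fork case, so the child abandons the inherited connections without closing sockets the parent may still be using.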
Day 2/60: Production Infrastructure That Actually Scales

What most developers do: start with SQLite. Hardcode credentials. Skip migrations. Write blocking database calls. Wonder why it breaks at 10K users.

What I built today:
✅ Async SQLAlchemy 2.0 with connection pooling
✅ Docker Compose (PostgreSQL + Redis + Backend)
✅ Alembic migration system with rollback
✅ Database health checks and monitoring
✅ Multi-stage Docker builds (40% smaller images)
✅ Development scripts (init, validate, wait-for-db)
✅ 31 tests, 100% coverage on the database layer

Technical decisions:
• Async everything: non-blocking I/O handles 100 concurrent users on a single thread
• Connection pooling: QueuePool (5+10) for PostgreSQL, NullPool for SQLite
• Health checks: pg_isready with retry logic; services wait for their dependencies
• Type safety: mypy --strict passes; Mapped[T] catches bugs before runtime

Architecture highlight: a DatabaseManager singleton manages the lifecycle. Session context managers handle transactions. Automatic rollback on errors. Zero connection leaks.

Why it matters: technical debt is a choice. Building for 10K users from day one means adding workers when growth comes, not rewriting the database layer.

What's working:
```
docker-compose up -d  → All services healthy
pytest                → 31/31 tests passing
Database connection   → ✅ Validated
```

Metrics:
- 11 new files
- 1,800 lines of production code
- 600 lines of documentation (DATABASE.md)
- 100% test coverage on new code
- 0 linting errors

Day 3 tomorrow: database models (User, Organization, Channel, Post). First Alembic migration. Schema design for ML features.

Buffer — building a solid foundation for your API ecosystem. Would love to connect.

Repository: https://lnkd.in/g8pdgJvM
Medium blog: https://lnkd.in/gRrs6WaR

#BufferIQ #BuildingInPublic #DatabaseEngineering #Docker #Python #PostgreSQL #SQLAlchemy #SoftwareArchitecture #Buffer