#PythonJourney | Day 147 — SQLAlchemy Models & API Endpoints Implementation

Today was all about connecting the database layer with the API. This is where FastAPI meets SQLAlchemy and everything starts working together.

Key accomplishments:

✅ Created comprehensive SQLAlchemy models:
• User model (authentication & API keys)
• URL model (main shortening logic)
• Click model (event tracking)
• ClickAggregate model (analytics summaries)
• AuditLog model (compliance & debugging)

✅ Fixed PostgreSQL-specific imports:
• JSONB type for flexible data storage
• Proper index configuration
• Relationship definitions with cascading deletes

✅ Implemented 7 API endpoints:
• POST /api/v1/urls (create shortened URL)
• GET /{short_code} (redirect to original)
• GET /api/v1/urls (list user's URLs)
• GET /api/v1/urls/{url_id} (get URL details)
• GET /api/v1/urls/{url_id}/analytics (get analytics)
• DELETE /api/v1/urls/{url_id} (delete URL)
• GET /health (health check)

✅ Integrated database operations:
• User authentication via API key
• Permission checks (users can only see their own URLs)
• Click tracking with geolocation & device detection
• Soft deletes for data integrity
• Audit logging for compliance

✅ Created a test-user creation script

What I learned:
→ SQLAlchemy relationships make database operations elegant
→ A proper indexing strategy is crucial for performance
→ Cascade deletes prevent orphaned data
→ API key authentication is simpler than JWT for this use case
→ JSONB allows storing flexible analytics data in PostgreSQL

The API is now fully functional with a real database. Next: write comprehensive tests and handle edge cases.

#Python #FastAPI #SQLAlchemy #PostgreSQL #Backend #API #DatabaseDesign #SoftwareDevelopment
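A minimal sketch of how two of these models might wire together, with a bidirectional relationship and cascading deletes. Class and column names are illustrative guesses, not the actual project code, and SQLite stands in for PostgreSQL:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    api_key = Column(String, unique=True, index=True)
    # delete-orphan: removing a User also removes their URL rows
    urls = relationship("URL", back_populates="user",
                        cascade="all, delete-orphan")

class URL(Base):
    __tablename__ = "urls"
    id = Column(Integer, primary_key=True)
    short_code = Column(String, unique=True, index=True)
    original_url = Column(String, nullable=False)
    user_id = Column(Integer, ForeignKey("users.id"), nullable=False)
    user = relationship("User", back_populates="urls")

engine = create_engine("sqlite://")  # in-memory stand-in for PostgreSQL
Base.metadata.create_all(engine)

with Session(engine) as session:
    user = User(api_key="test-key")
    user.urls.append(URL(short_code="abc123",
                         original_url="https://example.com"))
    session.add(user)
    session.commit()

    session.delete(user)   # cascade removes the URL row too
    session.commit()
    remaining = session.query(URL).count()
```

The cascade is what prevents the "orphaned data" the post mentions: the child URL row never survives its parent.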
Marcos Vinicius Thibes Kemer’s Post
my FastAPI Journey — Database Integration with SQLAlchemy!

Until now I was storing data in a Python list — which gets wiped every time the server restarts! 😅 Today I connected FastAPI to a real database using SQLAlchemy + SQLite — and data is now PERMANENT! 🎉

🔥 What I built today: A fully functional Notes API connected to a real database with complete CRUD operations!

✅ What I implemented:
📌 SQLite Database — Simple file-based database (notes.db) that stores data permanently on disk
📌 SQLAlchemy ORM — Connects Python to the database without writing any raw SQL!
📌 Database Models — Defined a Note table with id, title, content, priority and tag columns
📌 Pydantic Schemas — Separate models for input validation and response formatting
📌 Dependency Injection — Used FastAPI's Depends() to manage database sessions cleanly
📌 Full CRUD API:
→ POST /notes — Create and save a note to the database
→ GET /notes — Fetch all notes from the database
→ GET /notes/{id} — Fetch a specific note by ID
→ DELETE /notes/{id} — Delete a note from the database

💡 Key concepts I learned:
🔹 SQLAlchemy Engine — Creates the connection to the database file
🔹 SessionLocal — Manages database transactions safely
🔹 Base.metadata.create_all() — Auto-creates tables, no migrations needed!
🔹 db.add() → db.commit() → db.refresh() — The 3 steps to save data
🔹 response_model — Controls exactly what data is returned to the user
🔹 from_attributes = True — Allows Pydantic to read SQLAlchemy objects directly

⚡ Best part? Restarted the server → data was still there! 🎉 This is the magic of a real database vs a Python list!

Compare with Django:
→ Django makemigrations → FastAPI create_all() automatic!
→ Django objects.create() → FastAPI db.add() + db.commit()
→ Django serializers → FastAPI Pydantic schemas

Next up:
🔐 Module 5 — JWT Authentication & Security
🚀 Module 6 — Advanced FastAPI Features
🌍 Module 8 — Deploying to Production

Learning in public — follow along for daily updates!
💪 #FastAPI #Python #SQLAlchemy #Database #SQLite #BackendDevelopment #Django #LearningInPublic #100DaysOfCode #WebDevelopment #API #SoftwareEngineering
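The db.add() → db.commit() → db.refresh() save flow described in the post above, sketched with a hypothetical Note model (columns as listed in the post). An in-memory SQLite URL is used here so the snippet is self-contained; swap in `sqlite:///notes.db` for the on-disk file:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Note(Base):
    __tablename__ = "notes"
    id = Column(Integer, primary_key=True, index=True)
    title = Column(String, nullable=False)
    content = Column(String)
    priority = Column(Integer, default=1)
    tag = Column(String)

engine = create_engine("sqlite://")   # "sqlite:///notes.db" for a file on disk
Base.metadata.create_all(engine)      # auto-creates the table, no migrations
SessionLocal = sessionmaker(bind=engine)

db = SessionLocal()
note = Note(title="Buy milk", content="2 liters", tag="errands")
db.add(note)        # 1. stage the object in the session
db.commit()         # 2. write it to the database
db.refresh(note)    # 3. reload DB-generated values (the autoincrement id)
saved_id = note.id
db.close()
```

In a FastAPI app, SessionLocal is usually wrapped in a `get_db()` generator handed to Depends(), so every request gets its own session that is closed afterwards.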
🐛 #PythonJourney | Day 148 — Debugging SQLAlchemy Models & Type Compatibility

Today was about learning through debugging. I encountered multiple SQLAlchemy type-compatibility issues and learned valuable lessons about database design.

Key accomplishments:

✅ Fixed critical SQLAlchemy issues:
• JSONB and INET are PostgreSQL-specific types
• They must be imported from sqlalchemy.dialects.postgresql
• Resolved naming conflicts (metadata is a reserved attribute name)

✅ Solved type mismatches:
• User.id must be UUID(as_uuid=True)
• URL.user_id must match User.id's type exactly
• Foreign key constraints require compatible types
• PostgreSQL is strict about type casting

✅ Debugged relationship definitions:
• back_populates must reference the correct class names
• Cascade deletes prevent orphaned data
• Bidirectional relationships need proper naming

✅ Created a test user script:
• Generates database tables automatically
• Creates a sample user with an API key
• Tests database connectivity

✅ All 5 SQLAlchemy models are now production-ready:
• User (authentication)
• URL (shortened URLs)
• Click (event tracking)
• ClickAggregate (analytics summaries)
• AuditLog (compliance)

What I learned today:
→ Database type safety is critical
→ PostgreSQL has its own type system (JSONB, UUID, INET)
→ SQLAlchemy type imports matter - core vs. dialect-specific
→ Error messages contain the actual problem - read them carefully
→ Foreign key constraints are strict about type compatibility

The lesson: sometimes the best learning comes from fixing errors. Each error message was an opportunity to understand the framework better.

#Python #SQLAlchemy #PostgreSQL #DatabaseDesign #Backend #Debugging #SoftwareDevelopment #TechLearning
Your green CI pipeline is lying to you. ✅

It tells you the code works, but it's quietly hiding the N+1 database disaster that is going to bring down your production environment next week.

For Python & SQLAlchemy developers: we spend hours writing tests to assert our application's final state, but we treat the database layer like a complete black box. We test what the application does, but completely ignore how it does it.

The Business Cost: this illusion of abstraction is expensive. Every inefficient query and silent lazy load that slips into the main branch directly inflates your cloud infrastructure bill, degrades the user experience, and burns engineering hours in urgent firefighting.

The Fix: we need to shift database performance testing to the left. That is why I built and open-sourced pytest-capquery.

This plugin treats SQL queries as first-class citizens in your Pytest suite. By intercepting the SQLAlchemy engine at the driver level (fully supporting async drivers like asyncpg), it enforces a strict, chronological timeline of your execution footprint.

Instead of just checking if a function returns True, pytest-capquery allows your CI to rigorously assert:
- Deterministic I/O: lock down the exact number of queries required (N+1 regressions instantly fail the build)
- Absolute execution order: ensure queries and transactions (BEGIN, COMMIT, ROLLBACK) fire exactly when expected
- Strict parameter normalization: validate the exact tuple bindings sent to the database

I invite the Python, backend, and systems architecture community to drop this into your test suites. Lock down your database performance, drastically increase your software resilience, and let's stop merging performance regressions.

Try to break it, test your limits, and let's discuss architecture and improvements in the comments or on GitHub!
👇 Check it out here:
🔗 Repository: https://lnkd.in/d9EJgd8V
📦 Install: pip install pytest-capquery

#Python #SoftwareEngineering #Backend #SystemsArchitecture #Pytest #SQLAlchemy #OpenSource #TechLeadership
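The post doesn't show pytest-capquery's API, but the underlying idea, capturing every statement an engine executes and asserting on count and order, can be hand-rolled with SQLAlchemy's event system. This is a generic sketch of the technique, not the plugin's interface:

```python
from contextlib import contextmanager

from sqlalchemy import create_engine, event, text

@contextmanager
def capture_queries(engine):
    """Record every SQL statement the engine executes, in order."""
    captured = []

    def on_execute(conn, cursor, statement, parameters, context, executemany):
        captured.append(statement)

    event.listen(engine, "before_cursor_execute", on_execute)
    try:
        yield captured
    finally:
        event.remove(engine, "before_cursor_execute", on_execute)

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"))

with capture_queries(engine) as queries:
    with engine.begin() as conn:
        conn.execute(text("INSERT INTO users (name) VALUES (:n)"), {"n": "ada"})
        conn.execute(text("SELECT name FROM users"))

# a test can now lock down the exact I/O footprint and its order
assert len(queries) == 2
assert queries[0].startswith("INSERT")
assert queries[1].startswith("SELECT")
```

If a refactor later introduces an extra lazy load, `len(queries)` changes and the assertion fails the build, which is exactly the N+1 guard the post describes.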
🚀 Just shipped my first open-source Python package: pg-advisor v0.1.1

A rule-based PostgreSQL advisor that connects to your database and tells you exactly what's wrong, with ready-to-run SQL fixes.

🔍 What it detects:
• Missing primary keys & indexes on foreign keys
• FLOAT used for money columns (precision errors waiting to happen)
• Duplicate & unused indexes
• Slow queries via pg_stat_statements
• Missing created_at / updated_at timestamps
• SELECT * usage, and much more

✨ What makes it different:
→ No AI. No magic. Fully deterministic rule engine.
→ Scans the live DB AND your model files (SQLAlchemy, Django ORM, plain SQL)
→ Generates a timestamped Markdown report automatically
→ Works in CI/CD pipelines out of the box

📦 Install in one line:
pip install pg-advisor

⚡ Run it:
pg-advisor analyze postgresql://user:pass@localhost/mydb

Would love feedback from the community 🙏
🔗 PyPI: https://lnkd.in/dNAn4Ptv

#Python #PostgreSQL #OpenSource #DevTools #Backend #Database #CLI
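To illustrate what a deterministic rule engine like this does under the hood, here is a toy rule pass over SQLAlchemy table metadata. It is my own sketch, not pg-advisor's implementation, and covers just two of the listed checks: FLOAT on money-like columns and missing primary keys:

```python
from sqlalchemy import Column, Float, Integer, MetaData, String, Table

MONEY_HINTS = ("price", "amount", "balance", "total", "cost")

def lint_tables(metadata):
    """Return human-readable findings for two simple rules."""
    findings = []
    for table in metadata.tables.values():
        # Rule 1: every table should have a primary key
        if len(table.primary_key.columns) == 0:
            findings.append(f"{table.name}: no primary key")
        # Rule 2: floating point is lossy for money; prefer NUMERIC
        for col in table.columns:
            if isinstance(col.type, Float) and any(
                h in col.name.lower() for h in MONEY_HINTS
            ):
                findings.append(
                    f"{table.name}.{col.name}: FLOAT on a money-like column; use NUMERIC"
                )
    return findings

metadata = MetaData()
Table("orders", metadata,
      Column("id", Integer, primary_key=True),
      Column("total_price", Float))    # rule 2 fires
Table("events", metadata,
      Column("payload", String))       # rule 1 fires: no primary key

report = lint_tables(metadata)
```

A real tool would add many more rules and also query live catalogs (pg_stat_statements, pg_indexes), but each rule is the same shape: a pure predicate over schema facts, which is what makes the output deterministic.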
An API endpoint was making 401 database queries. Every. Single. Request.

Day 15 of 30 -- SQLAlchemy ORM Internals
Phase 3 -- Backend and APIs

Nobody noticed for weeks. The dashboard just felt slow.

The fix: 4 lines of code. One selectinload. One joinedload. Down to 3 queries. 3.2 seconds to 31 milliseconds.

The culprit was the N+1 problem. When you fetch 50 orders and access order.customer in a loop, SQLAlchemy fires a separate SELECT for each customer. 50 orders = 51 queries. Add items and products and you get 401 queries per page load.

Today's topic covers:
- Identity Map and Unit of Work -- the two patterns that explain 90% of SQLAlchemy behavior
- The 4 session states -- Transient, Pending, Persistent, Detached -- and what each means
- The N+1 problem with a visual query trace -- 401 queries vs 3
- The 6 loading strategies -- lazy, joinedload, selectinload, subqueryload, raiseload, noload
- When to use joinedload vs selectinload -- the to-one vs to-many rule
- Full annotated syntax -- model definition, eager-loading chains, unit-of-work commit
- A real order dashboard -- 401 queries reduced to 3 with 4 lines of options()
- 5 mistakes, including DetachedInstanceError and joinedload on to-many
- 5 best practices, including lazy="raise" as the production default

Key insight: every ORM hides SQL from you. The good engineers know exactly what SQL their ORM is generating -- and why.

#Python #SQLAlchemy #ORM #BackendDevelopment #Database #100DaysOfCode #FastAPI #TechContent #BuildInPublic #TechIndia #SoftwareEngineering #LinkedInCreator #LearnPython #OpenToWork #DataEngineering #PythonTutorial
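The N+1 pattern and its selectinload fix can be reproduced end-to-end against in-memory SQLite (Customer/Order names are illustrative, not the dashboard's real models). An event listener counts SELECT statements so the difference is measurable rather than anecdotal:

```python
from sqlalchemy import (Column, ForeignKey, Integer, String,
                        create_engine, event)
from sqlalchemy.orm import (Session, declarative_base, relationship,
                            selectinload)

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customers"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    customer_id = Column(Integer, ForeignKey("customers.id"))
    customer = relationship("Customer")  # lazy="select" by default

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

select_count = 0

@event.listens_for(engine, "before_cursor_execute")
def count_selects(conn, cursor, statement, parameters, context, executemany):
    global select_count
    if statement.lstrip().upper().startswith("SELECT"):
        select_count += 1

with Session(engine) as s:
    for i in range(10):
        s.add(Order(customer=Customer(name=f"c{i}")))
    s.commit()

# N+1: 1 query for the orders + 10 lazy loads, one per customer
with Session(engine) as s:
    select_count = 0
    names = [o.customer.name for o in s.query(Order).all()]
    lazy_queries = select_count

# selectinload: 1 query for the orders + 1 IN-clause query for all customers
with Session(engine) as s:
    select_count = 0
    orders = s.query(Order).options(selectinload(Order.customer)).all()
    names = [o.customer.name for o in orders]
    eager_queries = select_count
```

With 10 orders the lazy path issues 11 SELECTs and the eager path 2, which scales to exactly the 401-vs-3 gap the post describes once items and products are chained in.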
🚀 pytest-capquery 0.3 is live!

This release was heavily focused on the developer experience (DX). We've officially introduced automated SQL snapshot testing, heavily inspired by the Jest framework.

Instead of manually hardcoding and maintaining massive SQL strings in your Python tests, you can now seamlessly generate and validate physical .sql execution baselines with zero friction.

To dive deeper into the "why," I've just published a new article breaking down the reality of database performance in production. The post covers:
- 🚨 A painfully familiar SRE late-night "novel"
- 🏢 The cultural divide between developers and DBAs
- 🛡️ Common architectural pitfalls (like the Python GC trap and the JOIN illusion)
- 💡 How pytest-capquery bridges the gap, complete with a Getting Started guide

You can read the full breakdown here: https://lnkd.in/dJzBQ8nV

If you care about engineering excellence, catching N+1 regressions in CI, and building robust backend systems, I invite you to check out the repository! Follow the project, drop a star, or open a PR. Together we can do more! 🤝

🔗 https://lnkd.in/d9EJgd8V

#Python #SQLAlchemy #Pytest #SRE #EngineeringExcellence #OpenSource #DatabasePerformance
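SQL snapshot testing follows the same write-once, compare-forever loop Jest popularized: the first run records a baseline file, every later run fails on drift. A minimal hand-rolled version of the concept (again, my sketch of the idea, not pytest-capquery's actual API):

```python
import os
import tempfile
from pathlib import Path

def assert_matches_snapshot(sql_statements, snapshot_path):
    """First run writes a .sql baseline; later runs fail on any drift."""
    current = "\n".join(s.strip() for s in sql_statements) + "\n"
    path = Path(snapshot_path)
    if not path.exists():
        path.write_text(current)      # record the baseline
        return "created"
    if path.read_text() != current:   # a query changed, appeared, or vanished
        raise AssertionError(f"SQL snapshot mismatch for {path.name}")
    return "matched"

queries = ["SELECT id, name FROM users WHERE active = ?",
           "SELECT * FROM orders WHERE user_id = ?"]

snap = os.path.join(tempfile.mkdtemp(), "list_users.sql")
first = assert_matches_snapshot(queries, snap)    # writes the baseline
second = assert_matches_snapshot(queries, snap)   # same SQL, passes
```

When a snapshot legitimately changes, the workflow is the Jest one: review the .sql diff in code review, then delete or regenerate the baseline.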
Day 2/60: Production Infrastructure That Actually Scales

What most developers do: start with SQLite. Hardcode credentials. Skip migrations. Write blocking database calls. Wonder why it breaks at 10K users.

What I built today:
✅ Async SQLAlchemy 2.0 with connection pooling
✅ Docker Compose (PostgreSQL + Redis + backend)
✅ Alembic migration system with rollback
✅ Database health checks and monitoring
✅ Multi-stage Docker builds (40% smaller images)
✅ Development scripts (init, validate, wait-for-db)
✅ 31 tests, 100% coverage on the database layer

Technical decisions:
- Async everything: non-blocking I/O handles 100 concurrent users on a single thread
- Connection pooling: QueuePool (5 + 10 overflow) for PostgreSQL, NullPool for SQLite
- Health checks: pg_isready with retry logic; services wait for their dependencies
- Type safety: mypy --strict passes; Mapped[T] catches bugs at type-check time

Architecture highlight: a DatabaseManager singleton manages the engine lifecycle. Session context managers handle transactions. Automatic rollback on errors. Zero connection leaks.

Why it matters: technical debt is a choice. Building for 10K users from day one means adding workers when growth comes, not rewriting the database layer.

What's working:
```
docker-compose up -d → All services healthy
pytest → 31/31 tests passing
Database connection → ✅ Validated
```

Metrics:
- 11 new files
- 1,800 lines of production code
- 600 lines of documentation (DATABASE.md)
- 100% test coverage on new code
- 0 linting errors

Day 3 tomorrow: database models (User, Organization, Channel, Post). First Alembic migration. Schema design for ML features.

Buffer - building a solid foundation for your API ecosystem. Would love to connect.

Repository: https://lnkd.in/g8pdgJvM
Medium blog: https://lnkd.in/gRrs6WaR

#BufferIQ #BuildingInPublic #DatabaseEngineering #Docker #Python #PostgreSQL #SQLAlchemy #SoftwareArchitecture #Buffer
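The "session context manager with automatic rollback" idea above can be sketched like this. This is a simplified synchronous version on SQLite; the post's async variant would use create_async_engine, async_sessionmaker, and `async with` instead. The DatabaseManager name mirrors the post, but the internals are my assumption:

```python
from contextlib import contextmanager

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    email = Column(String, unique=True, nullable=False)

class DatabaseManager:
    """Owns the engine and hands out transactional sessions."""

    def __init__(self, url="sqlite://"):
        self.engine = create_engine(url, pool_pre_ping=True)
        self._factory = sessionmaker(bind=self.engine)
        Base.metadata.create_all(self.engine)

    @contextmanager
    def session(self):
        s = self._factory()
        try:
            yield s
            s.commit()          # commit only if the block succeeded
        except Exception:
            s.rollback()        # any error rolls the transaction back
            raise
        finally:
            s.close()           # always return the connection to the pool

db = DatabaseManager()

with db.session() as s:
    s.add(User(email="a@example.com"))

try:
    with db.session() as s:
        s.add(User(email="b@example.com"))
        raise RuntimeError("boom")   # simulated failure mid-transaction
except RuntimeError:
    pass

with db.session() as s:
    count = s.query(User).count()    # only the committed row survives
```

The `finally: s.close()` is what delivers "zero connection leaks": every code path, success or failure, returns the connection to the pool.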
Using LINQ helps you replace loops and conditional logic with concise, readable, declarative expressions in your code. LINQ can now also query Elasticsearch easily.
🚀 Built Lightweight Async ORMs for FastAPI (Inspired by LoopBack)

While working on FastAPI projects, I got an idea based on my previous experience with LoopBack — what if we could have a simpler ORM with:
- minimal boilerplate
- built-in relation loading
- straightforward query syntax

So I built two async ORMs:
- oceanic-mysql-orm — built on aiomysql
- oceanic-postgres-orm — built on asyncpg

💡 Key Features
- Async-first (no session management)
- Automatic relation loading (no N+1 issues)
- Auto-migrate (additive only — never drops columns)
- Simple dict-based query system
- SQL echo mode for debugging

⚡ Example
```
users = await connector.find(User, {
    "where": {"status": "active"},
    "include": ["posts"],
    "limit": 20
})
```

🔥 PostgreSQL Extras
- Soft deletes (deleted_at handled automatically)
- Raw SQL support when needed
- Nested includes (orders.items.product)
- Advanced filters (ilike, regexp, between)

⚠️ Scope
This is intentionally designed for simplicity:
- No complex JOIN builder
- No multi-database abstraction

SQLAlchemy is still a great choice for large, complex systems. This is aimed at the 80% use case where you want to build and ship quickly.

📦 Installation
```
pip install oceanic-mysql-orm
pip install oceanic-postgres-orm
```

🚧 Status
Both packages are v1 (early stage). They're functional, but I'd really value feedback from developers working with FastAPI.

🔗 Full Guide
Complete usage guide here: https://lnkd.in/dj5eY4aN
🚀 𝗤𝘂𝗶𝘇 𝗔𝗽𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗕𝗮𝗰𝗸𝗲𝗻𝗱 𝗔𝗣𝗜 – 𝗕𝘂𝗶𝗹𝘁 𝘄𝗶𝘁𝗵 𝗙𝗮𝘀𝘁𝗔𝗣𝗜

I recently built a backend system for a Quiz Application using modern Python backend technologies.

🔧 𝗧𝗲𝗰𝗵 𝗦𝘁𝗮𝗰𝗸:
• FastAPI (high-performance API framework)
• SQLAlchemy (ORM for database management)
• PostgreSQL (relational database)
• Pydantic (data validation & schema handling)

📌 𝗞𝗲𝘆 𝗙𝗲𝗮𝘁𝘂𝗿𝗲𝘀:
• RESTful API endpoints for questions and choices
• One-to-many relationship between Questions and Choices
• Secure database session handling with dependency injection
• Proper request validation using Pydantic models
• Clean and scalable backend architecture

🔗 𝗔𝗣𝗜 𝗘𝗻𝗱𝗽𝗼𝗶𝗻𝘁𝘀:
• GET /questions/{question_id} → Fetch a specific question
• GET /choices/{question_id} → Fetch all choices for a question
• POST /questions → Create a question with multiple choices

🧠 𝗪𝗵𝗮𝘁 𝗜 𝗟𝗲𝗮𝗿𝗻𝗲𝗱:
• How FastAPI handles async backend development efficiently
• Working with SQLAlchemy ORM for relational data modeling
• Designing clean backend architecture with separation of concerns
• Implementing database relationships and migration logic

💻 𝗚𝗶𝘁𝗛𝘂𝗯 𝗥𝗲𝗽𝗼𝘀𝗶𝘁𝗼𝗿𝘆:
👉 https://lnkd.in/dHJczetV

This project helped me strengthen my understanding of backend development, API design, and database integration.

#FastAPI #Python #BackendDevelopment #APIs #SQLAlchemy #PostgreSQL #SoftwareEngineering #LearningByBuilding
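The "create a question with multiple choices" request validation described above might look like this in Pydantic. Schema and field names are my guesses, not the repository's actual code:

```python
from pydantic import BaseModel

class ChoiceCreate(BaseModel):
    choice_text: str
    is_correct: bool = False

class QuestionCreate(BaseModel):
    # POST /questions accepts a question plus its nested choices in one payload;
    # FastAPI validates the whole tree before the endpoint body runs
    question_text: str
    choices: list[ChoiceCreate]

payload = {
    "question_text": "Which ORM does FastAPI commonly pair with?",
    "choices": [
        {"choice_text": "SQLAlchemy", "is_correct": True},
        {"choice_text": "Hibernate"},      # is_correct defaults to False
    ],
}

# nested dicts are coerced into ChoiceCreate instances automatically
question = QuestionCreate(**payload)
```

In the endpoint, each validated ChoiceCreate would then become a Choice ORM row with a foreign key back to its Question, which is the one-to-many relationship the post describes.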