🚀 𝗤𝘂𝗶𝘇 𝗔𝗽𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗕𝗮𝗰𝗸𝗲𝗻𝗱 𝗔𝗣𝗜 – 𝗕𝘂𝗶𝗹𝘁 𝘄𝗶𝘁𝗵 𝗙𝗮𝘀𝘁𝗔𝗣𝗜

I recently built a backend system for a Quiz Application using modern Python backend technologies.

🔧 𝗧𝗲𝗰𝗵 𝗦𝘁𝗮𝗰𝗸:
• FastAPI (high-performance API framework)
• SQLAlchemy (ORM for database management)
• PostgreSQL (relational database)
• Pydantic (data validation & schema handling)

📌 𝗞𝗲𝘆 𝗙𝗲𝗮𝘁𝘂𝗿𝗲𝘀:
• RESTful API endpoints for questions and choices
• One-to-many relationship between Questions and Choices
• Secure database session handling with dependency injection
• Request validation using Pydantic models
• Clean and scalable backend architecture

🔗 𝗔𝗣𝗜 𝗘𝗻𝗱𝗽𝗼𝗶𝗻𝘁𝘀:
• GET /questions/{question_id} → Fetch a specific question
• GET /choices/{question_id} → Fetch all choices for a question
• POST /questions → Create a question with multiple choices

🧠 𝗪𝗵𝗮𝘁 𝗜 𝗟𝗲𝗮𝗿𝗻𝗲𝗱:
• How FastAPI handles async backend development efficiently
• Working with SQLAlchemy ORM for relational data modeling
• Designing clean backend architecture with separation of concerns
• Implementing database relationships and migration logic

💻 𝗚𝗶𝘁𝗛𝘂𝗯 𝗥𝗲𝗽𝗼𝘀𝗶𝘁𝗼𝗿𝘆:
👉 https://lnkd.in/dHJczetV

This project helped me strengthen my understanding of backend development, API design, and database integration.

#FastAPI #Python #BackendDevelopment #APIs #SQLAlchemy #PostgreSQL #SoftwareEngineering #LearningByBuilding
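To make the structure concrete, here is a minimal sketch of how the Question/Choice one-to-many models, the dependency-injected session, and the POST /questions endpoint described above could look. The field names, schemas, and connection string are illustrative assumptions, not the repository's actual code.

```python
# Minimal sketch (assumed names and fields; not the actual repository code).
from fastapi import Depends, FastAPI, HTTPException
from pydantic import BaseModel
from sqlalchemy import Boolean, Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship, sessionmaker

engine = create_engine("postgresql://user:pass@localhost/quizdb")  # assumed DSN
SessionLocal = sessionmaker(bind=engine)
Base = declarative_base()

class Question(Base):
    __tablename__ = "questions"
    id = Column(Integer, primary_key=True, index=True)
    question_text = Column(String, index=True)
    # One-to-many: a question owns several choices
    choices = relationship("Choice", back_populates="question")

class Choice(Base):
    __tablename__ = "choices"
    id = Column(Integer, primary_key=True, index=True)
    choice_text = Column(String)
    is_correct = Column(Boolean, default=False)
    question_id = Column(Integer, ForeignKey("questions.id"))
    question = relationship("Question", back_populates="choices")

class ChoiceIn(BaseModel):       # Pydantic request schemas
    choice_text: str
    is_correct: bool = False

class QuestionIn(BaseModel):
    question_text: str
    choices: list[ChoiceIn]

app = FastAPI()

def get_db():                    # dependency-injected database session
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

@app.post("/questions")
def create_question(payload: QuestionIn, db: Session = Depends(get_db)):
    q = Question(question_text=payload.question_text)
    db.add(q)
    db.flush()                   # obtain q.id before inserting choices
    for c in payload.choices:
        db.add(Choice(choice_text=c.choice_text,
                      is_correct=c.is_correct,
                      question_id=q.id))
    db.commit()
    return {"id": q.id}

@app.get("/questions/{question_id}")
def read_question(question_id: int, db: Session = Depends(get_db)):
    q = db.get(Question, question_id)
    if q is None:
        raise HTTPException(status_code=404, detail="Question not found")
    return {"id": q.id, "question_text": q.question_text}
```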
FastAPI Backend Development with PostgreSQL and SQLAlchemy
𝗙𝗹𝗮𝘀𝗸 𝗗𝗮𝘁𝗮𝗯𝗮𝘀𝗲𝘀 𝗮𝗻𝗱 𝗢𝗥𝗠 — Day 82 of 100 Days of Code.

I learned how to add a database to Flask. Django has a built-in system; Flask does not, so you choose your own tools.

I used two main packages:
- Flask-SQLAlchemy: the ORM.
- Flask-Migrate: handles database schema updates.

How to build models:
- Create a class.
- Make it inherit from db.Model.
- Add columns like db.Integer or db.String.

Relationships work like this:
- One-to-Many: use a ForeignKey and db.relationship.
- Many-to-Many: use an association table.
- One-to-One: use uselist=False.

The migration process:
- flask db init: set up the migrations folder.
- flask db migrate: create the update script.
- flask db upgrade: apply changes to the database.

Working with data:
- Use db.session.add() to stage changes.
- Use db.session.commit() to save them.
- Use .query.all() to get all records.
- Use .query.get_or_404() to find one record or show an error.

Security tips:
- Use werkzeug for password hashing.
- Store secrets in a .env file.
- Use python-dotenv to load those secrets.

Flask requires more manual work than Django, but the logic is the same.

Source: https://lnkd.in/g7XubX6X
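A minimal sketch tying the pieces above together (models, a one-to-many relationship, migrations, and werkzeug password hashing); the User/Post names and fields are illustrative assumptions, not the course project's code.

```python
# Illustrative sketch only; model names and fields are assumptions.
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate
from werkzeug.security import generate_password_hash, check_password_hash

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///app.db"  # swap for your DB URL

db = SQLAlchemy(app)
migrate = Migrate(app, db)  # enables: flask db init / migrate / upgrade

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    email = db.Column(db.String(120), unique=True, nullable=False)
    password_hash = db.Column(db.String(256), nullable=False)
    # One-to-Many: a user has many posts
    posts = db.relationship("Post", backref="author", lazy=True)

    def set_password(self, raw: str) -> None:
        self.password_hash = generate_password_hash(raw)

    def check_password(self, raw: str) -> bool:
        return check_password_hash(self.password_hash, raw)

class Post(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(200), nullable=False)
    user_id = db.Column(db.Integer, db.ForeignKey("user.id"), nullable=False)

# Typical session usage:
#   db.session.add(user); db.session.commit()
#   User.query.all(); Post.query.get_or_404(post_id)
```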
Built a personal project called 𝗥𝗲𝗲𝗹𝗩𝗮𝘂𝗹𝘁 over the past few weeks and wanted to share what went into it.

𝗧𝗵𝗲 𝗽𝗿𝗼𝗯𝗹𝗲𝗺 𝗜 𝘄𝗮𝘀 𝘀𝗼𝗹𝘃𝗶𝗻𝗴: I watch a lot of content on Instagram and YouTube about AI tools, open source models, and dev resources. I kept losing track of things I wanted to revisit. 𝗖𝗼𝗺𝗺𝗲𝗻𝘁𝘀 𝗮𝗻𝗱 𝗯𝗼𝗼𝗸𝗺𝗮𝗿𝗸𝘀 𝗱𝗼 𝗻𝗼𝘁 𝗰𝘂𝘁 𝗶𝘁.

So I built a full-stack application where I can save any link, reel, or note and 𝘀𝗲𝗮𝗿𝗰𝗵 𝗶𝘁 𝗹𝗮𝘁𝗲𝗿 𝘂𝘀𝗶𝗻𝗴 𝗻𝗮𝘁𝘂𝗿𝗮𝗹 𝗹𝗮𝗻𝗴𝘂𝗮𝗴𝗲. Not keyword search. Meaning-based search.

Tech used:
• Backend — 𝗝𝗮𝘃𝗮 𝟮𝟭 with Spring Boot 3.2, Spring Data JPA, REST APIs
• Database — 𝗣𝗼𝘀𝘁𝗴𝗿𝗲𝗦𝗤𝗟 with the 𝗽𝗴𝘃𝗲𝗰𝘁𝗼𝗿 extension on Supabase
• Embeddings — 𝗛𝘂𝗴𝗴𝗶𝗻𝗴 𝗙𝗮𝗰𝗲 Inference API using sentence-transformers/all-MiniLM-L6-v2 to convert 𝘁𝗲𝘅𝘁 𝗶𝗻𝘁𝗼 𝟯𝟴𝟰-𝗱𝗶𝗺𝗲𝗻𝘀𝗶𝗼𝗻𝗮𝗹 𝘃𝗲𝗰𝘁𝗼𝗿𝘀
• Search — cosine similarity search using 𝗽𝗴𝘃𝗲𝗰𝘁𝗼𝗿'𝘀 𝗶𝘃𝗳𝗳𝗹𝗮𝘁 𝗶𝗻𝗱𝗲𝘅
• 𝗧𝗲𝗹𝗲𝗴𝗿𝗮𝗺 𝗕𝗼𝘁 — built into the Spring Boot service; lets me send a URL and get it saved automatically with metadata extracted via Jsoup
• Frontend — vanilla HTML, CSS, JS 𝗱𝗲𝗽𝗹𝗼𝘆𝗲𝗱 𝗼𝗻 𝗩𝗲𝗿𝗰𝗲𝗹
• Deployment — 𝗗𝗼𝗰𝗸𝗲𝗿𝗶𝘇𝗲𝗱 𝗦𝗽𝗿𝗶𝗻𝗴 𝗕𝗼𝗼𝘁 𝗮𝗽𝗽 𝗼𝗻 𝗥𝗲𝗻𝗱𝗲𝗿

What I learned from actually shipping it:
• Hugging Face free tier uses a different endpoint than documented. Had to debug a 404 mid-production.
• Render is IPv4 only, so Supabase Direct Connection does not work. Transaction Pooler with stringtype=unspecified in the JDBC URL is the fix.
• pgvector requires data to exist before the ivfflat index is useful.

This project gave me hands-on experience with vector embeddings, semantic search, RAG-adjacent architecture, and end-to-end deployment on free-tier infrastructure.

𝗚𝗶𝘁𝗛𝘂𝗯 𝗹𝗶𝗻𝗸 𝗶𝗻 𝗰𝗼𝗺𝗺𝗲𝗻𝘁𝘀.

#Java #SpringBoot #SemanticSearch #VectorDatabase #pgvector #HuggingFace #BackendDevelopment #FullStackDevelopment #RAG #GenerativeAI #AIEngineering #PostgreSQL #Docker #SoftwareEngineering #OpenSource
🚀 Built Lightweight Async ORMs for FastAPI (Inspired by LoopBack)

While working on FastAPI projects, I got an idea based on my previous experience with LoopBack — what if we could have a simpler ORM with:
- minimal boilerplate
- built-in relation loading
- straightforward query syntax

So I built two async ORMs:
- oceanic-mysql-orm — built on aiomysql
- oceanic-postgres-orm — built on asyncpg

💡 Key Features
- Async-first (no session management)
- Automatic relation loading (no N+1 issues)
- Auto-migrate (additive only — never drops columns)
- Simple dict-based query system
- SQL echo mode for debugging

⚡ Example

users = await connector.find(User, {
    "where": {"status": "active"},
    "include": ["posts"],
    "limit": 20
})

🔥 PostgreSQL Extras
- Soft deletes (deleted_at handled automatically)
- Raw SQL support when needed
- Nested includes (orders.items.product)
- Advanced filters (ilike, regexp, between)

⚠️ Scope
This is intentionally designed for simplicity:
- No complex JOIN builder
- No multi-database abstraction
SQLAlchemy is still a great choice for large, complex systems. This is aimed at the 80% use case where you want to build and ship quickly.

📦 Installation
pip install oceanic-mysql-orm
pip install oceanic-postgres-orm

🚧 Status
Both packages are v1 (early stage). They're functional, but I'd really value feedback from developers working with FastAPI.

🔗 Full Guide
Complete usage guide here: https://lnkd.in/dj5eY4aN
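For a sense of how the PostgreSQL extras might combine with the dict-based syntax shown above, here is a hypothetical query extrapolated from that example; the exact option names in oceanic-postgres-orm may differ, so treat this as an assumption and check the full guide.

```python
# Hypothetical sketch extrapolated from the dict-based query above; option
# names are assumptions, not verified oceanic-postgres-orm API.
orders = await connector.find(Order, {
    "where": {
        "total": {"between": [100, 500]},       # advanced filter (assumed syntax)
        "customer_name": {"ilike": "%smith%"},  # case-insensitive match (assumed syntax)
    },
    "include": ["items.product"],               # nested include, as described in the post
    "limit": 10,
})
```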
Part 1: Architecture & Real-World System Design

Modern backend systems don't break because of scale alone — they break due to complexity. In a recent redesign, the focus was on simplifying the handling of large, dynamic form data while improving performance, maintainability, and the developer experience.

📊 The shift:
🔹 From rigid column-based schema → flexible JSONB-based storage
🔹 From heavy raw SQL → clean ORM-driven queries
🔹 From scattered APIs → structured, minimal endpoints

⚙️ Architecture Improvements
✔️ Modular design using separate Django applications
✔️ Class-based views for reusable and maintainable logic
✔️ API structuring using Django Ninja Router
✔️ Reduced the number of APIs by consolidating responses
✔️ Strong alignment with frontend for payload and contract design

📦 Data Handling Strategy
Instead of creating hundreds of columns for dynamic forms:
→ Stored complete form responses as JSON objects
→ Handled 300–500+ fields without schema changes
→ Simplified debugging with structured payloads
→ Enabled faster iteration without production risks

🔄 Processing Flow
User Input → API Validation → Store JSON (status = 0) → Async Processing (Celery + Redis) → Update status = 1 → Dashboard reflects real-time updates

🚀 Outcome
✔️ Reduced schema complexity
✔️ Improved API performance
✔️ Avoided production issues caused by raw queries
✔️ Built a scalable and flexible backend system
✔️ Delivered smoother frontend-backend integration

Security handled via JWT-based authentication with proper token flow. Still evolving with improvements in performance, validation, and system design.

#BackendEngineering #Django #Python #SystemDesign #PostgreSQL #APIs #Celery #Redis #JWT
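A minimal sketch of the JSONB storage and status-driven processing flow described above, assuming a FormSubmission model and a process_submission Celery task; both names and fields are illustrative, not the actual codebase.

```python
# Illustrative sketch of the JSONB + status-driven flow; names are assumptions.

# models.py
from django.db import models

class FormSubmission(models.Model):
    # Entire dynamic form (300-500+ fields) stored as a single JSONB column in PostgreSQL
    payload = models.JSONField()
    status = models.IntegerField(default=0)   # 0 = pending, 1 = processed
    created_at = models.DateTimeField(auto_now_add=True)

# tasks.py
from celery import shared_task

@shared_task
def process_submission(submission_id: int) -> None:
    submission = FormSubmission.objects.get(pk=submission_id)
    # ... validate / transform / aggregate the JSON payload here ...
    submission.status = 1                     # dashboard picks up the processed state
    submission.save(update_fields=["status"])
```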
🚀 #PythonJourney | Day 151 — BREAKTHROUGH: API Fully Functional & First Successful Request

Today marks a major milestone: the URL Shortener API is LIVE and responding correctly! After 8 days of building and debugging, I finally got the first successful POST request working. This breakthrough moment proves that all the pieces fit together.

Key accomplishments:

✅ Fixed critical database type mismatch:
• PostgreSQL was storing user_id as VARCHAR
• SQLAlchemy was trying to query with UUID
• Solution: Dropped volumes, rebuilt schema from scratch

✅ Fixed Pydantic response validation:
• Model had clicks_total, database had total_clicks
• Docker image was caching old code
• Solution: Forced rebuild of container image

✅ First successful API call:
• POST /api/v1/urls now returns proper JSON
• Short code generated automatically
• URL stored in database correctly
• Full response validation passing

✅ Production-ready API endpoints confirmed:
• Authentication working (API key validation)
• Request validation (Pydantic models)
• Database operations (CRUD)
• Error handling (proper HTTP status codes)
• Response serialization (JSON output)

✅ Lessons learned about debugging:
• Always check the actual container logs
• Volume management is critical in Docker
• Type consistency across layers matters
• Docker caching can hide recent changes
• Patience and persistence beat quick fixes

What happened today:
→ Identified the root cause through careful log analysis
→ Understood the full request/response cycle
→ Learned when to reset vs. when to patch
→ Experienced the joy of a working API!

The API now successfully:
- Validates user authentication
- Creates shortened URLs with unique codes
- Stores data in PostgreSQL
- Returns properly formatted JSON responses
- Handles errors gracefully

This is what backend development is about: building reliable systems piece by piece, debugging methodically, and celebrating when it finally works.

Status update:
- ✅ Backend: FUNCTIONAL
- ✅ Database: WORKING
- ✅ API Endpoints: RESPONDING
- ✅ Authentication: VERIFIED
- ⏳ Full test suite: Next
- ⏳ Deployment: Next week

#Python #FastAPI #Backend #API #PostgreSQL #Docker #Debugging #SoftwareDevelopment #Victory #CodingJourney
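One way to avoid the VARCHAR-vs-UUID mismatch described above is to declare the column with SQLAlchemy's PostgreSQL UUID type so the model, the schema, and the database all agree. A minimal sketch, with assumed table and column names rather than the project's actual models:

```python
# Minimal sketch of keeping user_id a UUID end to end (assumed names).
import uuid
from sqlalchemy import Column, String
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class URL(Base):
    __tablename__ = "urls"
    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    # Stored as a native PostgreSQL uuid column, so queries made with Python
    # UUID objects match without string casting.
    user_id = Column(UUID(as_uuid=True), index=True, nullable=False)
    short_code = Column(String(12), unique=True, index=True)
    original_url = Column(String, nullable=False)
```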
# NeoSQLite is now 3x faster than MongoDB on the same hardware

But the real story isn't just the numbers—it's how we got here.

## From Python Fallbacks to SQL-Native: A 12-Month Journey

When we started building NeoSQLite, we took a "get it working first" approach. Complex aggregation operations like `$in`, `$nin`, `$elemMatch`, and `$project` were handled by Python fallbacks—meaning we'd fetch ALL documents from SQLite, then filter them in Python. It worked, but it was slow.

Then we started dogfooding with **Neo-Bloggy** (our blogging platform that runs entirely on NeoSQLite instead of MongoDB). Production usage revealed the pain points real users would face.

## The SQL-Tier Revolution (v1.14.x series)

Over the last 6 releases, we systematically moved operations from Python into native SQL:

- **v1.14.0** — Moved `$project` stage to SQL-tier (no more loading full documents just to project 2 fields)
- **v1.14.9-10** — Fixed `$elemMatch` and `$in`/`$nin` on array fields. Instead of returning 0 results or unfiltered documents, they now use proper SQL CTE patterns with `json_each()`
- **v1.14.11** — Added native regex operators (`$regexMatch`, `$regexFind`) directly in SQL tier using custom SQLite functions. Array operators got **10-100x speedup** with CTE patterns
- **v1.14.12** — Fixed the "malformed JSON" edge case (because even SQLite has its quirks with `json_each()` syntax!)

## The NX-27017 Milestone

In v1.13.0, we shipped something unexpected—a **MongoDB Wire Protocol Server** that lets PyMongo connect directly to SQLite. No code changes needed. This isn't just an API clone; it's full wire protocol compatibility with 100% test parity against real MongoDB.

## What This Means

- **3x faster** than MongoDB for typical operations
- **30-300x faster** for index operations (SQLite's B-trees are fast)
- **Zero network overhead** — embedded database, embedded performance
- **Drop-in replacement** — existing PyMongo code works unchanged

## The Lesson

Building a database isn't about getting the API right. It's about getting the execution model right. Every time we pushed logic from Python down to SQL, we got closer to SQLite's raw performance while maintaining MongoDB's developer experience.

The 3x number isn't theoretical—it's measured against a real MongoDB instance in our CI pipeline, running 54 different operation categories across 10 iterations each.

Want to try it?

```bash
pip install neosqlite
```

Or check out the discussion: https://lnkd.in/gAdPAeCc
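Since the post says existing PyMongo code works unchanged against the NX-27017 wire-protocol server, usage would presumably look like ordinary PyMongo. A sketch under that assumption; the server address, database, and collection names below are illustrative, not documented defaults.

```python
# Assumed usage: ordinary PyMongo pointed at a running NX-27017 server.
# The URI, database, and collection names are illustrative assumptions.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # NX-27017 endpoint (assumed address)
posts = client["neo_bloggy"]["posts"]

# Array-field operators of the kind the v1.14.x releases moved into the SQL tier:
posts.insert_one({"title": "Hello", "tags": ["sqlite", "mongodb"]})
cursor = posts.find({"tags": {"$in": ["sqlite", "python"]}})
match = posts.find_one({"tags": {"$elemMatch": {"$eq": "mongodb"}}})

for doc in cursor:
    print(doc["title"])
```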
🚀 pytest-capquery 0.3 is live!

This release was heavily focused on the Developer Experience (DX). We've officially introduced automated SQL snapshot testing, heavily inspired by the Jest framework. Instead of manually hardcoding and maintaining massive SQL strings in your Python tests, you can now seamlessly generate and validate physical .sql execution baselines with zero friction.

To dive deeper into the "why," I've just published a new article breaking down the reality of database performance in production. The post covers:
- 🚨 A painfully familiar SRE late-night "novel"
- 🏢 The cultural divide between Developers and DBAs
- 🛡️ Common architectural pitfalls (like the Python GC trap and the JOIN illusion)
- 💡 How pytest-capquery bridges the gap, complete with a Getting Started guide

You can read the full breakdown here: https://lnkd.in/dJzBQ8nV

If you care about engineering excellence, catching N+1 regressions in CI, and building robust backend systems, I invite you to check out the repository! Follow the project, drop a star, or open a PR. Together we can do more! 🤝

🔗 https://lnkd.in/d9EJgd8V

#Python #SQLAlchemy #Pytest #SRE #EngineeringExcellence #OpenSource #DatabasePerformance
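For context on the N+1 regressions mentioned above, here is the classic pattern that query-capture tooling aims to flag, written in plain SQLAlchemy. This is a generic illustration of the problem and its fix, not pytest-capquery's API; the Order/Item models are assumptions.

```python
# Generic illustration of an N+1 query pattern and its fix (not pytest-capquery's API).
from sqlalchemy import ForeignKey, select
from sqlalchemy.orm import (DeclarativeBase, Mapped, Session, mapped_column,
                            relationship, selectinload)

class Base(DeclarativeBase):
    pass

class Order(Base):
    __tablename__ = "orders"
    id: Mapped[int] = mapped_column(primary_key=True)
    items: Mapped[list["Item"]] = relationship(back_populates="order")  # lazy by default

class Item(Base):
    __tablename__ = "items"
    id: Mapped[int] = mapped_column(primary_key=True)
    order_id: Mapped[int] = mapped_column(ForeignKey("orders.id"))
    price: Mapped[float] = mapped_column()
    order: Mapped[Order] = relationship(back_populates="items")

def totals_n_plus_1(session: Session) -> dict[int, float]:
    orders = session.execute(select(Order)).scalars().all()          # 1 query
    return {o.id: sum(i.price for i in o.items) for o in orders}     # + N lazy queries

def totals_fixed(session: Session) -> dict[int, float]:
    stmt = select(Order).options(selectinload(Order.items))          # 2 queries total
    orders = session.execute(stmt).scalars().all()
    return {o.id: sum(i.price for i in o.items) for o in orders}
```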
# New learning and self-improvement

Modern backend systems often don't fail because of scale alone — they struggle due to complexity. In a recent architecture redesign, the focus was on simplifying how dynamic, large-scale form data is handled while improving performance, maintainability, and developer experience.

The shift (as shown in the diagram):
🔹 Moved from rigid column-based schema → flexible JSONB-based storage
🔹 Replaced heavy raw SQL usage with clean Python ORM-driven data access
🔹 Introduced structured payload handling with clear state management (status-driven flow)

⚙️ Backend Architecture Improvements
✔️ Adopted a modular design using Django applications for better separation of concerns
✔️ Implemented class-based views for cleaner, reusable API logic
✔️ Structured API routing using Django Ninja Router for better organization and scalability
✔️ Reduced the number of APIs by consolidating responses into optimized, meaningful endpoints
✔️ Designed APIs in collaboration with frontend to ensure smooth data flow and minimal overhead

📦 Data Handling Strategy
Instead of creating hundreds of columns for dynamic forms:
→ Stored complete form responses as JSON objects
→ Managed 300–500+ fields without schema changes
→ Simplified debugging with structured payload visibility
→ Enabled faster iteration without impacting production stability

Processing Flow
User Input → API Validation → Store JSON (status = 0) → Async Processing (Celery + Redis) → Update status = 1 → Dashboard Reflection

🚀 Outcome
✔️ Reduced system complexity significantly
✔️ Improved API performance and response clarity
✔️ Eliminated production risks caused by excessive raw queries
✔️ Created a scalable foundation for handling dynamic data
✔️ Delivered a smoother integration experience for frontend systems

Security handled using JWT-based authentication with proper token flow. The system continues to evolve with ongoing improvements in validation, background processing, and performance tuning.

#BackendEngineering #Django #Python #SystemDesign #PostgreSQL #APIs #Celery #Redis #JWT #SoftwareArchitecture
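A minimal sketch of the Django Ninja Router structuring mentioned above, showing how a consolidated submission endpoint could accept the full JSON payload and hand it to background processing; the schema names, path, model, and task are illustrative assumptions, not the actual system.

```python
# Illustrative sketch of Django Ninja routing for the JSON-payload flow; names are assumptions.
from ninja import NinjaAPI, Router, Schema

# from myapp.models import FormSubmission        # hypothetical model (JSONField payload, status)
# from myapp.tasks import process_submission     # hypothetical Celery task

api = NinjaAPI(title="Forms API")
forms_router = Router(tags=["forms"])

class SubmissionIn(Schema):
    form_id: int
    payload: dict          # the full dynamic form body, stored as JSONB

class SubmissionOut(Schema):
    id: int
    status: int            # 0 = queued, 1 = processed

@forms_router.post("/submissions", response=SubmissionOut)
def create_submission(request, data: SubmissionIn):
    # In the real flow: persist data.payload with status=0
    # (e.g. FormSubmission.objects.create(...)), then enqueue the Celery task
    # (process_submission.delay(...)). Returning a stub here for illustration.
    return SubmissionOut(id=1, status=0)

api.add_router("/forms", forms_router)
```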
🚀 Understanding Database Migrations in FastAPI (with Alembic)

When I first started working with FastAPI, one thing that felt missing compared to frameworks like Django was built-in database migrations. That's where Alembic comes in—and honestly, it's a game changer once you get the hang of it.

Instead of manually running SQL queries or risking data loss while updating schemas, Alembic helps you version-control your database changes (think Git, but for your DB).

💡 Here's a quick glimpse of how it works in a real setup:

# models.py
from sqlalchemy import Column, Integer, String
from app.database import Base

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

Once your models are ready, you can generate a migration like this:

alembic revision --autogenerate -m "create users table"

And apply it with:

alembic upgrade head

That's it - your database is now in sync without manually touching SQL.

🔍 What I like most about Alembic:
- Keeps track of schema versions
- Supports safe upgrades & rollbacks
- Works seamlessly with SQLAlchemy
- Makes team collaboration much easier

⚡ One key learning: FastAPI gives you flexibility, but with that comes responsibility—you choose your tools. Alembic fills that gap beautifully for database versioning.

If you're building production-grade apps with FastAPI and not using migrations yet, you're definitely missing out.

Curious - what's your go-to migration tool in your stack? 👇

#FastAPI #Alembic #Python #BackendDevelopment #SoftwareEngineering #WebDevelopment #API #SQLAlchemy #Database #DatabaseMigrations #TechLearning #Developers #Coding #100DaysOfCode #DevCommunity #LearnInPublic
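Worth adding, since the post mentions safe rollbacks: the standard companion Alembic commands for inspecting and undoing migrations (assuming a normally configured env.py):

alembic current → show the revision the database is currently at
alembic history → list all migration revisions
alembic downgrade -1 → roll back the most recent migration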
#PythonJourney | Day 147 — SQLAlchemy Models & API Endpoints Implementation

Today was all about connecting the database layer with the API. This is where FastAPI meets SQLAlchemy and everything starts working together.

Key accomplishments:

✅ Created comprehensive SQLAlchemy models:
• User model (authentication & API keys)
• URL model (main shortening logic)
• Click model (event tracking)
• ClickAggregate model (analytics summaries)
• AuditLog model (compliance & debugging)

✅ Fixed PostgreSQL-specific imports:
• JSONB type for flexible data storage
• Proper index configuration
• Relationship definitions with cascading deletes

✅ Implemented 8 API endpoints:
• POST /api/v1/urls (create shortened URL)
• GET /{short_code} (redirect to original)
• GET /api/v1/urls (list user's URLs)
• GET /api/v1/urls/{url_id} (get URL details)
• GET /api/v1/urls/{url_id}/analytics (get analytics)
• DELETE /api/v1/urls/{url_id} (delete URL)
• GET /health (health check)

✅ Integrated database operations:
• User authentication via API key
• Permission checks (users can only see their own URLs)
• Click tracking with geolocation & device detection
• Soft deletes for data integrity
• Audit logging for compliance

✅ Created test user creation script

What I learned:
→ SQLAlchemy relationships make database operations elegant
→ Proper indexing strategy is crucial for performance
→ Cascade deletes prevent orphaned data
→ API key authentication is simpler than JWT for this use case
→ JSONB allows storing flexible analytics data in PostgreSQL

The API is now fully functional with a real database. Next: write comprehensive tests and handle edge cases.

#Python #FastAPI #SQLAlchemy #PostgreSQL #Backend #API #DatabaseDesign #SoftwareDevelopment
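A minimal sketch of the cascade-delete relationship, soft-delete marker, and JSONB analytics column described above; field names are assumptions, not the project's actual models.

```python
# Minimal sketch of cascade deletes + soft delete + JSONB analytics storage (assumed fields).
from datetime import datetime
from sqlalchemy import Column, DateTime, ForeignKey, Integer, String
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class URL(Base):
    __tablename__ = "urls"
    id = Column(Integer, primary_key=True)
    short_code = Column(String(12), unique=True, index=True, nullable=False)
    original_url = Column(String, nullable=False)
    deleted_at = Column(DateTime, nullable=True)          # soft-delete marker
    # Deleting a URL also deletes its click events, so no orphaned rows remain.
    clicks = relationship("Click", back_populates="url",
                          cascade="all, delete-orphan")

class Click(Base):
    __tablename__ = "clicks"
    id = Column(Integer, primary_key=True)
    url_id = Column(Integer, ForeignKey("urls.id"), index=True, nullable=False)
    created_at = Column(DateTime, default=datetime.utcnow)
    # Flexible per-event analytics (geo, device, referrer) stored as JSONB.
    meta = Column(JSONB, default=dict)
    url = relationship("URL", back_populates="clicks")
```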