From APIs to Databases — My FastAPI Learning Journey

As I continue my journey of mastering FastAPI, I’ve reached an exciting milestone: connecting my API to a database and working with real data using SQL queries. Until now, building endpoints and handling requests felt powerful — but integrating a database takes things to a whole new level. It transforms APIs from static responses into dynamic, data-driven systems.

🔍 What I’ve learned so far:
- Connecting Python applications to a relational database
- Writing SQL queries to retrieve and create posts
- Structuring backend logic for clean and scalable APIs
- Understanding how data flows between client → API → database

💡 One thing that stood out:
“Writing SQL inside a FastAPI project gives you full control over your data — something every backend developer must master.”

Here’s a simple example of how I’m retrieving and creating posts:

```python
from fastapi import FastAPI
import psycopg2

app = FastAPI()

# Single connection opened at startup (fine for learning; production apps
# should use a connection pool and per-request cursors).
conn = psycopg2.connect(
    host="localhost",
    database="fastapi_db",
    user="postgres",
    password="password"
)
cursor = conn.cursor()

@app.get("/posts")
def get_posts():
    cursor.execute("SELECT * FROM posts;")
    posts = cursor.fetchall()
    return {"data": posts}

@app.post("/createpost")
def create_post(title: str, content: str):
    # Parameterized query — values are passed separately, never interpolated.
    cursor.execute(
        "INSERT INTO posts (title, content) VALUES (%s, %s) RETURNING *;",
        (title, content)
    )
    new_post = cursor.fetchone()
    conn.commit()
    return {"data": new_post}
```

⚙️ This is just the beginning — next, I’m aiming to explore:
- ORM tools like SQLAlchemy / SQLModel
- Database migrations
- Optimizing queries and performance

Consistency and depth are key. Instead of jumping between technologies, I’m focusing on going deep into backend development with FastAPI.

#FastAPI #BackendDevelopment #Python #SQL #APIs #LearningJourney #SoftwareDevelopment
TerSQL v0.0.2 (beta) is live — and this is where things start getting serious. 🚀

What began as a better MySQL terminal is now evolving into something bigger:
👉 A SQL interface built for *humans*, not just developers.

🧠 The problem hasn’t changed: databases are powerful — but interacting with them is still painful.
• Beginners struggle with syntax
• Developers waste time debugging queries
• One mistake can still break things

⚡ What’s new in v0.0.2 (beta)
This update focuses on **making databases more intuitive, not just more powerful**:

✨ Natural-language style queries
→ Type: *“show top 5 users”*
→ TerSQL auto-corrects it to real SQL

🧩 Modular architecture
→ Clean pipeline: NLP → Core → Plugin Router → DB
→ Designed for extensibility across multiple databases

🌐 Multi-database support
→ MySQL · PostgreSQL · MongoDB

🛡️ Improved safety layer
→ Query validation + guardrails before execution

🎯 Interactive demo + full landing page
→ Visualise how queries transform and execute

🧠 What makes TerSQL different?
This is NOT another database, and NOT another GUI client. It’s an **interaction layer** on top of your existing database. No migration. No complexity. Just a better way to work with data.

🔮 Where this is going
TerSQL is moving toward:
→ AI-assisted query generation
→ Human-readable query explanation
→ Smarter error correction
→ A unified experience for developers and beginners

💡 Why I’m building this
I don’t think databases should feel intimidating. If you can *think it*, you should be able to *query it*.

🌐 Try it out
Live: https://lnkd.in/gxbpNz5j
GitHub: https://lnkd.in/g2x5sSTp
If you find it interesting, a ⭐ would mean a lot.

💬 I’d love your thoughts: would you actually use natural language for querying databases, or do you still prefer raw SQL?

#opensource #ai #sql #python #developerexperience #devtools #databases #buildinpublic #systemdesign #machinelearning #backend #programming #techinnovation
I built a Text-to-SQL RAG system from scratch, and it genuinely surprised me how much the retrieval step matters.

The idea: type a plain-English question, get back the right SQL query and the actual results. No schema memorisation, no manual query writing.

Here's how it works under the hood:

→ Schema indexing (offline)
I extract every table, column, data type, foreign key, and sample row from MySQL's INFORMATION_SCHEMA. Each table becomes a rich text document that gets embedded and stored in ChromaDB.

→ Query time (online)
When you ask a question, it gets embedded with the same model, and cosine similarity retrieves the most relevant tables. Those schema docs go into a structured prompt alongside the question, and GPT-4o generates the SQL at temperature=0 (deterministic — crucial for SQL).

→ Two safety layers
A keyword blocklist catches dangerous operations (DROP, DELETE, etc.) before execution. A read-only MySQL user enforces it at the database level — so even a prompt injection can't cause damage.

Stack: Python · OpenAI GPT-4o · ChromaDB · MySQL · text-embedding-3-small

Key insight I didn't expect: the quality of your schema document matters more than the LLM. A table description with column types + foreign keys + 3 sample rows retrieves dramatically better than just a list of column names.

Full code on GitHub (link in comments). Happy to answer questions about the design.

#MachineLearning #Python #SQL #RAG #LLM #DataEngineering #OpenAI #PortfolioProject
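The first of the two safety layers — a keyword blocklist applied before execution — can be sketched in a few lines of Python. This is a simplified illustration, not the post's actual code; the function name and the blocklist contents are assumptions:

```python
import re

# Assumed blocklist: statements that must never reach the database.
BLOCKED_KEYWORDS = {"drop", "delete", "update", "insert", "alter", "truncate", "grant"}

def is_safe_query(sql: str) -> bool:
    """Reject any query containing a blocked keyword as a whole word."""
    tokens = set(re.findall(r"[a-z_]+", sql.lower()))
    return not (tokens & BLOCKED_KEYWORDS)

print(is_safe_query("SELECT name FROM users LIMIT 10"))  # True
print(is_safe_query("DROP TABLE users;"))                # False
```

Tokenizing rather than substring-matching avoids false positives (e.g. a column literally named `updated_at` would not trip a whole-word check) — but, as the post notes, the real enforcement is the read-only database user underneath.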
Do you know the difference between a static default and a dynamic callable in your ORM? It’s a small distinction in code that makes a massive difference in your database. 🚀

📍 Static Defaults
These are defined once when the model is initialized. Every new record gets the exact same value.
Use case: setting a starting status (e.g., status='draft') or a counter starting at 0.

📍 Dynamic Defaults (Callables)
These are calculated at the moment the record is created. By passing a function (like a lambda or a method), the ORM executes that logic for every single insert.
Use case: timestamps (datetime.now), UUIDs, or record-specific tokens.

⚠️ The Common Trap
One of the most frequent bugs is passing default=datetime.now() (with parentheses) instead of default=datetime.now.
With (): the function runs once, when the model module is loaded. Every record will have the same timestamp until you restart the service!
Without (): the ORM calls the function fresh for every new entry.

Check out the infographic below for a side-by-side comparison using SQLAlchemy examples!

#Python #ORM #SQLAlchemy #Odoo #OdooDevelopment #BackendDevelopment #CleanCode #SoftwareEngineering #DatabaseDesign #ProgrammingTips #WebDevelopment #BackendEngineering #PythonDev #CodingBestPractices #ERP #FullStackDeveloper
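The trap can be demonstrated without any ORM at all. In this minimal sketch, the `Column` class is a stand-in that mimics how an ORM resolves defaults on insert — it is not SQLAlchemy's real API:

```python
import time
from datetime import datetime

class Column:
    """Toy stand-in for an ORM column: default may be a plain value or a callable."""
    def __init__(self, default=None):
        self.default = default

    def resolve(self):
        # Mimic what an ORM does on INSERT: call callables, use values as-is.
        return self.default() if callable(self.default) else self.default

# The trap: datetime.now() is evaluated once, right here at definition time.
created_frozen = Column(default=datetime.now())
# Correct: pass the function itself; the ORM calls it on every insert.
created_fresh = Column(default=datetime.now)

a = created_frozen.resolve()
time.sleep(0.01)
b = created_frozen.resolve()
print(a == b)  # True — every "record" gets the same stale timestamp

c = created_fresh.resolve()
time.sleep(0.01)
d = created_fresh.resolve()
print(d > c)   # True — each "record" gets a fresh timestamp
```

The same reasoning applies to SQLAlchemy's `default=` argument and to mutable defaults in Python function signatures: anything with parentheses is evaluated once at definition time.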
🚀 Built Lightweight Async ORMs for FastAPI (Inspired by LoopBack)

While working on FastAPI projects, I got an idea based on my previous experience with LoopBack — what if we could have a simpler ORM with:
- minimal boilerplate
- built-in relation loading
- straightforward query syntax

So I built two async ORMs:
- oceanic-mysql-orm — built on aiomysql
- oceanic-postgres-orm — built on asyncpg

💡 Key Features
- Async-first (no session management)
- Automatic relation loading (no N+1 issues)
- Auto-migrate (additive only — never drops columns)
- Simple dict-based query system
- SQL echo mode for debugging

⚡ Example

```python
users = await connector.find(User, {
    "where": {"status": "active"},
    "include": ["posts"],
    "limit": 20
})
```

🔥 PostgreSQL Extras
- Soft deletes (deleted_at handled automatically)
- Raw SQL support when needed
- Nested includes (orders.items.product)
- Advanced filters (ilike, regexp, between)

⚠️ Scope
This is intentionally designed for simplicity:
- No complex JOIN builder
- No multi-database abstraction

SQLAlchemy is still a great choice for large, complex systems. This is aimed at the 80% use case where you want to build and ship quickly.

📦 Installation

```shell
pip install oceanic-mysql-orm
pip install oceanic-postgres-orm
```

🚧 Status
Both packages are v1 (early stage). They’re functional, but I’d really value feedback from developers working with FastAPI.

🔗 Full Guide
Complete usage guide here: https://lnkd.in/dj5eY4aN
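A dict-based query like the one above maps mechanically onto SQL. Here is a toy translator showing the shape of that mapping — a hypothetical sketch, not the library's internals (`build_select` and its behavior are assumptions):

```python
def build_select(table, query):
    """Translate a LoopBack-style filter dict into a parameterized SQL string."""
    sql = f"SELECT * FROM {table}"
    where = query.get("where")
    if where:
        # One `col = ?` placeholder per filter key; values go out separately.
        conds = " AND ".join(f"{col} = ?" for col in where)
        sql += f" WHERE {conds}"
    if "limit" in query:
        sql += f" LIMIT {query['limit']}"
    return sql, list(where.values()) if where else []

sql, params = build_select("users", {"where": {"status": "active"}, "limit": 20})
print(sql)     # SELECT * FROM users WHERE status = ? LIMIT 20
print(params)  # ['active']
```

Keeping values in a separate parameter list (rather than interpolating them into the string) is what lets such a layer stay injection-safe while still being "just a dict" to the caller.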
🚀 Understanding Database Migrations in FastAPI (with Alembic)

When I first started working with FastAPI, one thing that felt missing compared to frameworks like Django was built-in database migrations. That’s where Alembic comes in — and honestly, it’s a game changer once you get the hang of it.

Instead of manually running SQL queries or risking data loss while updating schemas, Alembic helps you version-control your database changes (think Git, but for your DB).

💡 Here’s a quick glimpse of how it works in a real setup:

```python
# models.py
from sqlalchemy import Column, Integer, String
from app.database import Base

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)
```

Once your models are ready, you can generate a migration like this:

```shell
alembic revision --autogenerate -m "create users table"
```

And apply it with:

```shell
alembic upgrade head
```

That’s it — your database is now in sync without manually touching SQL.

🔍 What I like most about Alembic:
- Keeps track of schema versions
- Supports safe upgrades & rollbacks
- Works seamlessly with SQLAlchemy
- Makes team collaboration much easier

⚡ One key learning: FastAPI gives you flexibility, but with that comes responsibility — you choose your tools. Alembic fills that gap beautifully for database versioning.

If you’re building production-grade apps with FastAPI and not using migrations yet, you’re definitely missing out.

Curious — what’s your go-to migration tool in your stack? 👇

#FastAPI #Alembic #Python #BackendDevelopment #SoftwareEngineering #WebDevelopment #API #SQLAlchemy #Database #DatabaseMigrations #TechLearning #Developers #Coding #100DaysOfCode #DevCommunity #LearnInPublic
🚀 Built a MySQL MCP Server with Natural Language Querying

I recently built a MySQL MCP (Model Context Protocol) server using Python that allows an AI to interact with databases using plain English.

💡 What this means: you can ask questions like:
👉 "Show last 10 orders"
👉 "Get top customers by revenue"
…and the system automatically converts it into SQL and fetches results.

🔧 Key Features:
• Natural Language → SQL
• Secure read-only query mode
• Schema exploration (tables, columns)
• Plug & play with Claude Desktop
• Configurable via .env
• Fully tested end-to-end

🧠 Architecture: AI Client → MCP Server → MySQL Database

⚙️ Tech Stack: Python | MCP | MySQL | mysql-connector | dotenv | Claude Desktop

🔥 Why this matters: this bridges the gap between AI and databases, making data accessible even for non-technical users.

📌 Next steps — planning to extend this with:
• Query optimization
• Role-based access
• Multi-database support

🔗 GitHub: https://lnkd.in/g7RgQrdd

#MCP #Python #MySQL #AI #LLM #OpenSource #BuildInPublic #DatabaseAI
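A toy illustration of the natural-language-to-SQL step: in the real server an LLM does the translation, but a pattern-based version shows the shape of it. The pattern table, function name, and the generated table/column names are all illustrative assumptions:

```python
import re

# Hypothetical pattern table: English phrase → SQL template.
PATTERNS = [
    (re.compile(r"show last (\d+) orders", re.I),
     "SELECT * FROM orders ORDER BY created_at DESC LIMIT {0};"),
    (re.compile(r"get top (\d+)?\s*customers by revenue", re.I),
     "SELECT customer_id, SUM(total) AS revenue FROM orders "
     "GROUP BY customer_id ORDER BY revenue DESC LIMIT {0};"),
]

def to_sql(question):
    """Translate a plain-English question into SQL via pattern matching."""
    for pattern, template in PATTERNS:
        m = pattern.search(question)
        if m:
            n = m.group(1) or "10"  # default row limit when none is given
            return template.format(n)
    return None  # unrecognised: in the real server, this is where the LLM steps in

print(to_sql("Show last 10 orders"))
# SELECT * FROM orders ORDER BY created_at DESC LIMIT 10;
```

The read-only query mode the post mentions is the crucial complement: whatever produces the SQL, the database user executing it should not be able to write.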
💾 𝗦𝗲𝗾𝘂𝗲𝗹𝗶𝘇𝗲 𝗔𝘀𝘀𝗼𝗰𝗶𝗮𝘁𝗶𝗼𝗻𝘀 𝗘𝘅𝗽𝗹𝗮𝗶𝗻𝗲𝗱

If you're working with relational data in Node with Sequelize, understanding associations is crucial. Let’s walk through the 4 core relationship types between entities.

❶ 𝗢𝗻𝗲-𝘁𝗼-𝗢𝗻𝗲 (𝟭:𝟭)
Each record in Table A is linked to exactly one record in Table B.
User ↔ Profile

```javascript
User.hasOne(Profile);
Profile.belongsTo(User);
```

❷ 𝗢𝗻𝗲-𝘁𝗼-𝗠𝗮𝗻𝘆
One record in Table A can have multiple records in Table B.
User → Posts

```javascript
User.hasMany(Post);
Post.belongsTo(User);
```

📌 Used when:
♦️ Parent-child relationships exist
♦️ A single entity owns multiple dependent entities

❸ 𝗠𝗮𝗻𝘆-𝘁𝗼-𝗢𝗻𝗲
This is just the inverse of One-to-Many. Many Posts → One User

```javascript
Post.belongsTo(User);
User.hasMany(Post);
```

💡 𝗧𝗵𝗲 𝗳𝗼𝗿𝗲𝗶𝗴𝗻 𝗸𝗲𝘆 𝗮𝗹𝘄𝗮𝘆𝘀 𝗹𝗶𝘃𝗲𝘀 𝗼𝗻 𝘁𝗵𝗲 "𝗺𝗮𝗻𝘆" 𝘀𝗶𝗱𝗲.

❹ 𝗠𝗮𝗻𝘆-𝘁𝗼-𝗠𝗮𝗻𝘆
Multiple records in Table A can relate to multiple records in Table B.
Students ↔ Courses

```javascript
Student.belongsToMany(Course, { through: 'StudentCourses' });
Course.belongsToMany(Student, { through: 'StudentCourses' });
```

📌 Use when:
♦️ Relationships are highly interconnected
♦️ A junction table is required

👉 We’ll dive deeper into 𝗧𝗿𝗮𝗻𝘀𝗮𝗰𝘁𝗶𝗼𝗻𝘀 with Sequelize in upcoming posts. Stay tuned!!

🔔 Follow Nitin Kumar for daily valuable insights on LLD, HLD, Distributed Systems and AI.
♻️ Repost to help others in your network.

#javascript #nodejs #sequelize #sql #mysql
Most data analysts on my team spent more time writing SQL than actually analysing data. So I built a fix — without touching our existing Superset setup.

It's called a Text-to-SQL Sidecar: a standalone FastAPI microservice that sits alongside Apache Superset and turns plain English into validated, safe SQL. You ask: "which products had the highest return rate last quarter?" It generates, validates, and executes the SQL — then hands the results back.

A few things I was deliberate about:
→ AST-level SQL validation (not string matching, which is trivially bypassable)
→ Per-database table allowlists, so the LLM can only touch what it's supposed to
→ Schema caching, so we're not hammering the DB on every request
→ LLM-agnostic design — swap the endpoint URL, change the model
→ Reasoning traces returned alongside the SQL, so analysts can actually trust the output

Superset never needs to know it exists. It just receives SQL.

I wrote up the full implementation — architecture, code walkthrough, and the design decisions that make it production-ready. Link in the comments 👇

#DataEngineering #AI #SQL #FastAPI #ApacheSuperset #LLM #Python
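Engine-level enforcement of the same idea — letting the database itself reject anything but reads — can be sketched with SQLite's authorizer hook. This is an illustrative stand-in, not the sidecar's implementation (the post validates at the SQL-AST level against other databases); table and function names here are assumptions:

```python
import sqlite3

# Actions the engine may NOT perform; everything else is permitted.
WRITE_ACTIONS = {
    sqlite3.SQLITE_INSERT, sqlite3.SQLITE_UPDATE, sqlite3.SQLITE_DELETE,
    sqlite3.SQLITE_DROP_TABLE, sqlite3.SQLITE_CREATE_TABLE,
}

def readonly_authorizer(action, arg1, arg2, db_name, trigger):
    """Called by SQLite for every operation in a compiled statement."""
    return sqlite3.SQLITE_DENY if action in WRITE_ACTIONS else sqlite3.SQLITE_OK

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, returns INTEGER)")
conn.execute("INSERT INTO products VALUES ('widget', 7)")
conn.set_authorizer(readonly_authorizer)  # from here on: reads only

print(conn.execute("SELECT name FROM products").fetchall())  # [('widget',)]
try:
    conn.execute("DELETE FROM products")
except sqlite3.DatabaseError:
    print("write blocked by the engine")  # DELETE is rejected at compile time
```

Because the authorizer runs on the compiled statement rather than the query text, obfuscated writes (comments, casing, string tricks) are caught — the same reason the post prefers AST-level validation over string matching.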
#PythonJourney | Day 147 — SQLAlchemy Models & API Endpoints Implementation

Today was all about connecting the database layer with the API. This is where FastAPI meets SQLAlchemy and everything starts working together.

Key accomplishments:

✅ Created comprehensive SQLAlchemy models:
• User model (authentication & API keys)
• URL model (main shortening logic)
• Click model (event tracking)
• ClickAggregate model (analytics summaries)
• AuditLog model (compliance & debugging)

✅ Fixed PostgreSQL-specific imports:
• JSONB type for flexible data storage
• Proper index configuration
• Relationship definitions with cascading deletes

✅ Implemented the API endpoints:
• POST /api/v1/urls (create a shortened URL)
• GET /{short_code} (redirect to the original)
• GET /api/v1/urls (list the user's URLs)
• GET /api/v1/urls/{url_id} (get URL details)
• GET /api/v1/urls/{url_id}/analytics (get analytics)
• DELETE /api/v1/urls/{url_id} (delete a URL)
• GET /health (health check)

✅ Integrated database operations:
• User authentication via API key
• Permission checks (users can only see their own URLs)
• Click tracking with geolocation & device detection
• Soft deletes for data integrity
• Audit logging for compliance

✅ Created a test-user creation script

What I learned:
→ SQLAlchemy relationships make database operations elegant
→ A proper indexing strategy is crucial for performance
→ Cascade deletes prevent orphaned data
→ API key authentication is simpler than JWT for this use case
→ JSONB allows storing flexible analytics data in PostgreSQL

The API is now fully functional with a real database. Next: write comprehensive tests and handle edge cases.

#Python #FastAPI #SQLAlchemy #PostgreSQL #Backend #API #DatabaseDesign #SoftwareDevelopment
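The soft-delete pattern mentioned above — stamping rows as deleted instead of removing them — in a minimal SQL sketch. Table and column names are illustrative, not the project's actual schema (SQLite stands in for PostgreSQL):

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE urls (short_code TEXT, target TEXT, deleted_at TEXT)")
conn.execute("INSERT INTO urls VALUES ('abc123', 'https://example.com', NULL)")
conn.execute("INSERT INTO urls VALUES ('xyz789', 'https://example.org', NULL)")

# Soft delete: stamp the row instead of removing it, so audit history survives.
conn.execute(
    "UPDATE urls SET deleted_at = ? WHERE short_code = ?",
    (datetime.now(timezone.utc).isoformat(), "xyz789"),
)

# Every read path must then filter out soft-deleted rows.
live = conn.execute(
    "SELECT short_code FROM urls WHERE deleted_at IS NULL"
).fetchall()
print(live)  # [('abc123',)]
```

The trade-off: referential integrity and audit trails stay intact, but a partial index on `deleted_at IS NULL` (which PostgreSQL supports) is worth adding so the filter doesn't slow down every query.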
🚀 I just shipped something I'm genuinely proud of — Text2SQL Studio.

The idea is simple: what if anyone — analyst, manager, founder — could query a database just by asking a question in plain English? No SQL expertise. No bottlenecks. Just answers.

🔍 Here's what makes it special:
✅ ReAct Agent Loop — the system *reasons* about your schema before writing a single line of SQL
✅ Self-Healing Queries — if a query fails, it doesn't just crash. It automatically passes the error to GPT-4, which diagnoses the issue and returns a corrected query. Zero manual intervention.
✅ Visual Schema Explorer — an interactive graph to explore tables, columns & relationships in real time
✅ Upload .sqlite or .csv files — the system auto-converts them into relational schemas
✅ Streaming Responses — watch the agent *think* step by step, live
✅ Enterprise-grade security — read-only execution, Firebase Auth, MongoDB + GridFS persistence

The self-healing part was honestly the hardest to build — and the most satisfying when it worked. When a query breaks, the error context gets fed back to GPT-4, which figures out *why* it broke and fixes it on the fly. Like having a senior engineer reviewing every query in real time.

🛠️ Stack: React 18 · FastAPI · Tailwind CSS · Firebase · MongoDB · Docker · Prometheus
🌐 Live here → https://lnkd.in/e3VkQr7B
📹 Demo video attached 👇

If you've ever wished your data could just *talk back to you* — this is it. Would love your feedback! Drop a comment or DM me 🙌

#AI #LLM #Text2SQL #NaturalLanguageProcessing #FullStackDevelopment #OpenAI #FastAPI #React #MachineLearning #SideProject #BuildInPublic #DataEngineering #Python #WebDevelopment
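The self-healing loop described above has a simple shape: execute, catch, feed the error back, retry. Here is a sketch with a stubbed "fixer" standing in for the GPT-4 call; the function names, the retry limit, and the stub's behavior are assumptions, not the project's code:

```python
import sqlite3

def run_with_self_healing(conn, sql, fix_query, max_attempts=3):
    """Execute SQL; on failure, hand the error context to a fixer and retry."""
    for attempt in range(max_attempts):
        try:
            return conn.execute(sql).fetchall()
        except sqlite3.Error as exc:
            # In the real system, this is an LLM call given the failing SQL
            # plus the database's error message.
            sql = fix_query(sql, str(exc))
    raise RuntimeError("query could not be healed")

# Stub fixer: repairs one known typo'd table name (a real fixer would be GPT-4).
def stub_fixer(sql, error):
    return sql.replace("userz", "users")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('ada')")

print(run_with_self_healing(conn, "SELECT name FROM userz", stub_fixer))
# [('ada',)]
```

Capping the attempts matters: an LLM fixer can loop forever on a genuinely unanswerable query, so the post's "zero manual intervention" still needs a bounded retry budget and a final error path.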