Day 3: SQLite, SQLAlchemy & Alembic

CRUD needs some memory, and that's what Day 3 is about. We swap out the in-memory Python list for a real SQLite database — adding SQLAlchemy as the ORM and Alembic to manage schema changes over time. If you've used Eloquent + Laravel migrations, the mental model is nearly identical.

Here's what we cover:
→ Setting up SQLAlchemy's three core pieces: engine, session factory, and declarative base
→ Defining ORM models with a one-to-many relationship (tasks → notes) and cascade deletes
→ Wiring Alembic so autogenerate actually works (the __init__.py trick that trips everyone up)
→ The session lifecycle: why forgetting db.commit() is the single most common mistake
→ Running a real schema migration — adding a column to a live database without touching the file directly

The part worth highlighting: the service/route separation from Day 2 paid off immediately. The routes layer barely changed, and the database swap was almost entirely contained to the service layer. That's the point of keeping them separate.

By the end, you stop the server, restart it, and your data is still there.

👉🏼 Read the full article - link is in the comments.

#Python #Starlette #SQLAlchemy #WebDevelopment #BackendDevelopment
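For a feel of the pieces the post describes, here is a minimal sketch, assuming SQLAlchemy 2.0 style; the tasks.db file name and the Task/Note fields are assumptions, not the article's actual code, and in the article Alembic (not create_all()) owns the schema.

```python
# A sketch only: SQLAlchemy 2.0 style; file name and field names are assumptions.
from sqlalchemy import ForeignKey, create_engine
from sqlalchemy.orm import (
    DeclarativeBase,
    Mapped,
    mapped_column,
    relationship,
    sessionmaker,
)

engine = create_engine("sqlite:///tasks.db")   # the engine
SessionLocal = sessionmaker(bind=engine)       # the session factory


class Base(DeclarativeBase):                   # the declarative base
    pass


class Task(Base):
    __tablename__ = "tasks"
    id: Mapped[int] = mapped_column(primary_key=True)
    title: Mapped[str]
    # Deleting a task also deletes its notes (the cascade delete from the post).
    notes: Mapped[list["Note"]] = relationship(
        back_populates="task", cascade="all, delete-orphan"
    )


class Note(Base):
    __tablename__ = "notes"
    id: Mapped[int] = mapped_column(primary_key=True)
    body: Mapped[str]
    task_id: Mapped[int] = mapped_column(ForeignKey("tasks.id"))
    task: Mapped["Task"] = relationship(back_populates="notes")


# In the article Alembic manages the schema; create_all() just makes this sketch standalone.
Base.metadata.create_all(engine)

# The session-lifecycle trap: without db.commit(), the INSERT never reaches the database.
with SessionLocal() as db:
    task = Task(title="swap the list for SQLite", notes=[Note(body="wire up Alembic")])
    db.add(task)
    db.commit()
```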
SQLite, SQLAlchemy & Alembic: Database Setup for Python
More Relevant Posts
What if you never had to write raw SQL again?

That's Django's ORM (Object-Relational Mapper) in a nutshell. An ORM lets you interact with your database using Python instead of SQL. You define your data as Python classes (called models), and Django handles the database queries behind the scenes.

Here's a simple example. Instead of writing:

SELECT * FROM blog_post WHERE author_id = 1;

You write:

Post.objects.filter(author_id=1)

Same result. Pure Python. No SQL dialect to worry about.

What makes the Django ORM powerful:
→ Works with PostgreSQL, MySQL, SQLite, Oracle. Same code, different DB
→ Migrations: change your model, run a command, and the database updates automatically
→ Querysets that are chainable, lazy, and incredibly readable
→ Relationships like ForeignKey, ManyToMany, OneToOne, all handled elegantly

I've worked with Oracle 19c and PostgreSQL in production. The ORM made switching between them painless; it required just a config change.

The caveat? For very complex queries, raw SQL is still available. The ORM doesn't replace SQL knowledge. It reduces how often you need it.

If you're building data-heavy apps, learning the Django ORM deeply is one of the best investments you can make.

Tomorrow: Django REST Framework, turning Django into an API powerhouse.

#Django #ORM #Python #BackendDevelopment #Database
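To make the blog_post example concrete, here is a hedged sketch assuming a standard Django project; the Post model and its fields are illustrative rather than taken from any particular codebase.

```python
# A sketch only: a standard Django project is assumed, and the Post model's fields
# are illustrative rather than taken from a real codebase.
from django.db import models


class Post(models.Model):
    title = models.CharField(max_length=200)
    author = models.ForeignKey("auth.User", on_delete=models.CASCADE)


# Equivalent to: SELECT * FROM blog_post WHERE author_id = 1;
posts = Post.objects.filter(author_id=1)

# Querysets are lazy and chainable: no SQL runs until the results are iterated.
recent_titles = (
    Post.objects.filter(author_id=1)
    .order_by("-id")
    .values_list("title", flat=True)[:10]
)
```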
SQLAlchemy 2.0 made querying simpler — but also more explicit.

When working with async SQLAlchemy in FastAPI, one important thing to understand is:
👉 Query building is synchronous
👉 Execution is asynchronous

Most confusion comes from mixing these two.

In this post, I've focused on the core query-building patterns you'll use daily:
✔️ select() — building the base query
✔️ where() — filtering data
✔️ join() — working with relationships

And then executing them using:
👉 await session.execute()

No unnecessary theory — just practical patterns that map closely to SQL.

📌 This is Part 1 of a series:
Part 2 → Execution layer
Part 3 → Insert, update, delete + transactions

If you're using FastAPI with PostgreSQL, this will make your ORM usage much clearer.

💬 Do you prefer an explicit query style like this, or more ORM abstraction like in the previous version?

#sqlalchemy #fastapi #postgresql #python #backenddevelopment
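A minimal sketch of that build-vs-execute split, assuming SQLAlchemy 2.0 async; the User/Order models and the query are assumptions for illustration, not the post's actual code.

```python
# A sketch only: the User/Order models and the query are assumptions.
from sqlalchemy import ForeignKey, select
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship


class Base(DeclarativeBase):
    pass


class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(primary_key=True)
    orders: Mapped[list["Order"]] = relationship(back_populates="user")


class Order(Base):
    __tablename__ = "orders"
    id: Mapped[int] = mapped_column(primary_key=True)
    user_id: Mapped[int] = mapped_column(ForeignKey("users.id"))
    user: Mapped["User"] = relationship(back_populates="orders")


async def orders_for_user(session: AsyncSession, user_id: int) -> list[Order]:
    # Building the statement is synchronous: select()/join()/where() only compose an expression.
    stmt = select(Order).join(Order.user).where(User.id == user_id)
    # Execution is the async step, so only this call is awaited.
    result = await session.execute(stmt)
    return list(result.scalars().all())
```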
Day 66 of #90DaysOfCode

Today I built a Flask web application to manage a personal book library using SQLite and SQLAlchemy. This is my first project where I integrated a database instead of relying on file-based storage, which made the application more structured and scalable. The app allows users to add books with title, author, and rating, and stores them in a database that is dynamically rendered on the homepage.

Key features implemented
• SQLite database integration
• SQLAlchemy ORM for data modeling
• Dynamic data retrieval and rendering
• Form handling with validation
• Backend data persistence

Key concepts learned
• How databases integrate with backend applications
• Using ORM instead of manual file handling
• Structuring models using SQLAlchemy
• Managing and querying persistent data

This project gave me a much clearer understanding of how real-world applications manage data.

GitHub Repository: https://lnkd.in/gTAqTzFW

#Python #Flask #SQLAlchemy #BackendDevelopment #WebDevelopment #90DaysOfCode
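Not the repository's code, but a hedged sketch of the usual shape of such an app, assuming Flask-SQLAlchemy; only the Book fields (title, author, rating) mirror the post, while the routes, database file, and index.html template are assumptions.

```python
# A sketch only: Flask-SQLAlchemy is assumed; the routes, database file, and
# index.html template are illustrative; only the Book fields mirror the post.
from flask import Flask, redirect, render_template, request, url_for
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///library.db"
db = SQLAlchemy(app)


class Book(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(250), unique=True, nullable=False)
    author = db.Column(db.String(250), nullable=False)
    rating = db.Column(db.Float, nullable=False)


with app.app_context():
    db.create_all()


@app.route("/")
def home():
    # Every book row is pulled from SQLite and rendered dynamically on the homepage.
    books = Book.query.all()
    return render_template("index.html", books=books)


@app.route("/add", methods=["POST"])
def add():
    book = Book(
        title=request.form["title"],
        author=request.form["author"],
        rating=float(request.form["rating"]),
    )
    db.session.add(book)
    db.session.commit()
    return redirect(url_for("home"))
```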
📅 Day 14/30 — BookStore-Management-API (FastAPI + MySQL)

🔹 Project Overview:
Built a complete BookStore-Management-API using Python (FastAPI) and MySQL. The system enables users and sellers to interact through a role-based structure, allowing book management, purchasing, and real-time business insights via an admin dashboard.

🔹 Tools Used: Python (FastAPI) | MySQL | SQLAlchemy

🔹 Key Features:
• Role-based user system (User & Seller) 👥
• Book management system (add, update stock) 📚
• Purchase system with stock validation 🛒
• Admin dashboard with business metrics 📊
• Revenue tracking (total & daily) 💰
• Clean API design with form-based inputs ⚡

🔹 What I Learned:
• Designing REST APIs using FastAPI
• Connecting Python with MySQL using SQLAlchemy
• Implementing role-based logic in backend systems
• Handling real-world scenarios like stock & transactions
• Building admin analytics for business insights

🔗 GitHub Repository: https://lnkd.in/d4XAJZ2W

#DataAnalytics #FastAPI #PythonProjects #SQL #BackendDevelopment #30DaysOfCoding 🚀
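The repository has the real implementation; below is only a hedged sketch of what a stock-validated, form-based purchase endpoint can look like, with every name invented for illustration and SQLite standing in for MySQL so the example is self-contained.

```python
# A sketch only: every name here is invented for illustration, and SQLite stands in
# for MySQL so the example is self-contained; the real code lives in the repository.
from fastapi import Depends, FastAPI, Form, HTTPException
from sqlalchemy import create_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column, sessionmaker

engine = create_engine("sqlite:///bookstore.db")
SessionLocal = sessionmaker(bind=engine)
app = FastAPI()


class Base(DeclarativeBase):
    pass


class Book(Base):
    __tablename__ = "books"
    id: Mapped[int] = mapped_column(primary_key=True)
    title: Mapped[str]
    price: Mapped[float]
    stock: Mapped[int]


Base.metadata.create_all(engine)


def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()


@app.post("/purchase")
def purchase(
    book_id: int = Form(...),
    quantity: int = Form(...),
    db: Session = Depends(get_db),
):
    book = db.get(Book, book_id)
    if book is None:
        raise HTTPException(status_code=404, detail="Book not found")
    if book.stock < quantity:
        raise HTTPException(status_code=400, detail="Insufficient stock")
    # Validate stock, decrement it, and commit in one transaction.
    book.stock -= quantity
    db.commit()
    return {"book": book.title, "quantity": quantity, "total": quantity * book.price}
```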
PydanTable 1.17.0 has been released, and MongoDB is now officially part of the story.

This release introduces an optional MongoDB execution engine, allowing work to remain on the MongoDB database side when supported. This means you can materialize data only when it is actually needed in the application, rather than pulling full result sets into Python first.

Additionally, this version adds integration with Beanie, a popular Python ODM (object-document mapper) for MongoDB built on Pydantic. If your application already models MongoDB documents with Beanie, PydanTable can seamlessly integrate with that layer, ensuring your document models and typed, table-shaped workflow remain aligned without the need for a parallel schema.

For more details, check out the documentation and release notes:
- PyPI: https://lnkd.in/ez4NZMjT
- Documentation: https://lnkd.in/eV4RTqZQ
- Repository: https://lnkd.in/eVpjrcRX

#Python #Pydantic #MongoDB #DataEngineering #OpenSource
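PydanTable's own integration API isn't reproduced here (see the documentation linked above for that); as context, this is a hedged sketch of the Beanie layer it plugs into, with an illustrative Product document and a local MongoDB connection assumed.

```python
# A sketch only: the Product document, collection, and connection string are assumptions.
import asyncio

from beanie import Document, init_beanie
from motor.motor_asyncio import AsyncIOMotorClient


class Product(Document):
    name: str
    price: float

    class Settings:
        name = "products"  # MongoDB collection name


async def main() -> None:
    client = AsyncIOMotorClient("mongodb://localhost:27017")
    await init_beanie(database=client.shop, document_models=[Product])
    # The filter runs on the MongoDB side; results materialize only at to_list().
    cheap = await Product.find(Product.price < 10).to_list()
    print(cheap)


asyncio.run(main())
```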
One Python expression, 22+ SQL dialects, zero rewrites 🐍

Running queries across multiple databases often means rewriting the same logic for each backend's SQL dialect. A query that works in DuckDB may require syntax changes for PostgreSQL, and another rewrite for BigQuery.

Ibis removes that friction by compiling Python expressions into each backend's native SQL. Swap the connection, and the same code runs across 22+ databases.

Key features:
• Write once, run on DuckDB, PostgreSQL, BigQuery, Snowflake, and 18+ more
• Lazy execution that builds and optimizes the query plan before sending it to the database
• Intuitive chaining syntax similar to Polars

🚀 Article comparing Ibis with other libraries: https://bit.ly/3MnsHs7

#Python #DataScience #SQL
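A hedged sketch of the write-once idea; the DuckDB backend choice and the orders table with its columns are assumptions for illustration, not taken from the linked article.

```python
# A sketch only: the DuckDB backend and the orders table/columns are assumptions.
import ibis

con = ibis.duckdb.connect()   # swap for ibis.postgres.connect(...), etc.
orders = con.table("orders")

# Lazy expression: nothing executes until a result is requested.
expr = (
    orders.filter(orders.status == "paid")
    .group_by("customer_id")
    .aggregate(total=orders.amount.sum())
)

print(ibis.to_sql(expr))      # inspect the backend-specific SQL
df = expr.to_pandas()         # execute on the connected backend
```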
I just published a piece on something I keep seeing in Python APIs: using SQLAlchemy by default — even when it's not needed.

After working more directly with PostgreSQL, I started questioning this habit. Because the database is not just storage — it's a core part of performance and system behavior.

In many APIs, especially simple or performance-critical ones, I've found that:
- ORM adds unnecessary abstraction
- raw SQL gives better control over query shape
- PostgreSQL features are easier to leverage directly

And in some cases, it actually improves performance due to lower overhead.

So I wrote about:
->> when ORM makes sense
->> when it becomes overengineering
->> and why I prefer asyncpg + raw SQL in many cases

Do you stick with ORM everywhere, or go raw SQL when performance matters?

https://lnkd.in/dzZ7xvCS

#python #postgresql #fastapi #backend #softwareengineering
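A hedged sketch of the asyncpg-plus-raw-SQL approach argued for above; the DSN, table, and columns are assumptions, not the article's code.

```python
# A sketch only: the DSN, table, and columns are assumptions, not the article's code.
import asyncio

import asyncpg


async def get_user(user_id: int):
    conn = await asyncpg.connect("postgresql://app:secret@localhost/appdb")
    try:
        # Parameterized raw SQL: full control over the query shape, no ORM in between.
        row = await conn.fetchrow(
            "SELECT id, email, created_at FROM users WHERE id = $1", user_id
        )
        return dict(row) if row else None
    finally:
        await conn.close()


print(asyncio.run(get_user(1)))
```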
I had 18,115 AWS API operation names in PascalCase that needed to become kebab-case. DescribeInstances to describe-instances. PutBucketAcl to put-bucket-acl. AWS's acronym casing is inconsistent across services, and I was not writing a custom Python converter for 18,000 edge cases.

DuckDB has a community extension for this:

INSTALL inflector FROM community;
LOAD inflector;
SELECT inflector_to_kebab_case('DescribeInstances'); -- describe-instances

All 18,115 operations in one SQL pass. It also does snake_case, camelCase, train-case, pluralization, and bulk column renaming on structs. I used it to keep the raw PascalCase botocore contract in parquet and transform at query time — no slow Python string manipulation.

https://lnkd.in/e8a_Aitd

#duckdb #dataengineering #platformengineering #aws
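For the query-time transform over parquet, a hedged sketch using DuckDB's Python API; the parquet file and column names are assumptions, not the author's actual dataset.

```python
# A sketch only: the parquet file and column names are assumptions.
import duckdb

con = duckdb.connect()
con.execute("INSTALL inflector FROM community")
con.execute("LOAD inflector")

# Keep the raw PascalCase contract in parquet and transform at query time.
rows = con.execute(
    """
    SELECT name, inflector_to_kebab_case(name) AS cli_name
    FROM read_parquet('botocore_operations.parquet')
    """
).fetchall()
print(rows[:5])
```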
🚀 Excited to announce the launch of **pandasv2** – a production-ready Python library solving critical pain points when using pandas DataFrames in web applications!

## The Problem

Working with pandas in web frameworks creates three headaches:
- ❌ JSON serialization fails with NumPy types (int64, float64)
- ❌ Silent data loss when converting to JSON (dates become timestamps, precision lost)
- ❌ Manual conversion code scattered across your codebase

## The Solution

**pandasv2** provides:
✅ **One-line JSON serialization** – Automatic handling of NumPy/pandas types (int64, float64, NaT, NaN)
✅ **Zero-config framework integration** – FastAPI, Flask, Django support out of the box
✅ **Type-safe round-trip conversion** – Serialize and deserialize with 100% fidelity
✅ **3-5x performance boost** – Faster than manual conversion methods
✅ **Production-ready** – Full test coverage, comprehensive documentation, MIT licensed

## Real-World Example

```python
from fastapi import FastAPI
import pandas as pd
import pandasv2

app = FastAPI()

@app.get("/data")
def get_data():
    df = pd.read_csv("data.csv")
    return pandasv2.FastAPIResponse(df)  # ✅ Just works!
```

**Now available on PyPI:** pip install pandasv2

🔗 GitHub: https://lnkd.in/dfDMF8Gx
📚 Docs: https://lnkd.in/dqsPkAVa

If you've struggled with pandas + JSON serialization, give pandasv2 a try and let me know what you think!

#Python #DataScience #WebDevelopment #OpenSource #FastAPI #Pandas #JSON
I tried Mempalace, the memory system launched by Milla Jovovich. Yes, you read that right: Milla of The Fifth Element, Milla the Resident Evil hero.

The model mimics the way the human mind stores information: it generalizes, characterizes, and associates concepts, building taxonomies on the fly and representing them as graphs. Not a new idea at all, but Milla's project brings it to the real world with high efficiency, accuracy, and usability. The system is reportedly beating records in standard tests and scoring high in GitHub downloads. Better news: it is open source and runs locally.

Setting up is really easy. First you pip-install it (it's Python, as you can see), then you run commands that let it read your projects. This ingests all of your project files, builds a taxonomy of concepts from them, and stores it in the local database. And that's it; the second step is more interesting. You then run codex and point it at the local MCP server that Mempalace exposes (yes, Mempalace runs a local MCP server):

codex mcp add mempalace -- python -m mempalace.mcp_server

First I checked whether codex was actually connected to the MCP: "Are you connected to some mcp and if so what tools are exposed?" Codex showed me the Mempalace MCP and all its tools.

Then I asked codex about a concept I know is present in my project and that it should know about. I have a multi-project workspace fully controlled by codex, with AGENTS.md/README.md files at each level plus more AI-targeted documentation. The response was successful, but in the commands codex ran I didn't see any calls to the MCP server tools; it used the context it already had from the AI-targeted files to build its response.

Then I explicitly asked codex to search using the MCP tools. It did, and the response was also good. After this I instructed codex to remember its response: in the commands it ran I could see the concepts being stored in Mempalace, so I was sure it was using Mempalace to store information.

Then I asked: how can I make you always use Mempalace as the main source when building your context? Codex basically responded that I could add it to the root AGENTS.md file, and this is the content it added:

## Memory Source Priority
- For project-context questions, query MemPalace MCP before searching the codebase or the web.
- Treat MemPalace as the first source of stored project memory, including prior summaries, decisions, and indexed notes.
- If MemPalace returns relevant results, use that context to guide subsequent file inspection and implementation.
- If MemPalace does not contain enough useful context, fall back to local file inspection, then web search when needed.

After that, every question I asked triggered a query to Mempalace before the context was built.

So that's my experience so far with Mempalace from the beautiful Milla Jovovich.

Mempalace repo: https://lnkd.in/deBQg82K
https://nocointeractive.com/articles/starlette-day-3-database-sqlalchemy-alembic