🚀 CRUD Operations with FastAPI & PostgreSQL
Built a simple and powerful backend using FastAPI + PostgreSQL to perform the four core operations:
✔️ Create ✔️ Read ✔️ Update ✔️ Delete
⚡ FastAPI for speed & validation
📊 PostgreSQL for reliable data storage
💡 CRUD is the backbone of every backend system; master it to build real-world applications.
#FastAPI #Python #PostgreSQL #CRUD #BackendDevelopment
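Each of the four operations maps onto one SQL statement. A minimal sketch of the same CRUD pattern, using stdlib sqlite3 in place of FastAPI + PostgreSQL so it runs anywhere (the `items` table and column names are illustrative, not the post author's schema):

```python
import sqlite3

def make_db() -> sqlite3.Connection:
    # In-memory database standing in for PostgreSQL.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
    return conn

def create_item(conn: sqlite3.Connection, name: str) -> int:
    cur = conn.execute("INSERT INTO items (name) VALUES (?)", (name,))
    conn.commit()
    return cur.lastrowid

def read_item(conn: sqlite3.Connection, item_id: int):
    # Returns (id, name), or None if the row doesn't exist.
    return conn.execute(
        "SELECT id, name FROM items WHERE id = ?", (item_id,)
    ).fetchone()

def update_item(conn: sqlite3.Connection, item_id: int, name: str) -> None:
    conn.execute("UPDATE items SET name = ? WHERE id = ?", (name, item_id))
    conn.commit()

def delete_item(conn: sqlite3.Connection, item_id: int) -> None:
    conn.execute("DELETE FROM items WHERE id = ?", (item_id,))
    conn.commit()
```

In the FastAPI version, each function would become a path operation (`@app.post("/items")`, `@app.get("/items/{id}")`, …) with a Pydantic model validating the request body.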
More Relevant Posts
-
Ever wonder exactly how much time an index saves you under the hood? ⏱️ I spent some time this week benchmarking PostgreSQL. I inserted 1,000,000 rows with Python and measured the raw latency of a full table scan versus a B-tree index scan on the same query. The result? The indexed query wasn't just a little faster; it was orders of magnitude faster, because the B-tree lets Postgres jump straight to the matching pages instead of reading every row. 🚀 It was a great reminder that before reaching for heavyweight system architecture, we should make sure our basic data structures and access paths are optimized. Check out the article link in the comment section 👇. #PostgreSQL #Python #Backend #Tech
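The same experiment can be reproduced at a smaller scale with stdlib sqlite3 (the row count and schema here are illustrative; the absolute numbers from the Postgres benchmark in the post will differ):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    ((i, f"user{i}@example.com") for i in range(100_000)),
)

def lookup():
    return conn.execute(
        "SELECT id FROM users WHERE email = ?", ("user99999@example.com",)
    ).fetchone()

t0 = time.perf_counter()
full_scan = lookup()            # no index yet: full table scan
t_full = time.perf_counter() - t0

conn.execute("CREATE INDEX idx_users_email ON users (email)")

t0 = time.perf_counter()
indexed = lookup()              # same query, now a B-tree lookup
t_indexed = time.perf_counter() - t0

print(f"full scan: {t_full * 1000:.2f} ms, indexed: {t_indexed * 1000:.4f} ms")
```

Both queries return the identical row; only the access path changes, which is exactly what the benchmark isolates.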
-
PydanTable 1.17.0 has been released, and MongoDB is now officially part of the story. This release introduces an optional MongoDB execution engine, allowing work to remain on the MongoDB database side when supported. This means you can materialize data only when it is actually needed in the application, rather than pulling full result sets into Python first. Additionally, this version adds integration with Beanie, a popular Python ODM (object-document mapper) for MongoDB built on Pydantic. If your application already models MongoDB documents with Beanie, PydanTable can seamlessly integrate with that layer, ensuring your document models and typed, table-shaped workflow remain aligned without the need for a parallel schema. For more details, check out the documentation and release notes: - PyPI: https://lnkd.in/ez4NZMjT - Documentation: https://lnkd.in/eV4RTqZQ - Repository: https://lnkd.in/eVpjrcRX #Python #Pydantic #MongoDB #DataEngineering #OpenSource
-
1.2s → 85ms. The LATERAL join 90% of devs have never written. Four nested Python loops. One slow endpoint. I replaced all of it with one Postgres LATERAL join. Most backend devs don't know it exists. Full breakdown (7 min read) → https://lnkd.in/gyPiyjsQ #PostgreSQL #Database #SQL #BackendEngineering #DataEngineering
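For anyone who hasn't written one: a LATERAL join lets a subquery in FROM reference columns of the table before it, which turns the classic "top-N rows per parent" loop into a single query. A hypothetical shape (the users/orders schema is invented for illustration, not taken from the linked breakdown), next to the nested-loop logic it replaces:

```python
# The Postgres query (hypothetical schema):
LATERAL_SQL = """
SELECT u.id, o.id AS order_id, o.total
FROM users u
CROSS JOIN LATERAL (
    SELECT id, total
    FROM orders
    WHERE orders.user_id = u.id   -- refers to the outer row: this is what LATERAL allows
    ORDER BY total DESC
    LIMIT 3
) o;
"""

def top_n_per_user(user_ids, orders, n=3):
    """The nested-loop logic the LATERAL join replaces, for comparison."""
    result = {}
    for user_id in user_ids:
        mine = [o for o in orders if o["user_id"] == user_id]
        mine.sort(key=lambda o: o["total"], reverse=True)
        result[user_id] = mine[:n]
    return result
```

The database runs the LATERAL version as one plan (ideally an index scan on `orders (user_id, total DESC)` per user) instead of one round trip per user. Note that `CROSS JOIN LATERAL` drops users with no orders; use `LEFT JOIN LATERAL ... ON true` to keep them, the way the Python version keeps them as empty lists.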
-
OK, I'm not trying to overstate the importance of this one, but MotherDuck just dropped a major update: you can now use the Postgres wire protocol to query MotherDuck databases. Their docs say most popular pg clients work, including psql and psycopg2. I tested it with pg8000 (a pure-Python Postgres driver) and it was dead easy. Below is a screenshot of it in action. Side note: I prefer pg8000 over psycopg2 because it doesn't require compiled binary dependencies; it's a pure-Python approach to querying Postgres. #duckdb #motherduck https://lnkd.in/ewqZnky7
-
🚀 Built a Lightweight File-Based Database in Python
Over the past few days, I worked on a project called mini-DB: a simple yet powerful key-value database built from scratch in Python.

🔹 What it does:
- Stores data in JSON files 📁
- Supports the basic operations: create, get, set, update
- Uses file-level locking to handle concurrent access 🔒
- Provides a clean CLI built with argparse ⚡

🔹 Why I built it:
I wanted to understand how databases actually work under the hood, especially data persistence, concurrency control, and storage abstraction. Instead of just reaching for tools like Redis or MongoDB, I tried building a minimal version myself.

🔹 Key learnings:
- Designing a layered architecture (Storage → Connector → Interface)
- Handling race conditions with file locks
- Building CLI tools that feel like real-world systems

Example usage:
python tool.py master set user Jashan
python tool.py master get user
python tool.py master update user Singh

🔗 GitHub: https://lnkd.in/gXduQnez
This project may look simple, but it gave me a deeper appreciation of how real databases manage data safely and efficiently. 💡 Next step: applying these concepts to more complex systems and scaling the ideas further. Would love feedback or suggestions! #Python #BackendDevelopment #SystemDesign #Databases #CLI #LearningByBuilding
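The file-locking piece is the interesting part of a design like this. A minimal sketch of the same idea, assuming a POSIX system (`fcntl.flock`) and a single JSON file as the store; the `MiniKV` class and its methods are invented for illustration, not the actual mini-DB API:

```python
import fcntl
import json
import os

class MiniKV:
    """Tiny JSON-file key-value store with file-level locking."""

    def __init__(self, path: str):
        self.path = path
        if not os.path.exists(path):
            with open(path, "w") as f:
                json.dump({}, f)

    def _locked(self, mutate):
        # Hold an exclusive lock across the whole read-modify-write cycle
        # so concurrent processes cannot interleave their updates.
        with open(self.path, "r+") as f:
            fcntl.flock(f, fcntl.LOCK_EX)
            data = json.load(f)
            out = mutate(data)
            f.seek(0)
            f.truncate()
            json.dump(data, f)
            return out  # lock releases when the file is closed

    def set(self, key, value):
        self._locked(lambda d: d.__setitem__(key, value))

    def get(self, key):
        return self._locked(lambda d: d.get(key))
```

Locking around the whole cycle (not just the write) is what prevents the lost-update race between two processes reading the same snapshot.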
-
🚨 Faced an interesting SQL issue recently while working with AWS Batch and Python. Queries started taking longer, and parallel jobs were getting blocked. It turned out to be a small setting: autocommit=False in pymssql, which left transactions open and held table locks. Wrote a quick blog on how this caused the blocking and how we fixed it 👇 https://lnkd.in/d2YTE7aP Would love to hear if anyone has faced something similar! #SQLServer #Python #LearningInPublic
-
🧪 You can just automate things! `scripts/mark_pr_files_viewed.sh` marks all unviewed files in a GitHub pull request as 'Viewed' via the GraphQL API, with support for batching and multiple authentication methods. The latest revision adds a dependency check for Python 3, removes a redundant and inefficient GraphQL call from the file-fetching loop, and speeds up string escaping by using Bash's built-in parameter expansion instead of calling sed in a loop. 🌟 Grab the script for free here: https://lnkd.in/gXFj3JsG #bash #GraphQL #api #script
-
Your API Is Slow? It's Probably Not Your Code
Most developers jump to optimizing code first. In reality…
🔴 Problem: API latency caused by the DB + external APIs
🟢 Fix:
- Added a caching layer (Redis)
- Optimized DB indexes
- Made calls to external services async
💡 Result: response time dropped from 3.2s → 400ms
💡 Lesson: measure before you optimize.
#DRF #API #Performance #Python
-
I just published a piece on something I keep seeing in Python APIs: using SQLAlchemy by default, even when it's not needed. After working more directly with PostgreSQL, I started questioning this habit, because the database is not just storage; it's a core part of performance and system behavior. In many APIs, especially simple or performance-critical ones, I've found that:
- ORM adds unnecessary abstraction
- raw SQL gives better control over query shape
- PostgreSQL features are easier to leverage directly
…and in some cases it actually improves performance due to lower overhead.
So I wrote about:
- when an ORM makes sense
- when it becomes overengineering
- why I prefer asyncpg + raw SQL in many cases
Do you stick with an ORM everywhere, or go raw SQL when performance matters? https://lnkd.in/dzZ7xvCS #python #postgresql #fastapi #backend #softwareengineering
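"Control over query shape" in concrete form: with raw SQL you decide exactly which columns come back and how parameters bind, where an ORM defaults to hydrating full entities. With asyncpg this is roughly `await conn.fetch("SELECT id, email FROM users WHERE id >= $1", min_id)`; the same pattern with stdlib sqlite3 so it runs here (the schema is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT, bio TEXT, avatar BLOB)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?, ?, ?)",
    [(i, f"u{i}@example.com", "long bio " * 100, b"\x00" * 1024) for i in range(3)],
)

def active_emails(conn: sqlite3.Connection, min_id: int) -> list[dict]:
    # Raw SQL: fetch only the two columns this endpoint needs, instead of
    # pulling bio + avatar into memory for every row the way a default
    # full-entity ORM query would.
    rows = conn.execute(
        "SELECT id, email FROM users WHERE id >= ? ORDER BY id", (min_id,)
    ).fetchall()
    return [{"id": r[0], "email": r[1]} for r in rows]
```

Parameter binding (the `?` / `$1` placeholders) keeps the raw-SQL approach safe from injection; the trade-off versus an ORM is purely about abstraction, not safety.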
-
The secret? It's not the algorithm — it's the data model. Get the schema right and the code writes itself. Stack: FastAPI + PostgreSQL + Redis + custom computation engine. #DataEngineering #Python #Analytics