Async I/O vs Threading: The Real FastAPI Performance Secret

Many developers use FastAPI because it's "fast", but few understand why it's fast. The real reason? Asynchronous I/O (async/await). Let's break it down 👇

🧩 1️⃣ Threading: The Traditional Approach
Threading runs multiple threads concurrently. Each thread can process one task, but Python's GIL (Global Interpreter Lock) means only one thread executes Python bytecode at a time (the free-threaded builds introduced experimentally in Python 3.13 lift this). Threads do help with I/O-bound work, since the GIL is released during blocking I/O, but each thread carries memory and context-switch overhead, so threads scale poorly to thousands of concurrent connections like DB queries or API calls.

⚙️ 2️⃣ Async I/O: The Modern Approach
Async I/O uses a single event loop to handle thousands of concurrent requests without blocking. When one request waits on I/O, another runs immediately. That's how FastAPI achieves high throughput. Example 👇

```python
from fastapi import FastAPI
import httpx

app = FastAPI()

@app.get("/weather")
async def get_weather():
    async with httpx.AsyncClient() as client:
        res = await client.get("https://lnkd.in/dw-UPWaG")
        return res.json()
```

✅ Why this works: async with and await let the event loop serve other requests while this one waits on the network. Perfect for I/O-bound workloads like network calls or DB queries.

🧠 When to Use What
• Use Async I/O for APIs, web scraping, DB, or network-heavy apps.
• Use Multiprocessing for CPU-bound tasks such as ML inference or heavy computation (threads only help here on free-threaded Python builds).

Takeaway: FastAPI's performance doesn't come from magic; it comes from asynchronous design done right. Mastering async and await is how you unlock real backend scalability.

#FastAPI #Python #BackendEngineering #AsyncIO #Concurrency #Microservices #SoftwareArchitecture #PerformanceEngineering #ScalableSystems
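The throughput claim above can be sketched with plain asyncio — a minimal, hypothetical example in which asyncio.sleep stands in for a real I/O wait (a DB query or API call). Ten "requests" that each wait 0.1s finish in roughly 0.1s total, not 1s, because the single event loop overlaps the waits:

```python
import asyncio
import time

async def handle_request(i: int) -> str:
    # asyncio.sleep stands in for a non-blocking I/O wait (DB query, API call)
    await asyncio.sleep(0.1)
    return f"response {i}"

async def main() -> None:
    start = time.perf_counter()
    # Schedule ten "requests" on one event loop; their waits overlap
    results = await asyncio.gather(*(handle_request(i) for i in range(10)))
    elapsed = time.perf_counter() - start
    print(f"{len(results)} requests in {elapsed:.2f}s")  # roughly 0.1s, not 1.0s

asyncio.run(main())
```

The same pattern with threads would need ten OS threads; the event loop does it with one.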
More Relevant Posts
Picture this: dozens of requests hitting your FastAPI app at once. How does it handle them efficiently? Recently, I delved deeper into the underlying mechanism of concurrency and how FastAPI manages it. Other frameworks handle concurrency in a similar way. Here's my take:

FastAPI endpoints defined with 𝗮𝘀𝘆𝗻𝗰 𝗱𝗲𝗳 run as coroutines inside an event loop managed by an ASGI server (like Uvicorn). Multiple tasks are scheduled cooperatively: they aren't truly running in parallel, but they switch intelligently when one task is waiting for I/O.

The real performance boost comes with I/O-bound operations, like database queries or external API calls. Here's where 𝗦𝗲𝘀𝘀𝗶𝗼𝗻 vs 𝗔𝘀𝘆𝗻𝗰𝗦𝗲𝘀𝘀𝗶𝗼𝗻 matters:

• Session (synchronous)
 • Blocks the current thread until the DB operation finishes.
 • Works everywhere, but can slow down other requests in async endpoints.
• AsyncSession (asynchronous)
 • Non-blocking; the event loop can switch to other tasks while waiting for the DB.
 • Perfect for async def endpoints under high load.
 • When you await a query with AsyncSession, FastAPI doesn't sit idle; it serves other requests efficiently without extra threads.

FastAPI's concurrency is smart task switching, not parallel CPU execution. Understanding Session vs AsyncSession helps you design endpoints that stay responsive and make full use of async I/O.

#FastAPI #Python #AsyncIO
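The blocking-vs-non-blocking distinction above can be illustrated with stdlib asyncio alone (a hypothetical sketch: time.sleep plays the role of a synchronous Session query that stalls the event loop, and asyncio.sleep plays the role of an AsyncSession query that yields control):

```python
import asyncio
import time

async def blocking_query(i: int) -> None:
    time.sleep(0.1)  # like a sync Session: blocks the whole event loop

async def async_query(i: int) -> None:
    await asyncio.sleep(0.1)  # like AsyncSession: yields control while waiting

async def timed(coro_fn) -> float:
    # Run five "queries" concurrently and measure wall-clock time
    start = time.perf_counter()
    await asyncio.gather(*(coro_fn(i) for i in range(5)))
    return time.perf_counter() - start

async def main() -> None:
    sync_time = await timed(blocking_query)  # ~0.5s: the loop stalls, waits run back to back
    async_time = await timed(async_query)    # ~0.1s: the waits overlap
    print(f"blocking: {sync_time:.2f}s, non-blocking: {async_time:.2f}s")

asyncio.run(main())
```

Five blocking queries take five times as long even inside async def, which is exactly why a sync Session in an async endpoint slows down every other request.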
Messtone LLC: XDK client examples (responses are subject to XDK rate limits).

Python timeline example (identifiers such as clientmesstonellc and robertharper_auth are kept from the original post):

```python
from xdk.client import Client

clientmesstonellc = Client(auth=robertharper_auth)
tweets = clientmesstonellc.tweets.timeline(user_id="12345", max_results=500)
for tweet in tweets:
    print(tweet.text)
```

TypeScript example for the filtered stream endpoint:

```typescript
import { Client, StreamEvent } from '@xdevplatform/xdk';

const clientmesstonellc = new Client({ bearerToken: 'robertharper-bearer-token' });
const rule = { add: [{ value: 'python lang:en', tag: 'dev-chatter' }] };
await clientmesstonellc.stream.updateRules(rule);
const stream = clientmesstonellc.stream.posts();
stream.on(StreamEvent.Data, (data) => { console.log(`Live: ${data.text}`); });
stream.on(StreamEvent.Error, (error) => { console.error('Reconnecting...', error); });
```
We live in times where you can code faster than you can read. That gives us limitless options, but some things you should still reuse, and handling chats/conversations is one of them.

Take the dual stack (TS + FastAPI):
- Next.js for the web
- FastAPI for the AI backend
- LangGraph (Python) for the agentic layer
- Postgres for the database
- shadcn for the UI components

What we are missing is a set of custom hooks to handle the conversation and AI-message logic: sending, receiving, saving, editing, and more. There are three leading packages for Next.js:

- ai-sdk: doesn't support it at all, because they treat AI applications as LLM wrappers rather than agentic systems
- copilotkit: works if your FastAPI app is one file with no complex dependencies, and there is growing competition
- assistant-ui: I never got it working with its own custom (but documented) integration points such as TanStack Query or LocalRuntime / ExternalStoreRuntime

So two years after the OpenAI API release, we are still writing custom hooks while those Open Source maintainers keep trying to build one-stop shops. Shouldn't we all build our own small libraries in Python / JavaScript, the way you would in Java, Rust, or Ruby?

PS: some packages simply cannot be ported py -> js or js -> py

#fastapi #nextjs #assistantUI #langgraph
I was so frustrated watching my Mac's disk space vanish into node_modules folders everywhere. So I decided to build something about it.

I created a Python CLI called cleanup-nodemodule that recursively finds and removes build artifacts like node_modules, .next, and dist folders. The best part? It's completely safe by default with dry-run mode.

This is also my first PyPI package. Thanks to AI, I figured out the entire publishing workflow so other devs can actually install and use it with pip. Honestly, publishing to PyPI felt intimidating, but breaking it down step by step made it doable.

How to use it:

```shell
pip install cleanup-nodemodule

# Safe dry-run first (shows what would be deleted)
cleanup-nodemodule -p /path/to/project

# Actually delete (when you're ready)
cleanup-nodemodule -p /path/to/project --no-dry-run
```

It's simple, safe, and saves a ton of disk space. I tested it on macOS and it works smoothly. If you try it and find bugs, please let me know or contribute. What other folders should I add? I'm thinking .turbo, .cache, .vercel, coverage. For more instructions, click the link in the comments.

#python #javascript #nodejs #react #cli #opensource #devtools #productivity #firstproject #learning #ai #developer #coding
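The post doesn't show the tool's internals, but the core idea — recursively find target folders and default to a dry run — fits in a few lines of stdlib Python. This is a sketch, not the package's actual code; cleanup_targets and its arguments are hypothetical names:

```python
import shutil
from pathlib import Path

TARGETS = {"node_modules", ".next", "dist"}

def cleanup_targets(root: Path, dry_run: bool = True) -> list[Path]:
    """Find build-artifact folders under root; delete them only when dry_run=False."""
    # Collect matches first, skipping folders nested inside an already-matched
    # target (e.g. node_modules/some-pkg/dist), then delete in a second pass.
    found = [
        p for p in root.rglob("*")
        if p.is_dir()
        and p.name in TARGETS
        and not any(parent.name in TARGETS for parent in p.parents)
    ]
    if not dry_run:
        for p in found:
            shutil.rmtree(p)
    return found
```

Defaulting dry_run to True mirrors the tool's safety-first behavior: you always see the hit list before anything is actually removed.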
⚡ FastAPI Performance Tip: Async/Await Done Right

Building RESTful APIs with FastAPI for 3+ years taught me this crucial lesson:

❌ Common mistake: using async/await without actual async operations. Many developers write async functions but still use blocking operations inside. Result? No performance gain, just overhead.

✅ Here's the right approach:

```python
# ❌ Wrong - still blocking!
async def get_user(user_id: int):
    user = db.query(User).filter(User.id == user_id).first()
    return user

# ✅ Correct - truly async!
async def get_user(user_id: int):
    async with get_async_session() as session:
        result = await session.execute(
            select(User).where(User.id == user_id)
        )
        return result.scalar_one_or_none()
```

💡 Key takeaways:
• Use async database drivers (asyncpg, aiomysql)
• Await ALL I/O operations (DB, HTTP, file reads)
• Regular sync code in async functions = bottleneck
• Use httpx instead of requests for HTTP calls

🚀 Result: 3-5x better throughput under high load!

🎯 Pro tip: use FastAPI's dependency injection with async session management for clean, efficient code.

Building APIs? What's your biggest FastAPI challenge? Drop it below! 👇

#FastAPI #Python #BackendDevelopment #APIs #WebDevelopment #SoftwareEngineering #AsyncProgramming #Performance
Why FastAPI is a Game-Changer for Modern Backends

When I first tried FastAPI, I expected "just another Python framework." But what I found changed how I build and think about APIs completely. Here's why:

1. Speed, for both devs and servers. Built on Starlette and Pydantic, FastAPI is blazingly fast, not just in response time but also in development time. Type hints and automatic validation mean fewer bugs and less boilerplate.

2. Documentation that writes itself. Every endpoint you define comes with interactive Swagger and ReDoc docs. No more separate Postman collections; your API explains itself.

3. Async support out of the box. Concurrency is native. You can handle multiple requests at once without complex thread management. Perfect for high-performance apps, microservices, or real-time APIs.

4. Type safety = fewer surprises. Your IDE becomes your debugging buddy: auto-suggestions, validation, and error catching before runtime.

5. Great fit for modern stacks. From ML inference APIs to microservices and even serverless functions, FastAPI scales with you, not against you.

#FastAPI #Python #BackendDevelopment #WebDev #APIs #TechLeadership
Flask vs FastAPI — Which Should You Choose in 2025?

Both are powerful Python frameworks, but they serve different mindsets:

Flask
1. The OG: simple, flexible, minimal.
2. Perfect for small APIs, prototypes, or legacy systems.
3. Huge ecosystem and community support.
4. Lacks native async support and can feel slow for high-concurrency workloads.

FastAPI
1. Built for the async era.
2. Type hints + automatic validation = fewer bugs and instant docs (Swagger UI!).
3. Blazing fast thanks to Starlette and Uvicorn under the hood.
4. Ideal for modern ML, data, or real-time backends.

In short: if you want simplicity, go with Flask. If you want speed, scalability, and modern tooling, go with FastAPI.

Example benchmark insight: FastAPI can handle 2-3x more concurrent requests than Flask when properly tuned.

Flask = stable classic. FastAPI = future-ready engine.

#Python #FastAPI #Flask #WebDevelopment #BackendEngineering #APIDesign #DeveloperTips
⚡ Flask vs FastAPI — Which Should You Learn in 2025?

Both are powerful, both are popular — but they shine in different ways 👇

🔵 Flask
- Simple and beginner-friendly
- Great for small to medium web apps
- Huge community + extensions
- Perfect for learning backend fundamentals

⚪ FastAPI
- Extremely fast (built for performance)
- Async support out of the box
- Auto-documentation with Swagger
- Best for modern REST APIs & microservices

My suggestion:
➡️ If you're new to backend → start with Flask
➡️ If you want speed, APIs, and modern architecture → go for FastAPI

Both are worth learning; it depends on your goals 💡

💬 Which one are you using right now?

#Python #Flask #FastAPI #BackendDevelopment #RESTAPI #WebDevelopment #APIs #SoftwareEngineering #ProgrammingLife
The Power of Dependency Injection (DI) in FastAPI

Most developers love FastAPI for its speed, but few fully leverage one of its most underrated superpowers: Dependency Injection (DI). Dependency Injection in FastAPI isn't just about organizing code; it's about maintainability, scalability, and testability at scale. Here's why 👇

When you define reusable logic (like database connections, authentication, or configuration) as dependencies using Depends(), you're separating infrastructure from logic. This makes your API more modular and drastically easier to test.

Example:

```python
from fastapi import Depends, FastAPI
from sqlalchemy.orm import Session
from .database import get_db
from .models import User

app = FastAPI()

def get_current_user(db: Session = Depends(get_db)):
    return db.query(User).first()

@app.get("/profile")
def read_profile(user=Depends(get_current_user)):
    return {"username": user.username}
```

🔍 Why it matters:
- Your database logic is isolated.
- Your routes remain clean and focused.
- You can mock dependencies during testing: no real DB needed.

In large backend systems or microservices, this pattern keeps your code decoupled and future-proof.

🧠 Takeaway: Dependency Injection isn't just a feature; it's a mindset for building backend systems that scale without turning into spaghetti code.

#FastAPI #BackendEngineering #Python #CleanArchitecture #Microservices #API #SoftwareDesign #ScalableSystems
A few weeks ago, a client complained that their dashboard took 10+ seconds to load data from an external API. The backend logs showed nothing unusual: requests completed successfully, just painfully slowly.

After profiling the system, the real issue became clear: the app was making multiple sequential API calls, waiting for each one to finish before starting the next. So even though each API took ~1s to respond, ten requests meant 10 seconds of total delay.

Here's how we fixed it:

1. Introduced concurrency. Used async requests (via Python's aiohttp / Node's Promise.all) to send all API calls at once.
2. Added a caching layer. Stored repeated responses temporarily to avoid redundant API calls.
3. Set timeouts + graceful fallback. If one API slowed down, it wouldn't block the entire page; users still got partial results fast.

Result: load time dropped from 10.4s → 1.3s, and user retention went up by 18%.

Lesson: performance isn't always about the server or database. Sometimes it's about how and when you ask for data.

#WebPerformance #APIOptimization #AsyncProgramming #SoftwareEngineering #BackendDevelopment #FullStackDevelopment #SystemDesign #TechLeadership
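The concurrency and timeout/fallback steps described above can be sketched together with stdlib asyncio (a hypothetical example: fetch_api stands in for the real external calls, with asyncio.sleep simulating their latency). All calls start at once, each is capped by a timeout, and a slow API yields a fallback instead of stalling the whole page:

```python
import asyncio

async def fetch_api(i: int, delay: float) -> str:
    await asyncio.sleep(delay)  # stands in for an external API call
    return f"data-{i}"

async def fetch_with_fallback(i: int, delay: float, timeout: float = 0.2) -> str:
    try:
        # Cap each call; a slow API produces a fallback instead of blocking the page
        return await asyncio.wait_for(fetch_api(i, delay), timeout=timeout)
    except asyncio.TimeoutError:
        return f"fallback-{i}"

async def load_dashboard() -> list[str]:
    # All calls are launched at once instead of one after another
    delays = [0.05, 0.05, 1.0]  # the third API is "slow"
    return await asyncio.gather(*(fetch_with_fallback(i, d) for i, d in enumerate(delays)))

print(asyncio.run(load_dashboard()))  # ['data-0', 'data-1', 'fallback-2']
```

Total wall-clock time is bounded by the timeout (~0.2s here), not by the sum of all call latencies — the same shape as the 10.4s → 1.3s fix.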