🚀 FastAPI just unlocked something big.

With FastAPI 0.136.0 officially supporting free-threaded Python (No-GIL), I wanted to test what actually changes in real-world APIs, beyond the hype. So I ran controlled benchmarks comparing:

🐍 Python 3.12 (with GIL)
⚡ Python 3.13.0t (No-GIL)

Same code. Same API. No changes.

💥 The result: ~8× improvement in CPU-bound performance.

That's a massive shift for:
🤖 ML inference APIs
📊 Data processing workloads
🧠 CPU-heavy backend systems
⚙️ High-performance backend services

This feels like a major step forward for Python backend performance. Curious to hear what others think about the No-GIL shift 👇

#FastAPI #Python #Performance #Backend #Concurrency #AI #NoGIL #ThinkAI
FastAPI Performance Boost with No-GIL Python 3.13.0
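For readers who want to reproduce the shape of this experiment, here is a minimal stdlib-only sketch of the kind of CPU-bound workload and thread-pool driver such a benchmark exercises (function names are my own, not from the original benchmark). On a GIL build the threads serialize; on a free-threaded build they can run on separate cores, which is where the reported speedup would come from:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def cpu_bound_task(n: int = 200_000) -> int:
    # Pure-Python arithmetic loop: exactly the kind of work the GIL
    # serializes across threads on a standard CPython build.
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_concurrent(workers: int = 4) -> float:
    # Fan the same task out over a thread pool and time the wall clock.
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(cpu_bound_task, [200_000] * workers))
    assert all(r == results[0] for r in results)  # sanity check
    return time.perf_counter() - start
```

Comparing `run_concurrent()` wall-clock time on 3.12 vs 3.13.0t is the simplest way to see whether the threads actually overlap.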
More Relevant Posts
🚀 FastAPI just unlocked something big.

With FastAPI 0.136.0 officially supporting free-threaded Python (No-GIL), I wanted to see what actually changes in real-world APIs. So I ran controlled benchmarks comparing:

• Python 3.12 (GIL)
• Python 3.13.0t (No-GIL)

Same code. Same API. No changes.

💥 Result: ~8× improvement in CPU-bound performance.

This shift is huge for:
• ML inference APIs
• Data processing workloads
• CPU-heavy backend systems

I've broken down the full experiment, setup, and results here:
👉 https://lnkd.in/dQdrr5gE

Curious to hear what others think about this shift 👇

#FastAPI #Python #Performance #Backend #Concurrency #AI
It’s amazing how the Python language has been developing over the years, and how the team behind FastAPI has been keeping up with these changes. By using a “modern” Python stack based on uv + FastAPI, developers have been getting so many “free” performance upgrades. This has real-world impact well beyond the developer experience.
Interesting to see FastAPI moving toward No-GIL 🔥 Curious how this would behave in real-world scenarios like document parsing or image processing, and what role ProcessPoolExecutor or multi-worker setups would still play.
🚀 FastAPI just unlocked something big.

With FastAPI 0.136.0 officially supporting free-threaded Python (No-GIL), I wanted to move beyond the hype and measure what actually changes in real-world APIs. So I ran controlled benchmarks comparing:

• Python 3.12 (GIL)
• Python 3.13.0t (No-GIL)

Same code. Same FastAPI app. Zero changes to the source.

🔬 How I benchmarked it: I isolated CPU-bound workloads, the kind the GIL historically serializes, and hit the endpoints with concurrent requests from a fixed thread pool. Both environments ran on identical hardware, with warm-up rounds to eliminate JIT noise. No async tricks, no multiprocessing: pure threading, the way most real backends actually work.

💥 Result: ~8× improvement in CPU-bound throughput under concurrency.

This isn't just a micro-benchmark win. It directly impacts:
• ML inference APIs serving parallel requests
• Data processing and transformation workloads
• CPU-heavy backend systems under real load

I've broken down the full experiment, setup, and results here:
👉 Medium post: https://lnkd.in/guUZEyiV

Curious: are you already running experiments with free-threaded Python, or waiting for broader ecosystem support? 👇

#FastAPI #Python #Performance #Backend #Concurrency #AI
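The methodology described above (fixed thread pool, identical hardware, discarded warm-up rounds) can be sketched as a small stdlib-only timing harness. This is my own illustrative sketch, not the author's actual benchmark code; `benchmark` and its parameters are hypothetical names:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import median

def benchmark(task, concurrency: int = 8, rounds: int = 3, warmup: int = 1) -> float:
    """Median wall-clock seconds to run `task` on `concurrency` threads at once."""
    timings = []
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for round_no in range(warmup + rounds):
            start = time.perf_counter()
            # Submit identical CPU-bound jobs and block until all finish.
            list(pool.map(lambda _: task(), range(concurrency)))
            if round_no >= warmup:  # discard warm-up rounds, as described
                timings.append(time.perf_counter() - start)
    return median(timings)
```

Running the same harness under both interpreters and comparing the medians is what would surface a throughput gap like the ~8× reported.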
FastAPI just unlocked a massive performance ceiling. 🚀

With the official release of FastAPI 0.136.0 supporting free-threaded Python (No-GIL), I couldn't just read the changelog: I had to benchmark it.

I ran a controlled, head-to-head comparison using identical code and identical hardware:
⚙️ Python 3.12 (GIL) vs. Python 3.13.0t (No-GIL)

The result? A ~8× improvement in CPU-bound throughput. Same code. Same API. Zero changes.

This is a game-changer for anyone running:
🔹 ML inference APIs (real-time model serving)
🔹 Data processing & ETL workloads
🔹 CPU-intensive backend services

Is this the final nail in the coffin for the GIL bottleneck? Curious to hear what the Python backend community thinks.

#FastAPI #Python #NoGIL #PerformanceEngineering #BackendDevelopment #Concurrency #MachineLearning
A plain LLM call says: "The answer to 6 × 7 is 42."
An agent calls multiply(6, 7), gets 42, then says: "The answer is 42."

One returns text. The other runs code.

We put together langgraph-zero-to-agent at Theseus AI Lab. Free, open source, 4 modules. Python basics are all you need to start.

Module 1: you wire the graph by hand. Every node, every edge, every tool binding.
Module 2: the same agent, rebuilt with create_agent(). You see what the abstraction actually buys you.
Module 3: a coding assistant with a local shell. It writes Python files and runs them on your machine.
Module 4: the OpenAI Responses API. Web search, code interpreter, reasoning built in.

🔗 https://lnkd.in/dpgwyqq8

#LangGraph #AIAgents #Python #OpenSource
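The multiply(6, 7) contrast above boils down to a tool-dispatch step: the model emits a structured call, the runtime executes real code, and the result is folded back into the reply. A minimal sketch of that step (my own names, not the course's API):

```python
def multiply(a: int, b: int) -> int:
    # A real Python function: the agent runs code instead of guessing.
    return a * b

# Registry mapping tool names the model can emit to callables.
TOOLS = {"multiply": multiply}

def run_tool_call(call: dict) -> str:
    """Dispatch a model-emitted tool call and fold the result into a reply."""
    result = TOOLS[call["name"]](*call["args"])
    return f"The answer is {result}."
```

This is the difference the post describes: the "42" comes from executing `multiply`, not from the model's text prediction.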
LLMs don't actually do anything. They generate text.

So when an "AI agent" queries a database, sends an email, or runs a command, something else is doing the real work: an orchestration layer most people never see.

I wrote a breakdown of what that layer actually is, the ReAct loop behind agent frameworks, built from scratch in ~40 lines of Python, with an interactive stepper to watch a full run end-to-end.

Check out my blog post about this subject:
→ https://lnkd.in/e3K94--D

#AIAgents #AgenticAI #LLM #AIEngineering #SoftwareEngineering
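The orchestration layer described above is a loop: the model proposes an action, the runtime executes it, the observation goes back into the prompt, and the loop repeats until the model emits a final answer. A compact illustrative sketch of such a ReAct-style loop (this is my own simplification, not the code from the linked post; the model here is any callable returning structured steps):

```python
def react_loop(model, tools, question, max_steps=5):
    """Thought -> Action -> Observation loop.

    `model` is any callable taking the transcript so far and returning
    either {"action": name, "input": value} or {"final": text}.
    """
    transcript = [f"Question: {question}"]
    for _ in range(max_steps):
        step = model("\n".join(transcript))
        if "final" in step:
            return step["final"]  # the model decided it is done
        # The runtime, not the model, executes the tool.
        observation = tools[step["action"]](step["input"])
        transcript.append(f"Action: {step['action']}({step['input']!r})")
        transcript.append(f"Observation: {observation}")
    raise RuntimeError("agent did not finish within max_steps")
```

Everything "agentic" lives in this plain Python loop; the LLM only ever returns text (here, a structured step).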
FastAPI + Python 3.13: the No-GIL era is here.

I've been tracking No-GIL (PEP 703) progress closely, and with FastAPI 0.136.0 officially supporting free-threaded Python, I had to put it to the test.

I ran controlled benchmarks comparing Python 3.12 (GIL) vs. Python 3.13.0t (No-GIL). The setup: same code, same API, zero modifications.

The result? A massive ~8× improvement in CPU-bound performance. 💥

This isn't just a benchmark win; it's a paradigm shift for:
• ML inference: faster local model serving.
• Data processing: real-time pipelines without the GIL bottleneck.
• High-concurrency backends: true multi-core execution.

The era of "Python is slow for CPU tasks" is officially being challenged.

I've documented the full experiment, setup, and raw data here:
👉 https://lnkd.in/gxf7tM3K
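Before trusting any GIL-vs-No-GIL comparison, it helps to confirm which mode each interpreter is actually running in, since a free-threaded 3.13 build can still have the GIL re-enabled at runtime. A small sketch using the build config var and, on 3.13+ free-threaded builds only, the private `sys._is_gil_enabled()` helper:

```python
import sys
import sysconfig

def gil_status() -> str:
    """Report whether this interpreter build can, and does, disable the GIL."""
    # Py_GIL_DISABLED is 1 only on free-threaded ("t") builds.
    free_threaded_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
    if not free_threaded_build:
        return "standard build (GIL always on)"
    # sys._is_gil_enabled() exists on 3.13+ free-threaded builds; the GIL
    # may be re-enabled there, e.g. by an incompatible extension module.
    if sys._is_gil_enabled():
        return "free-threaded build, but GIL re-enabled"
    return "free-threaded build, GIL disabled"
```

Printing `gil_status()` at startup in both environments rules out accidentally benchmarking two interpreters in the same mode.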
Graph Algorithms are behind many technologies we use all the time - like Google Maps or Netflix's recommendation engine. And in this guide, Oyedele teaches you about some of the most common ones. You'll learn about Breadth-First Search, Depth-First Search, Dijkstra's Algorithm, and more with Python code examples. https://lnkd.in/gfKK5fye
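As a taste of what the guide covers, here is a short Breadth-First Search sketch of my own (not taken from the guide) that finds a shortest path in an unweighted graph, the same primitive a route finder builds on:

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    """BFS over an adjacency-list dict; returns a shortest path as a list
    of nodes, or None if `goal` is unreachable from `start`."""
    queue = deque([[start]])  # queue of partial paths
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path  # BFS guarantees this is a shortest path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None
```

Because BFS explores nodes in order of distance from the start, the first path to reach the goal is guaranteed to be among the shortest (by edge count).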
We had a Python service where every HTTP request took exactly 15 seconds. Never 12, never 18. When every operation lands on the same number, you're not measuring the operation, you're measuring a timeout.

py-spy dump on a worker showed the main Python thread idle, the executor thread idle, and a hidden tokio-rt-worker thread parked on a futex. That third thread was the clue.

The HTTP client (primp, Rust + tokio under the hood) holds a tokio runtime that does not survive fork(). Anything using a prefork model, including Celery, gunicorn --preload, and multiprocessing with the fork start method, produces children with a broken runtime that hangs on every request until an outer timeout fires.

The fix was a one-line swap to curl_cffi (libcurl-based, fork-safe); end-to-end latency went from 15 s to 0.5 s with zero behavior change.

The takeaway I keep relearning: mixing native runtimes with fork() is a footgun, and a too-clean number in your metrics is a clue, not a feature.

#Python #Debugging #SoftwareEngineering
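The "too-clean number" heuristic above can even be automated: if a batch of latency samples has essentially zero spread, you are almost certainly looking at a timeout rather than the operation itself. A hypothetical helper sketching that check (my own name and threshold, not from the incident):

```python
from statistics import mean, pstdev

def looks_like_timeout(samples, tolerance=0.01):
    """Flag latency samples that all land on the same value.

    Real operations jitter; a relative spread under `tolerance` (1% by
    default) suggests you're measuring a timeout, not the work.
    """
    if len(samples) < 3:
        return False  # too few samples to judge spread
    return pstdev(samples) <= tolerance * mean(samples)
```

Wired into a metrics alert, a check like this would have flagged the "exactly 15 s, every time" pattern long before anyone reached for py-spy.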