Path vs. Query Parameters — know the difference!

One of the most common questions when building APIs is: "Should this go in the URL path or as a query string?" In FastAPI, the distinction is clean and easy to implement.

📍 Path parameters identify a specific resource. Example: /users/{user_id}. Use these when the data is mandatory to find the object.

🔍 Query parameters handle filtering, sorting, or pagination. Example: /users?active=true&sort=desc. Use these for optional parameters that modify the results.

FastAPI is smart enough to distinguish them just by how you define your function arguments. If the argument appears in the path template, it's a path parameter. If it doesn't, it's a query parameter. Simple as that! 🚀

#Python #FastAPI #WebDevelopment #Backend #RESTAPI #CodingTips #30DaysOfFastAPI
A 40ms API became a 4ms API. Here's the only thing that changed.

We were making 3 separate DB queries to assemble a response. Each was fast in isolation. Together, they were sequential — each waited for the previous. The fix: run them concurrently. In Python (asyncio), this went from:

    result_a = await get_a()
    result_b = await get_b()
    result_c = await get_c()

To:

    result_a, result_b, result_c = await asyncio.gather(get_a(), get_b(), get_c())

That's it. No caching, no infra change, no complex refactor.

The mental model that helps: always ask "are these operations actually dependent on each other?" before assuming they need to run in sequence. Most API latency problems aren't hard — they're just unexamined.

#BackendDevelopment #PythonAsyncio #APIOptimization #SoftwareEngineering
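A runnable sketch of the before/after, with asyncio.sleep standing in for the DB queries (the function names follow the post; the delays and return values are made up):

```python
import asyncio

async def get_a():
    await asyncio.sleep(0.05)  # stand-in for an independent DB query
    return "a"

async def get_b():
    await asyncio.sleep(0.05)
    return "b"

async def get_c():
    await asyncio.sleep(0.05)
    return "c"

async def sequential():
    # Each await waits for the previous call: roughly 3x the single-query latency.
    return [await get_a(), await get_b(), await get_c()]

async def concurrent():
    # gather starts all three coroutines at once; total time is about one query.
    return list(await asyncio.gather(get_a(), get_b(), get_c()))

results = asyncio.run(concurrent())
```

Note that gather only helps when the calls are truly independent; if `get_b` needs `get_a`'s result, they must stay sequential.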
called the same API endpoint 5 times in a row.

without cache: 2.51s
with lru_cache: 0.50s

5x faster. two lines of code:

    @functools.lru_cache(maxsize=128)
    def fetch_user(user_id): ...

the cache info tells the real story: hits=4, misses=1. first call hits the actual API. next 4? served instantly from memory.

this is how production systems handle repeated expensive calls — user profiles, config lookups, ML model loads, anything that doesn't change every second.

lru_cache ships with Python. no libraries. just import functools. two lines between slow and fast.

#Python #Backend #DataEngineering #Performance
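A self-contained sketch of the pattern, with time.sleep standing in for the slow API call (the payload shape is illustrative):

```python
import functools
import time

@functools.lru_cache(maxsize=128)
def fetch_user(user_id):
    time.sleep(0.05)  # stand-in for a slow network round trip
    return {"id": user_id, "name": f"user-{user_id}"}

# Call the same endpoint 5 times in a row, as in the post.
for _ in range(5):
    fetch_user(42)

info = fetch_user.cache_info()
# info.hits == 4, info.misses == 1: only the first call does real work.
```

One caveat the post skips: lru_cache keys on the arguments, so this only works for hashable inputs, and the cache never expires on its own; stale data is the trade-off.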
.0158s vs .0005s for the cached version. So searching Bing: "does python lru cache return previous objects" — "Yes — Python's built-in functools.lru_cache returns the exact same object instance that was previously computed and cached, not a copy."

The overhead is in the object being recreated on each call, Python objects being known for slow creation. There are better options for performance, like writing the API in C++ with Pistache or Crow. Timing 4 million unique users each requesting their user info 3 times would be more informative than 5 repeated calls.

Given that the returned data is a user object whose score changes while the username stays constant, the code muddies two use cases together and needs refactoring. The username only needs to be sent the first time, then again only if it has been updated. The score is better sent via a socket or websocket if it changes in real time and requires server-side input to calculate, or not sent at all if it can be calculated client side. If it needs to be broadcast to other client peers with their responses relayed back, a message queue is needed; if the peers' responses do not matter, the main server can handle the broadcasting.

Database queries that cannot be answered by directly querying the database are not conducive to caching, and caching is not useful when results change infrequently or are only needed once or a few times at most. With fewer than 4 million users, giving each user their own database on a single server can be easier than writing APIs if the data is just database table views (and if the service is paid, reducing the risk of hacking from users, plus database caching can be shared across multiple client applications).
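The cached-object claim above is easy to verify directly; a small sketch showing that lru_cache hands back the same object instance, not a copy (the function and values here are illustrative):

```python
import functools

@functools.lru_cache(maxsize=None)
def load_user(user_id):
    # Without the cache, a fresh dict would be built on every call.
    return {"id": user_id, "score": 0}

first = load_user(1)
second = load_user(1)
# Both names point at the exact same cached object, so `is` holds,
# and mutating one is visible through the other. This is why caching
# mutable return values (like a changing score) is risky.
```

This is also the performance angle: the cached path skips object construction entirely, which is where much of the per-call overhead lives.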
🚀 Top FastAPI Packages You Should Know in 2026 (Part 2)

FastAPI already gives you speed. The real difference comes from the tools you add on top. These are a few packages I've been exploring recently while building APIs 👇

💡 Small observation: most performance gains don't come from FastAPI itself. They come from how you structure, cache, and protect your APIs.

If you're interested in MCP + FastAPI use cases, I recently explored it here: https://lnkd.in/dbMFET_A

Which one are you using in your projects right now?

#FastAPI #Python #BackendDevelopment #APIs #WebDevelopment #OpenSource #DeveloperTools
Just shipped a new feature in my VS Code extension, CallFlow Tracer: automatic trace summarization.

Performance traces usually give raw data, not answers. This feature turns complex call graphs into clear, actionable insights with one click:

- Identifies the slowest functions
- Shows exact time impact (percentages)
- Highlights bottleneck modules
- Suggests next optimization steps
- Provides complete trace statistics

No more manually analyzing hundreds of nodes or guessing where the issue is. You get a clean summary of what to fix and why.

Available now on the VS Code Marketplace — search "CallFlow Tracer".

What's one performance insight you wish tools gave you automatically?

#Python #VSCode #DeveloperTools #PerformanceOptimization #OpenSource
Gemma 4 dropped yesterday -- four open models from 2B to 27B, running locally via Ollama with native function calling -- a great step forward for local model development.

But if you've built pipelines on local models before, you know "supports function calling" and "returns valid output every time" aren't the same thing.

Paul Schweigert wrote a walkthrough on structured outputs with Gemma 4, pairing it with #Mellea, an open-source Python library where Instruct-Validate-Repair is a key pattern: declare a typed function, attach validation requirements, and automatically repair on failure.

👓 Worth a read if you're building on Gemma 4: https://lnkd.in/eDXjmZV4
🍄 Mellea: https://mellea.ai/
LeetCode 1647. Minimum Deletions to Make Character Frequencies Unique:

"A string s is called good if there are no two different characters in s that have the same frequency. Given a string s, return the minimum number of characters you need to delete to make s good. The frequency of a character in a string is the number of times it appears in the string. For example, in the string "aab", the frequency of 'a' is 2, while the frequency of 'b' is 1."

Approach: maintain two data structures — a hash table storing the frequency of each character, and a set storing the frequencies already accepted. Iterate over the hash table's values; for each frequency, check whether it is already in the set. If not, add it. Otherwise, decrease it until you reach a frequency that is unique (or zero), incrementing a deletion counter on each decrement, then add the final frequency to the set.

#LeetCode #Python #DSA #DataStructures #CompetitiveProgramming #Coding #Algorithms #Strings #HashMap #Sets #InterviewPrep
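The approach above can be sketched in Python roughly like this (Counter plays the role of the hash table):

```python
from collections import Counter

def min_deletions(s: str) -> int:
    deletions = 0
    used = set()  # frequencies already taken by some character
    for freq in Counter(s).values():
        # Decrement until this frequency is unique or hits zero,
        # counting one deletion per decrement.
        while freq > 0 and freq in used:
            freq -= 1
            deletions += 1
        used.add(freq)  # adding 0 is harmless: the loop never decrements below it
    return deletions
```

With at most 26 distinct letters and frequencies bounded by len(s), this runs in O(n + k^2) time for k distinct characters, which is effectively linear here.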
FastAPI just unlocked a massive performance ceiling. 🚀

With the official release of FastAPI 0.136.0 supporting free-threaded Python (no-GIL), I couldn't just read the changelog — I had to benchmark it.

I ran a controlled, head-to-head comparison using identical code and identical hardware:

⚙️ Python 3.12 (GIL) vs. Python 3.13.0t (no-GIL)

The result? A ~8x improvement in CPU-bound throughput. Same code. Same API. Zero changes.

This is a game-changer for anyone running:
🔹 ML inference APIs (real-time model serving)
🔹 Data processing & ETL workloads
🔹 CPU-intensive backend services

Is this the final nail in the coffin for the GIL bottleneck? Curious to hear what the Python backend community thinks.

#FastAPI #Python #NoGIL #PerformanceEngineering #BackendDevelopment #Concurrency #MachineLearning
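The post doesn't include its benchmark code; here is a minimal sketch of the kind of CPU-bound threading comparison that would expose the difference (the workload and sizes are arbitrary, and the speedup only appears on a free-threaded build):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def cpu_task(n: int) -> int:
    # Pure-Python CPU work; on a standard build this holds the GIL,
    # so threads serialize instead of running in parallel.
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_threaded(workers: int, n: int = 200_000) -> float:
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(cpu_task, [n] * workers))
    return time.perf_counter() - start

# On a GIL build, 4 workers take roughly 4x as long as 1;
# on a no-GIL build (e.g. 3.13.0t) they can finish in roughly the same time.
t1 = run_threaded(1)
t4 = run_threaded(4)
```

Whether a real FastAPI service sees ~8x depends on how much of each request is CPU-bound Python versus I/O; I/O-heavy endpoints already overlap under the GIL.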
While working with databases in FastAPI, one small feature saved me a lot of time and effort: using dictionaries to handle data efficiently.

Instead of manually writing each column while creating a new entry in the database, you can simply use:

    new_post = Post(**post.dict())

Here, post represents the request body (a Pydantic model), and .dict() converts all of its fields into a dictionary, which is then unpacked directly into the database model. (In Pydantic v2, .dict() is deprecated in favor of .model_dump().)

Why is this useful?
- No need to manually map each field
- Cleaner and more readable code
- Reduces the chance of missing or incorrect fields
- Speeds up development

This approach becomes especially powerful when your table (like "post") has multiple columns. Rather than repeating yourself, you let Python handle it smartly.

Small optimizations like this make backend development more efficient and enjoyable 🚀

#FastAPI #Python #BackendDevelopment #APIs #LearningJourney
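The trick is plain ** unpacking, so it works with any field dict; here is a stdlib-only sketch that uses dataclasses.asdict in place of the Pydantic model's .dict()/.model_dump() (the class names mirror the post but are illustrative stand-ins):

```python
from dataclasses import dataclass, asdict

@dataclass
class PostCreate:  # stand-in for the Pydantic request-body model
    title: str
    content: str
    published: bool = True

class Post:  # stand-in for the SQLAlchemy database model
    def __init__(self, title, content, published):
        self.title = title
        self.content = content
        self.published = published

body = PostCreate(title="Hello", content="First post")
# asdict(body) plays the role of post.dict(): every field becomes a
# keyword argument, so no column is mapped by hand.
new_post = Post(**asdict(body))
```

One caution: this couples the request schema's field names to the DB model's column names, so renaming a column silently requires a schema change too.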