called the same API endpoint 5 times in a row.
without cache: 2.51s
with lru_cache: 0.50s
5x faster. two lines of code.
import functools

@functools.lru_cache(maxsize=128)
def fetch_user(user_id):
    ...
the cache info tells the real story:
hits=4, misses=1
first call hits the actual API.
next 4? served instantly from memory.
this is how production systems handle repeated expensive calls — user profiles, config lookups, ML model loads, anything that doesn’t change every second.
lru_cache ships with Python. no libraries. just import functools.
two lines between slow and fast.
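a runnable sketch of the pattern, with a stand-in sleep for the slow API call (the real endpoint isn't shown in the post):

```python
import functools
import time

@functools.lru_cache(maxsize=128)
def fetch_user(user_id):
    time.sleep(0.5)              # stand-in for the slow API call
    return {"id": user_id, "name": f"user-{user_id}"}

for _ in range(5):
    fetch_user(42)

print(fetch_user.cache_info())   # CacheInfo(hits=4, misses=1, maxsize=128, currsize=1)
```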
#Python #Backend #DataEngineering #Performance
How about this: update the user data (the score in that example) after the first cache hit, then request it again. Or better still, run your Flask or FastAPI app in a production server (gunicorn or uvicorn) with more than 1 worker, where each process keeps its own separate cache. Then you are really in for a bad time 😅.
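to make the first failure mode concrete, a quick sketch (the dict is a hypothetical stand-in for the real data store): once the result is cached, updates to the underlying data are invisible to callers.

```python
import functools

_db = {42: {"id": 42, "score": 10}}  # stand-in for the real data store

@functools.lru_cache(maxsize=128)
def fetch_user(user_id):
    return _db[user_id]

first = fetch_user(42)
_db[42] = {"id": 42, "score": 99}    # data changes after the first call...
second = fetch_user(42)
print(second["score"])               # still 10, the stale cached value
```

and with multiple workers the problem compounds: each process holds its own cache, so two requests can see two different answers.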
Nice one! While lru_cache is a lifesaver, anyone on FastAPI or another async stack should look into alru_cache or a similar wrapper to avoid blocking the event loop. Standard library gems like functools are why Python remains top-tier for backend dev.
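a stdlib-only sketch of the idea behind alru_cache (the real package is async-lru; this hypothetical decorator just memoizes coroutine results in a dict, with no LRU eviction and no guard against two tasks racing on the same key):

```python
import asyncio
import functools

def async_cache(fn):
    cache = {}
    @functools.wraps(fn)
    async def wrapper(*args):
        if args not in cache:
            cache[args] = await fn(*args)  # await, so the event loop keeps running
        return cache[args]
    return wrapper

@async_cache
async def fetch_user(user_id):
    await asyncio.sleep(0.1)  # stand-in for a non-blocking I/O call
    return {"id": user_id}

async def main():
    await fetch_user(1)
    return await fetch_user(1)  # second call is served from the dict

print(asyncio.run(main()))
```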
0.0158s vs 0.0005s for the cached version. So searching Bing: "does python lru cache return previous objects"
"Yes — Python’s built‑in functools.lru_cache returns the exact same object instance that was previously computed and cached, not a copy"
The overhead is in recreating the object on each call; Python objects are known to be slow to create. There are better options for raw performance, like writing the API in C++ with Pistache or Crow. A more informative test would time 4 million unique users each requesting their user info 3 times.
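the same-object claim is easy to verify directly:

```python
import functools

@functools.lru_cache(maxsize=None)
def fetch_user(user_id):
    return {"id": user_id}

a = fetch_user(7)
b = fetch_user(7)
print(a is b)  # True: the cached call returns the identical object, not a copy
```

which is also why mutating a cached result is dangerous: every future caller sees the mutation.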
Reading that the returned data is a user object where the changing value is a score and the username is constant, the code needs refactoring: it muddies two use cases together. The username only needs to be sent the first time, and again only when it has been updated. The score is better sent via a socket or WebSocket if it changes in real time and needs the server to calculate it, or not sent at all if it can be calculated client side. If it needs to be broadcast to other client peers and their responses relayed back to the rest, a message queue is needed; if the peers' responses don't matter, the main server can handle the broadcasting itself.
Query results that can't be served by directly querying the database aren't good candidates for caching, and caching isn't useful for results that are only needed once or a few times at most. With fewer than 4 million users, giving each user their own database on a single server can be easier than writing APIs if the data is just database table views (and if the service is paid, which reduces the risk of hacking from users; database-level caching can also be shared across multiple client applications).
🚀 #100DaysOfPython – Day 3: Lambda Functions
👉 Lambda = small anonymous function (one line)
Example:
add = lambda a, b: a + b
print(add(2, 3)) # 5
Used commonly with:
nums = [1, 2, 3, 4]
squared = list(map(lambda x: x*x, nums))
✨ Short and quick
✨ Useful for simple operations
⚠️ But here’s the catch:
If your logic is more than one line → use a normal function.
🔍 My takeaway:
Lambdas are great for simple transformations, not for complex logic.
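Another spot where lambdas shine, as a sort key. A quick sketch:

```python
# Sort words by length, breaking ties alphabetically
words = ["banana", "fig", "apple", "date"]
words_sorted = sorted(words, key=lambda w: (len(w), w))
print(words_sorted)  # ['fig', 'date', 'apple', 'banana']
```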
Read more: https://lnkd.in/eSSCUfmi
#Python #Coding #100DaysOfCode #Developer
🚀 Python 3.13+ is a game-changer: free-threading (no-GIL mode) and an experimental JIT can boost multithreaded, CPU-bound code; early reports claim 2-5x on some workloads.
Tested a simple parallel sum script: about 3x faster than on 3.12. The JIT is still experimental and is expected to keep maturing in future releases. Here's the snippet:
# Free threading needs the free-threaded build (usually installed as python3.13t);
# there is no "-X free-threading" flag.
# Run with: python3.13t script.py   (set PYTHON_GIL=0 to force the GIL off)
import threading

def compute(n):
    return sum(i * i for i in range(n))

threads = [threading.Thread(target=compute, args=(10**7,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("Done!")
Who’s upgraded? Share your benchmarks below! 👇
#Python #Python313 #Programming
We think document extraction should be simple.
Fewer than 10 lines of Python to extract structured data from any document.
Define your schema, send a file, get JSON back. Uncertain fields get flagged and you decide what to do with them.
Learn how to define schemas: https://lnkd.in/g7TH8VmD
Day 41/100 – #100DaysOfCode 🚀
Solved LeetCode #2529 – Maximum Count of Positive Integer and Negative Integer (Python).
Today I practiced simple counting logic to determine whether the array contains more positive or more negative numbers.
Approach:
1) Initialize two counters: neg = 0 and pos = 0.
2) Traverse the array element by element.
3) If the number is negative, increment neg.
4) If the number is positive, increment pos.
5) Return the maximum of neg and pos.
Time Complexity: O(n)
Space Complexity: O(1)
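The steps above as a sketch (a single counting pass; since the input is sorted, a binary-search solution in O(log n) also exists):

```python
def maximum_count(nums):
    # Count negatives and positives; zeros count toward neither.
    neg = pos = 0
    for x in nums:
        if x < 0:
            neg += 1
        elif x > 0:
            pos += 1
    return max(neg, pos)

print(maximum_count([-2, -1, -1, 1, 2, 3]))  # 3
```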
Strengthening fundamentals with simple counting techniques 💪
#LeetCode #Python #DSA #Arrays #ProblemSolving #100DaysOfCode
We keep working on the Profile API in @atomic-ehr/codegen — someday we'll find the right shape. v0.0.11 is another step.
Python:
→ Generate Pydantic models with typed FHIR extensions — access extensions as normal fields
TypeScript:
→ apply() a profile to a resource — fixed values are set automatically, no manual boilerplate
→ Work with slices as arrays — add multiple elements at once instead of one by one
→ validate() now tells you exactly which fields are missing inside each slice
→ Just use plain objects — no wrapper functions, better type checking from the compiler
Open source, MIT. Works with any FHIR server.
Release notes: https://lnkd.in/dRn5ajG8
GitHub: https://lnkd.in/djbmz4kF
#FHIR #TypeScript #Python #OpenSource #HealthIT #DevEx
🚀 Day 6/30 of My LeetCode Journey (Python + SQL)
Consistency is slowly turning into confidence 💪📈
🔹 **Python Problem of the Day**
👉 *Plus One*
Given an integer represented as an array of digits, increment the number by one and return the resulting array.
💡 *Key Concept:* Handling carry from the last digit (especially edge cases like 9 → 10).
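A sketch of that carry handling (my own version, not necessarily the exact submission):

```python
def plus_one(digits):
    # Walk from the last digit, propagating the carry leftward.
    for i in range(len(digits) - 1, -1, -1):
        if digits[i] < 9:
            digits[i] += 1
            return digits
        digits[i] = 0  # a 9 rolls over to 0 and the carry continues
    # Every digit was 9, e.g. [9, 9] -> [1, 0, 0]
    return [1] + digits

print(plus_one([1, 2, 9]))  # [1, 3, 0]
```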
🔹 **SQL Problem of the Day**
👉 *Game Play Analysis I*
Given a table of player activity, write a query to find the first login date for each player.
💡 *Key Concept:* GROUP BY with MIN() to extract earliest dates.
Every day learning something new, refining logic, and improving speed ⚡
Day 6 done ✅
#LeetCode #30DaysChallenge #Python #SQL #CodingJourney #Consistency #ProblemSolving #Learning
Day 3/365: Comparing Two Strings Character by Character 🧵🧠
Today I worked on a simple but fundamental logic problem: checking if two strings are the same, without directly using a built-in equality check.
First, I compare the lengths of both strings. If lengths differ, they can’t be the same.
If lengths match, I loop through each index and compare characters one by one.
If any character is different, I break and print that the strings are not the same.
If the loop finishes without finding a mismatch, the else block of the for loop runs and prints that the strings are the same.
The interesting part is the for-else in Python.
The else only runs when the loop completes normally (no break).
This makes it a clean way to express: “if I didn’t find any mismatch in the entire loop, then the strings are equal.”
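The logic above in code, as a for-else sketch (returning a bool instead of printing, so it's easy to reuse):

```python
def same_strings(a, b):
    # Different lengths can never match.
    if len(a) != len(b):
        return False
    for i in range(len(a)):
        if a[i] != b[i]:
            break            # mismatch found, loop exits early
    else:
        return True          # else runs only when the loop finishes with no break
    return False

print(same_strings("cat", "cat"))  # True
print(same_strings("cat", "car"))  # False
```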
Day 3 done ✅
362 more to go.
#100DaysOfCode #365DaysOfCode #Python #LogicBuilding #StringComparison #ForElse #CodingJourney #LearnInPublic #AspiringDeveloper
LeetCode 1647. Minimum Deletions to Make Character Frequencies Unique: "A string s is called good if there are no two different characters in s that have the same frequency.
Given a string s, return the minimum number of characters you need to delete to make s good.
The frequency of a character in a string is the number of times it appears in the string. For example, in the string "aab", the frequency of 'a' is 2, while the frequency of 'b' is 1."
Approach: Maintain two data structures: a hash_table that stores the frequency of each character, and a set that stores the frequencies already accepted.
Iterate through the hash_table; for each frequency, check whether it is already in the set. If not, add it. Otherwise, decrease the value until you reach a unique frequency or zero, incrementing a count variable on each delete.
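That approach as a sketch:

```python
from collections import Counter

def min_deletions(s):
    freq = Counter(s)   # hash table of character frequencies
    used = set()        # frequencies already claimed by some character
    deletions = 0
    for count in freq.values():
        # Decrease until the frequency is unique or the character is gone.
        while count > 0 and count in used:
            count -= 1
            deletions += 1
        if count > 0:
            used.add(count)
    return deletions

print(min_deletions("aaabbbcc"))  # 2
```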
#LeetCode #Python #DSA #DataStructures #CompetitiveProgramming #Coding #Algorithms #Strings #HashMap #Sets #InterviewPrep
Most implementations of the State pattern in Python look very “clean”.
Lots of small classes. A base interface. One class per state.
But if you’ve ever worked with one in a real project, you know the downside: transitions are scattered, behaviour is hard to see in one place, and adding new states often means touching multiple files.
In today’s video, I rebuild the State pattern in a very different way. Instead of relying on inheritance, I make the state machine explicit as data and use decorators to define transitions. The result is a small, reusable engine where the entire flow becomes visible at a glance.
If you’re interested in writing Python that’s easier to reason about and extend, this is a pattern worth understanding.
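A minimal sketch of what "state machine as data plus decorator transitions" can look like (my own guess at the shape, not the video's exact code):

```python
class StateMachine:
    def __init__(self, initial):
        self.state = initial
        self.transitions = {}  # (state, event) -> handler returning the next state

    def on(self, state, event):
        """Decorator: register a handler for a (state, event) pair."""
        def register(fn):
            self.transitions[(state, event)] = fn
            return fn
        return register

    def send(self, event):
        handler = self.transitions.get((self.state, event))
        if handler is None:
            raise ValueError(f"no transition for {self.state!r} on {event!r}")
        self.state = handler()

machine = StateMachine("idle")

@machine.on("idle", "start")
def start():
    return "running"

@machine.on("running", "stop")
def stop():
    return "idle"

machine.send("start")
print(machine.state)  # running
```

the whole flow lives in one dict, so you can print `machine.transitions` and see every legal transition at a glance.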
👉 Watch here: https://lnkd.in/e9Y3xGNF
#python #softwaredesign #designpatterns #statemachine #cleancode