Speed Up API Calls with Python's lru_cache

Called the same API endpoint 5 times in a row.

Without cache: 2.51s. With lru_cache: 0.50s. 5x faster, with two lines of code:

@functools.lru_cache(maxsize=128)
def fetch_user(user_id): ...

The cache info tells the real story: hits=4, misses=1. The first call hits the actual API; the next 4 are served instantly from memory.

This is how production systems handle repeated expensive calls — user profiles, config lookups, ML model loads, anything that doesn't change every second.

lru_cache ships with Python. No extra libraries, just import functools. Two lines between slow and fast.

#Python #Backend #DataEngineering #Performance
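The numbers above can be reproduced with a small self-contained sketch. The `fetch_user` body and the 0.5 s delay are stand-ins for a real HTTP request, not an actual endpoint:

```python
import functools
import time

@functools.lru_cache(maxsize=128)
def fetch_user(user_id):
    # Simulated slow API call (placeholder for a real HTTP request).
    time.sleep(0.5)
    return {"user_id": user_id, "name": f"user-{user_id}"}

start = time.perf_counter()
for _ in range(5):
    fetch_user(42)
elapsed = time.perf_counter() - start

# Only the first call pays the 0.5 s cost; the rest are cache hits.
info = fetch_user.cache_info()
print(f"{elapsed:.2f}s, hits={info.hits}, misses={info.misses}")
```

Note that arguments to an lru_cache-decorated function must be hashable, since they become the cache key.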


How about this: update the user data (the score in that example) after the first cache hit, then request it again. Or better still, run your Flask or FastAPI app in a production server (Gunicorn or Uvicorn) with more than 1 worker — each worker process gets its own cache. Then you are really in for a bad time 😅.
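The stale-read this comment describes is easy to demonstrate. In this sketch a hypothetical in-memory `db` dict stands in for the API's backend data store:

```python
import functools

# Hypothetical in-memory "database" standing in for the real API backend.
db = {42: {"user_id": 42, "score": 10}}

@functools.lru_cache(maxsize=128)
def fetch_user(user_id):
    return db[user_id].copy()

first = fetch_user(42)    # miss: reads score=10 from the backend
db[42]["score"] = 99      # backend updates the score...
stale = fetch_user(42)    # hit: still returns the cached score=10

fetch_user.cache_clear()  # blunt fix: drop the whole cache
fresh = fetch_user(42)    # miss again: now sees score=99
print(stale["score"], fresh["score"])
```

cache_clear() only fixes the process it runs in — with multiple Gunicorn/Uvicorn workers, every other worker keeps serving its own stale copy, which is why shared caches (e.g. Redis) with TTLs are the usual production answer for mutable data.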

Nice one! While lru_cache is a lifesaver, anyone working with FastAPI or another async stack should look into alru_cache (from the async-lru package) or similar wrappers to avoid blocking the event loop. Standard library gems like functools are why Python remains top-tier for backend dev.
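alru_cache is third-party; a stdlib-only alternative is to keep the synchronous lru_cache and push the blocking call into a worker thread with asyncio.to_thread. A minimal sketch — `fetch_config` and its 0.2 s delay are invented for illustration:

```python
import asyncio
import functools
import time

@functools.lru_cache(maxsize=128)
def fetch_config(key):
    # Blocking call; stands in for a synchronous HTTP request.
    time.sleep(0.2)
    return f"value-for-{key}"

async def get_config(key):
    # Run the blocking, cached call in a worker thread so the
    # event loop stays free while the first (uncached) call waits.
    return await asyncio.to_thread(fetch_config, key)

async def main():
    first = await get_config("db_url")   # cache miss: pays the 0.2 s cost
    second = await get_config("db_url")  # cache hit: near-instant
    return first, second

first, second = asyncio.run(main())
```

One caveat: lru_cache is thread-safe but does not deduplicate concurrent misses, so several simultaneous first calls for the same key may each hit the backend once.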


