Mutable builtins in #Python cannot be dict keys. But instances of your (mutable) classes can!
class C:
    def __init__(self, x):
        self.x = x

c = C(10)
d = {c: 10}   # No error!
print(d[c])   # prints 10
You _can_, but I wouldn’t recommend it. Try this with your class in CPython 😕:
s = set()
for i in range(20000):
    h = hash(C(i))  # default hash is based on id(); each temporary C(i)
    s.add(h)        # is freed immediately, so its address gets reused
assert len(s) == 1  # all 20000 hashes collide!
But users beware! Another object with the same value for .x cannot be used to fetch the item from the dictionary; lookup requires the very same object, because the default hash and equality are identity-based. That can lead to surprises…
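One way around the identity trap, as a sketch: define __eq__ and __hash__ so that equal-valued instances are interchangeable keys. (Point here is a hypothetical class of mine, not from the post above.)

```python
# Hypothetical variant: give the class value semantics by defining
# __eq__ and __hash__, so equal-valued instances are interchangeable keys.
class Point:
    def __init__(self, x):
        self.x = x

    def __eq__(self, other):
        return isinstance(other, Point) and self.x == other.x

    def __hash__(self):
        return hash(self.x)

d = {Point(10): "payload"}
print(d[Point(10)])  # a different but equal object now finds the entry
```

Of course, if you then mutate .x while the object sits in a dict, its hash no longer matches its bucket and lookups break, which is exactly why mutable keys are discouraged.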
But this opens a whole new can of worms…
🚀 Python 3.13+ is a big step: free-threading (the experimental no-GIL build) and an experimental JIT can noticeably speed up CPU-bound multithreaded code. The gains are real for CPU-heavy tasks, though they vary by workload.
Tested a simple parallel sum script on a free-threaded build — about 3x faster than 3.12. The JIT is expected to stabilize in later releases. Here’s the snippet:
# Requires a free-threaded build of CPython (often installed as python3.13t);
# on such a build the GIL can also be toggled with -X gil=0 or PYTHON_GIL=0.
import threading

def compute(n):
    return sum(i * i for i in range(n))

threads = [threading.Thread(target=compute, args=(10**7,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("Done!")
Who’s upgraded? Share your benchmarks below! 👇 #Python #Python313 #Programming
Hugging Face just made GPU kernels shareable like a model.
The Kernel Hub allows Python libraries and applications to load compute kernels directly from the Hub. To support this kind of dynamic loading, Hub kernels differ from traditional Python kernel packages in that they are made to be:
1. Portable: a kernel can be loaded from paths outside PYTHONPATH.
2. Unique: multiple versions of the same kernel can be loaded in the same Python process.
3. Compatible: kernels must support all recent versions of Python and the different PyTorch build configurations (various CUDA versions and C++ ABIs). Furthermore, older C library versions must be supported.
https://lnkd.in/gqE5dkGv
Kernel Hub: https://lnkd.in/g7Bx9T2P
called the same API endpoint 5 times in a row.
without cache: 2.51s
with lru_cache: 0.50s
5x faster. two lines of code.
import functools

@functools.lru_cache(maxsize=128)
def fetch_user(user_id):
    ...
the cache info tells the real story:
hits=4, misses=1
first call hits the actual API.
next 4? served instantly from memory.
this is how production systems handle repeated expensive calls — user profiles, config lookups, ML model loads, anything that doesn’t change every second.
lru_cache ships with Python. no libraries. just import functools.
two lines between slow and fast.
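A minimal runnable sketch of the pattern. The 0.5s sleep stands in for the network round-trip, and this fetch_user body is illustrative, not the poster's actual code:

```python
import functools
import time

@functools.lru_cache(maxsize=128)
def fetch_user(user_id):
    time.sleep(0.5)  # stand-in for a slow API round-trip
    return {"id": user_id, "name": f"user-{user_id}"}

for _ in range(5):
    fetch_user(42)  # same argument, so only the first call is slow

info = fetch_user.cache_info()
print(info)  # CacheInfo(hits=4, misses=1, maxsize=128, currsize=1)
```

One caveat: lru_cache entries never expire on their own, so it fits data that is stable for the life of the process; for data with a freshness window you would need a TTL-style cache instead.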
#Python #Backend #DataEngineering #Performance
0.0158s vs 0.0005s for the cached version. So I searched Bing: "does python lru cache return previous objects"
"Yes — Python’s built‑in functools.lru_cache returns the exact same object instance that was previously computed and cached, not a copy"
The overhead comes from recreating the object on every call; Python object creation is known to be slow. For raw performance there are better options, such as writing the API in C++ with Pistache or Crow. Timing 4 million unique users each requesting their user info 3 times would be a more informative benchmark.
Given that the returned data is a user object whose score changes while the username stays constant, the code needs refactoring: it muddles two use cases. The username only needs to be sent once, and again only when it is updated. If the score changes in real time and needs server-side input to compute, it is better sent over a socket or websocket; if it can be computed client side, it need not be sent at all. If it must be broadcast to other client peers and their responses matter, a message queue is needed; if the peers' responses do not matter, the main server can handle the broadcasting.
Query results that cannot be served by querying the database directly are poor candidates for caching, and caching adds little when results change infrequently or are needed only a few times. With fewer than 4 million users, giving each user their own database on a single server can be easier than writing APIs, provided the data is just database table views (and if the service is paid, the risk of hacking by users is reduced, and database caching can be shared across multiple client applications).
Have you ever needed to add a math description for your Python function but found it time-consuming?
Non-programmers cannot easily read Python logic. However, manually converting it to LaTeX is slow and quickly becomes outdated as the code changes.
latexify_py solves this with a single decorator, generating LaTeX directly from your function so the math stays readable and always in sync with the code.
Key capabilities:
• Three decorators for different outputs: expressions, full equations, or pseudocode
• Displays rendered LaTeX directly in Jupyter cells
• Functions still work normally when called
Plus, latexify_py is open source! Install it with "pip install latexify-py".
🚀 Article on 3 tools that convert Python code to LaTeX: https://bit.ly/4dS4gOB
☕️ Run this code: https://bit.ly/4bW2ycE
#Python #LaTeX #DataScience
Day 41/100 – #100DaysOfCode 🚀
Solved LeetCode #2529 – Maximum Count of Positive Integer and Negative Integer (Python).
Today I practiced simple counting logic to determine whether positive or negative numbers appear more often in the array.
Approach:
1) Initialize two counters: neg = 0 and pos = 0.
2) Traverse the array element by element.
3) If the number is negative, increment neg.
4) If the number is positive, increment pos.
5) Return the maximum of neg and pos.
Time Complexity: O(n)
Space Complexity: O(1)
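The steps above in code (a sketch; the function name is mine, not from the post):

```python
from typing import List

def maximum_count(nums: List[int]) -> int:
    neg = 0
    pos = 0
    for n in nums:       # single pass: O(n) time, O(1) space
        if n < 0:
            neg += 1
        elif n > 0:
            pos += 1     # zeros count toward neither total
    return max(neg, pos)

print(maximum_count([-2, -1, -1, 1, 2, 3]))  # 3
```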
Strengthening fundamentals with simple counting techniques 💪
#LeetCode #Python #DSA #Arrays #ProblemSolving #100DaysOfCode
#Python threads do not always make code faster. This article explains where threading helps, where it slows things down, and why understanding workload type matters for better performance decisions.
#Threading #Blog #Famro #Informational
Read more: https://lnkd.in/dKX63JNb
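A quick way to see the distinction such articles draw: CPU-bound threads serialize on the GIL, while I/O-bound threads overlap because blocking calls release it. A minimal sketch (timings are illustrative, not a benchmark):

```python
import threading
import time

def io_task(delay):
    time.sleep(delay)  # blocking sleep releases the GIL, so threads overlap

start = time.perf_counter()
threads = [threading.Thread(target=io_task, args=(0.1,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
# Four 0.1s sleeps run concurrently, so the total is ~0.1s, not 0.4s.
print(f"4 overlapping sleeps took {elapsed:.2f}s")
```

A CPU-bound version of io_task (e.g. a tight arithmetic loop) would show little or no speedup from threads on a standard GIL build, which is exactly the workload distinction that matters.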
🚀 Built a PDF Text Extractor using Python & Streamlit!
I often needed a quick way to extract text from PDFs without heavy software. So, I built one myself.
📄 Upload any PDF, and it instantly extracts all the text from every page — clean and simple.
⚙️ The main challenge was handling multi-page PDFs accurately across different formats using PyPDF2.
🛠️ Tech Stack:
• Python 3.11.9
• Streamlit
• PyPDF2
🔗 GitHub: https://lnkd.in/gvFFf2yA
Would love your feedback and suggestions! 🙌
#Python #Streamlit #OpenSource #PythonDeveloper
Just shipped LLMPrice 🚀
A lightweight Python + TypeScript library for LLM pricing lookup.
- 2500+ models
- Offline-first
- Search / compare / CLI
- Auto-synced pricing data
Built for developers who want pricing data only, without pulling in a heavy LLM stack.
Links in first comment.
#Python #TypeScript #LLM #OpenSource #AIEngineering #DeveloperTools