I built the fastest Python logging framework.
446K ops/sec. 2.7x faster than stdlib. 20% faster than Microsoft's picologging, which is written in C.
It's a one-line migration:
import logging → from logxide import logging
Same getLogger(). Same format strings. Flask, Django, FastAPI all work.
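A minimal sketch of what "same API" means, using only stdlib logging (the logxide swap is shown as a comment; API parity is the project's claim, not verified here):

```python
import logging
# Per the post, the drop-in swap would be:
#   from logxide import logging
# (assumes logxide mirrors the stdlib API, as the project claims)

logging.basicConfig(format="%(levelname)s %(name)s: %(message)s", level=logging.INFO)
logger = logging.getLogger("app")
logger.info("user %s logged in", "alice")
```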
Sentry and OTLP are built in. Zero config.
Wrote up the production guide with copy-paste examples.
⬇️ See comment
#Python #Rust #OpenSource
Why I call FastAPI the "Frankenstein" of Python frameworks. 🧟‍♂️
Most frameworks give you a finished product. FastAPI gives you a laboratory.
You choose the heart (Database), the brain (Logic), and the limbs (Dependencies). You stitch them together. The result?
Either a high-performance masterpiece or an ill-shaped disaster.
In my latest article, I break down how to "build your monster" the right way using:
✅ Modular structure
✅ SQLModel & Alembic
✅ Async operations
Find the article link in the first comment
One thing that significantly improved my Python code quality:
Static analysis is not optional at scale.
For a long time, I relied on code reviews to catch issues.
Eventually, I realized something:
👉 Humans are bad at consistently spotting patterns.
👉 Tools are not.
That’s where static analysis changed everything.
Without running the code, these tools analyze your source and detect:
bugs
code smells
complexity issues
type inconsistencies
All before production
The combination that worked best for me:
Ruff → fast linting and code quality
Replaces multiple tools (flake8, isort, etc.) and runs extremely fast
Mypy → type checking
Uses type hints to catch bugs before runtime, bringing discipline to Python’s dynamic nature
Radon → complexity analysis
Measures cyclomatic complexity and highlights functions that are hard to maintain.
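A minimal pyproject.toml sketch wiring up the first two tools (the rule selections here are illustrative assumptions, not the author's config):

```toml
# pyproject.toml -- illustrative starting point
[tool.ruff]
line-length = 100

[tool.ruff.lint]
select = ["E", "F", "I"]  # pycodestyle errors, pyflakes, import sorting

[tool.mypy]
python_version = "3.12"
strict = true
```

Radon is typically run from the CLI, e.g. `radon cc src/ -a` for cyclomatic complexity with an average.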
#Python #StaticAnalysis #BackendEngineering #Django #CleanCode #SoftwareEngineering #DevOps
Stateful UDFs just changed how Python scales.
With @daft.cls, you can turn any Python class into a distributed operator that initialises once per worker and reuses state across every row.
That means models, API clients, and database connections no longer get rebuilt on every call.
The mental model stays simple: write normal Python classes, add a decorator, and Daft handles execution, scheduling, and parallelism.
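The once-per-worker idea can be sketched in plain Python (this shows the pattern, not Daft's actual API; all names below are made up):

```python
import functools

class UserEnricher:
    # Toy stand-in for the idea behind @daft.cls: pay for setup once,
    # then reuse the state for every row. Names here are hypothetical.
    init_count = 0

    def __init__(self):
        type(self).init_count += 1          # expensive setup happens here
        self.client = {"connected": True}   # pretend model / API client

    def __call__(self, row: dict) -> dict:
        return {**row, "enriched": self.client["connected"]}

@functools.lru_cache(maxsize=None)
def worker_instance() -> UserEnricher:
    # one cached instance per process, like once-per-worker initialisation
    return UserEnricher()

rows = [{"id": i} for i in range(3)]
out = [worker_instance()(r) for r in rows]  # __init__ ran exactly once
```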
Find out more:
https://lnkd.in/e79SePbN
#PythonScaling #DaftCls #DistributedComputing #PythonClasses
Ever tried posting nested data in DRF and hit the dreaded “writable nested fields not supported” error?
In my latest article, I break down how to fix it cleanly — from overriding create() to adding transaction safety and optimizing with bulk_create().
You’ll learn:
Why nested serializers fail on POST
How .pop('items') saves your day
The right way to separate read/write serializers
Production-ready patterns for clean, scalable APIs
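The pop-then-create shape can be sketched in plain Python (in real DRF this logic lives in a serializer's create(); the Order payload and field names here are hypothetical):

```python
def create_order(validated_data: dict) -> dict:
    items = validated_data.pop("items", [])  # detach the nested data first
    order = {"id": 1, **validated_data}      # create the parent
    order["items"] = [                       # then the children, in one pass
        {"order_id": order["id"], **item} for item in items
    ]
    return order

payload = {"customer": "alice", "items": [{"sku": "A1"}, {"sku": "B2"}]}
order = create_order(payload)
```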
Read the full breakdown here 👇
🔗 https://lnkd.in/d-3qARwR
#Django #RESTFramework #Python #BackendDevelopment #APIs #SoftwareEngineering
Every framework you have ever used is just design patterns written in production code.
Day 06 of 30 -- Design Patterns in Python Advanced Python + Real Projects Series
Django post_save is the Observer pattern. DRF renderer_classes is the Strategy pattern. logging.getLogger() is the Singleton pattern. @app.route is the Decorator pattern.
Most developers use all of these every day without knowing the names.
Today's Topic covers:
- Why patterns exist and the 3-category decision framework
- 6 patterns every Python backend developer must know
- Singleton with double-checked locking for thread safety
- Factory with a self-registering decorator pattern
- Observer event bus with decorator-based subscriptions
- Strategy using typing.Protocol for structural subtyping
- Real scenario -- Factory + Strategy + Observer in one order pipeline
- 6 mistakes, including pattern hunting and Observer without error isolation
- 5 best practices, including why Python functions are strategies
Key insight: Design patterns are not solutions you add to code. They are names for solutions already in your code.
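One of the listed patterns as a runnable sketch -- Strategy via typing.Protocol (the class and function names are illustrative, not from the article):

```python
from typing import Protocol

class Discount(Protocol):          # Strategy contract via structural subtyping
    def apply(self, total: float) -> float: ...

class NoDiscount:
    def apply(self, total: float) -> float:
        return total

class PercentOff:
    def __init__(self, pct: float) -> None:
        self.pct = pct
    def apply(self, total: float) -> float:
        return total * (1 - self.pct)

def checkout(total: float, strategy: Discount) -> float:
    # neither class inherits from Discount; matching the shape is enough
    return strategy.apply(total)

checkout(100.0, PercentOff(0.10))  # 90.0
```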
Phase 1 complete -- 6 days of Python internals done.
#Python #DesignPatterns #SoftwareEngineering #BackendDevelopment #Django #FastAPI #100DaysOfCode #PythonDeveloper #TechContent #BuildInPublic #TechIndia #CleanCode #PythonProgramming #LinkedInCreator #LearnPython #PythonTutorial
I’ve published my first technical article: a walkthrough of the SOLID principles—with Python examples.
It started as “I’ve heard these letters everywhere—what do they actually mean in code?” Turning that into something concrete helped me more than skimming another diagram.
In the post I break things down into bite-sized pieces, including:
• Single Responsibility: One job per module—easier to reason about and change.
• Open/Closed: Extend behavior without rewriting existing code.
• Liskov Substitution: Subtypes that don’t break expectations.
• Interface Segregation: Small, focused contracts instead of fat interfaces.
• Dependency Inversion: Depend on abstractions, not concrete details.
Beyond the theory, each section includes short Python snippets so the ideas map to something you can run and tweak—not just memorize.
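For instance, Dependency Inversion in a few lines (a sketch with made-up names, in the spirit of the post's snippets):

```python
from abc import ABC, abstractmethod

class Notifier(ABC):                 # abstraction the business logic depends on
    @abstractmethod
    def send(self, msg: str) -> None: ...

class EmailNotifier(Notifier):       # concrete detail, swappable for SMS, Slack...
    def __init__(self) -> None:
        self.sent: list[str] = []
    def send(self, msg: str) -> None:
        self.sent.append(msg)

def alert(notifier: Notifier, msg: str) -> None:
    notifier.send(msg)               # high-level code never names EmailNotifier

n = EmailNotifier()
alert(n, "disk almost full")
```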
The full post is here:
https://lnkd.in/gFXSE4d9
#SoftwareEngineering #SOLID #Python #CleanCode #OOP #DesignPatterns
𝗣𝘆𝘁𝗵𝗼𝗻 𝗠𝗲𝘁𝗮𝗰𝗹𝗮𝘀𝘀𝗲𝘀 𝗘𝘅𝗽𝗹𝗮𝗶𝗻𝗲𝗱
Everything in Python is an object.
Classes are objects too.
They are instances of a metaclass.
The default metaclass is type.
It builds your classes.
Call type to create a class without the class keyword.
Python follows four steps to build a class:
- Find the metaclass.
- Set up the namespace.
- Run the class body.
- Create the class object.
Use __new__ to change the class before it exists.
Use __init__ to record the class after it exists.
Most developers do not need metaclasses.
Use __init_subclass__ instead.
It handles registration and interface checks.
It is simpler to read.
Avoid metaclasses in app code.
They are hard to debug.
Use them for frameworks.
Otherwise, use decorators.
The best metaclass is the one you do not write.
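The two key points above, runnable (the plugin names are illustrative):

```python
# Creating a class without the `class` keyword, via the default metaclass:
Point = type("Point", (), {"x": 0, "y": 0})
p = Point()

# The lighter alternative the post recommends: __init_subclass__
class PluginBase:
    registry: dict[str, type] = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        PluginBase.registry[cls.__name__] = cls  # subclasses auto-register

class CsvPlugin(PluginBase):
    pass
```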
Source: https://lnkd.in/gMfJU9Nx
called the same API endpoint 5 times in a row.
without cache: 2.51s
with lru_cache: 0.50s
5x faster. two lines of code.
import functools

@functools.lru_cache(maxsize=128)
def fetch_user(user_id):
    ...
the cache info tells the real story:
hits=4, misses=1
first call hits the actual API.
next 4? served instantly from memory.
this is how production systems handle repeated expensive calls — user profiles, config lookups, ML model loads, anything that doesn’t change every second.
lru_cache ships with Python. no libraries. just import functools.
two lines between slow and fast.
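a self-contained version of the experiment (the sleep stands in for the network call; the payload is hypothetical and timings will differ):

```python
import functools
import time

@functools.lru_cache(maxsize=128)
def fetch_user(user_id):
    time.sleep(0.01)                         # stand-in for the slow API call
    return {"id": user_id, "name": "alice"}  # hypothetical payload

for _ in range(5):
    fetch_user(42)

info = fetch_user.cache_info()               # hits=4, misses=1
```

one caveat: lru_cache returns the same cached object, not a copy, so mutating the result mutates the cache.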
#Python #Backend #DataEngineering #Performance
0.0158s vs 0.0005s for the cached version. So I searched Bing: "does python lru cache return previous objects"
"Yes — Python’s built‑in functools.lru_cache returns the exact same object instance that was previously computed and cached, not a copy"
The overhead is in recreating the object on each call, since Python objects are known to be slow to create. For raw performance there are better options, such as writing the API in C++ with Pistache or Crow. Timing 4 million unique users each requesting their user info 3 times would be more informative.
Given that the returned data is a user object whose only changing value is a score (the username is constant), the code needs refactoring: it muddles two use cases together. The username only needs to be sent the first time, and again only if it is updated. The score is better sent via a socket or websocket if it changes in real time and requires server input to calculate, or not sent at all if it can be calculated client-side. If it must be broadcast to other client peers with their responses relayed back, a message queue is needed; if the peers' responses do not matter, the main server can handle the broadcasting.
Database queries that cannot be answered by querying the database directly are not conducive to caching, and caching is not useful if the results change infrequently or are only needed once or a few times at most. With fewer than 4 million users, giving each user their own database on a single server can be easier than writing APIs if the data is just database table views (and if the service is paid, which reduces the risk of hacking by users; database caching can also be shared across multiple client applications).
Source: https://github.com/Indosaram/logxide
Blog post: https://devbull.xyz