Optimize Django Queries with Production Logging

Your Django app is lying to you.

Every slow query in production? Logged as this:

[WARNING] query took 1,847ms

That's it. No SQL. No plan. No cause. Just a number mocking you.

And you're expected to fix it. How?

Reproduce it locally? Good luck: your dev DB has 200 rows. Prod has 4 million.

Guess the index? Maybe. Probably wrong.

Wait for it to happen again and stare harder? This is what most teams actually do.

I got tired of this. So I built a 40-line interceptor that runs in production. Every slow query now logs this automatically:

→ Exact SQL
→ Execution time
→ Full EXPLAIN ANALYZE output
→ Buffer hits, seq scans, nested loops, all of it

Before I even open Slack.

How it works:

→ Hooks Django at the cursor level via the connection_created signal
→ Times every query with monotonic_ns (zero clock drift)
→ Slow? Fires EXPLAIN ANALYZE on a separate connection
→ Never touches your active transaction
→ Structured JSON, straight into your log pipeline

No dependencies. No middleware. No debug toolbar. No "works on my machine."

The rule I live by now: you cannot fix what you cannot see in production.

Not in dev. Not in staging. In production.

#django #python #postgres #backend #softwareengineering
