Optimizing PostgreSQL Queries from 8s to 180ms

Our PostgreSQL queries were taking 8 seconds. I got them under 200ms. Here's exactly what changed and what didn't.

Context: we were running a multi-tenant SaaS. As the user base grew, a core query started degrading badly. Users noticed. It had to be fixed.

Step 1: EXPLAIN ANALYZE before touching anything

```sql
EXPLAIN ANALYZE
SELECT * FROM bookings
WHERE tenant_id = $1 AND status = 'confirmed'
ORDER BY created_at DESC;
```

The output showed a sequential scan over 2M rows: there was no index covering tenant_id + status. (Footnote 1 below sketches the before/after plan shapes.)

Step 2: Composite index

```sql
CREATE INDEX idx_bookings_tenant_status
ON bookings (tenant_id, status, created_at DESC);
```

Equality columns first, then the sort column: one index can satisfy both the WHERE clause and the ORDER BY. This alone dropped the query from 8s to 900ms. Still not good enough. (Footnote 2 has a note on building an index like this on a live table.)

Step 3: Stop selecting *

The query was pulling 40 columns. The UI needed 6.

```sql
SELECT id, title, start_time, end_time, status, attendee_id
FROM bookings
WHERE tenant_id = $1 AND status = 'confirmed'
ORDER BY created_at DESC
LIMIT 50;
```

Down to 350ms.

Step 4: N+1 was hiding in the ORM

For each booking, the ORM was firing a separate query to fetch the attendee. I switched to a JOIN. (Footnote 3 sketches the before/after query pattern.)

```sql
SELECT b.id, b.title, u.name AS attendee_name
FROM bookings b
JOIN users u ON u.id = b.attendee_id
WHERE b.tenant_id = $1 AND b.status = 'confirmed'
ORDER BY b.created_at DESC
LIMIT 50;
```

Final result: 180ms.

What didn't help:
- Connection pooling tuning (it was already fine).
- Adding a cache (premature; fix the query first).
- Switching databases (never needed).

8 seconds → 180ms. No infrastructure changes. Just understanding what the database was actually doing.

What's your go-to first step when a query goes slow?

#PostgreSQL #SQL #DatabaseOptimization #QueryOptimization #BackendEngineering #FullStackDevelopment #SoftwareEngineering
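
Footnote 1: the plan shapes. I've reconstructed these as comments below; the node names are what matter, the details are illustrative rather than the exact output from this incident, and tenant id 42 is a stand-in value.

```sql
-- Re-run the step 1 diagnostic after creating the index. Note that
-- EXPLAIN ANALYZE actually executes the query, so use a real tenant id.
EXPLAIN ANALYZE
SELECT * FROM bookings
WHERE tenant_id = 42 AND status = 'confirmed'
ORDER BY created_at DESC;

-- Before the index, the plan had two red flags: a Seq Scan over the
-- whole table, plus an explicit Sort on top of it.
--
--   Sort
--     Sort Key: created_at DESC
--     ->  Seq Scan on bookings
--           Filter: ((tenant_id = 42) AND (status = 'confirmed'))
--
-- After the index, both disappear at once: the index satisfies the
-- filter, and since it stores created_at DESC, no Sort node is needed.
--
--   Index Scan using idx_bookings_tenant_status on bookings
--     Index Cond: ((tenant_id = 42) AND (status = 'confirmed'))
```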
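
Footnote 2: building the index on a live table. Plain CREATE INDEX blocks writes to the table while it builds, which can hurt at this size. The CONCURRENTLY variant is standard PostgreSQL; treat this as a general sketch, not a claim about what we ran here.

```sql
-- Builds the same index without blocking concurrent writes.
-- Trade-offs: slower to build, cannot run inside a transaction block,
-- and leaves an INVALID index behind on failure (drop it and retry).
CREATE INDEX CONCURRENTLY idx_bookings_tenant_status
ON bookings (tenant_id, status, created_at DESC);
```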
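
Footnote 3: what the N+1 looked like on the wire. This reconstructs the query pattern the ORM was emitting before the JOIN in step 4; it's a sketch of the shape, not a verbatim log, and the ids are made up.

```sql
-- Query 1: fetch the page of bookings...
SELECT id, title, attendee_id
FROM bookings
WHERE tenant_id = 42 AND status = 'confirmed'
ORDER BY created_at DESC
LIMIT 50;

-- ...then one lazy load per row to resolve the attendee.
-- With LIMIT 50, that's 51 round trips instead of 1.
SELECT id, name FROM users WHERE id = 1001;
SELECT id, name FROM users WHERE id = 1002;
-- ... 48 more of these, one per booking
```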
