🗄️ Slow query? Stop guessing. EXPLAIN ANALYZE shows you exactly what Postgres did, not what you think it did. 🔍 Seq Scan on a big table = missing index. Simple as that. #PostgreSQL #SQL #Database #Backend #WebDev
Slow Query? Check Missing Index
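A minimal sketch of that workflow, assuming a hypothetical orders table filtered on customer_id (names are illustrative, not from the original post):

EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;
-- If the plan shows "Seq Scan on orders" over millions of rows for a handful of matches,
-- an index on the filtered column is the usual fix:
CREATE INDEX idx_orders_customer_id ON orders (customer_id);
-- Re-run EXPLAIN ANALYZE and check that the plan switched to an Index Scan.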
Do you know how to master data retrieval and logic in PostgreSQL? FROM, SELECT, and Logical Operators are the three building blocks behind every SQL query you have ever written. I illustrated the full picture. Follow Nitin Rawat for daily PostgreSQL content 🔔 #PostgreSQL #SQL #LearnSQL #BackendDevelopment #Database #SQLForBeginners
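A small sketch of the three blocks working together (table and column names are made up for illustration):

SELECT name, city                             -- SELECT: which columns to return
FROM customers                                -- FROM: which table to read
WHERE (city = 'Delhi' OR city = 'Mumbai')     -- logical operators combine conditions
  AND NOT is_archived;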
PostgreSQL DBAs — stop writing SQL just to explore your database. These psql meta-commands will save you time every single day:
\l → databases
\dt → tables
\d table → structure
\timing → query speed
\x → expanded view
\? → everything
No complex queries. Just backslash and go. 🚀 📖 Want more commands like these? Explore the full cheat sheet here 👇 🔗 https://lnkd.in/gMZkBhC5 #PostgreSQL #mafiree #DBA #Database #SQL #DataEngineering #DevTips #BackendDev #OpenSource
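A quick illustrative session (appdb and users are placeholder names; output omitted):

\l            -- list databases
\c appdb      -- connect to one
\dt           -- list tables in the current schema
\d users      -- show the structure of the users table
\timing on    -- report execution time after each query
\x on         -- expanded (vertical) display for wide rows
\?            -- the full list of meta-commands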
You think your data is relational because you already visualize it in a normalized form. But most data starts as a document — a form, a message, an event. 🌱 If you store the document as-is, consistency is natural. You don't need foreign keys, and you rarely need multi-statement transactions. 🧩 If you normalize it, you must recreate that consistency with foreign keys, joins, and transactions. 💡 The real question for your OLTP database isn't #NoSQL vs #SQL, or #PostgreSQL vs MongoDB. It's whether you store the application's data aligned with its domain model — or normalize it to make it application-agnostic.
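A minimal sketch of the two approaches, assuming a hypothetical order document (all names illustrative):

-- Store the document as the application sees it: one INSERT, no foreign keys needed
CREATE TABLE orders_doc (
    id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    body jsonb NOT NULL    -- {"customer": {...}, "items": [...], "total": ...}
);

-- Normalize it: the same data split across tables, kept consistent
-- with foreign keys and a transaction per incoming order
CREATE TABLE orders (
    id          bigint PRIMARY KEY,
    customer_id bigint NOT NULL
);
CREATE TABLE order_items (
    order_id   bigint REFERENCES orders (id),
    product_id bigint NOT NULL,
    qty        int NOT NULL
);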
I put together a short presentation covering a few PostgreSQL design basics that can save a lot of pain later. It covers: 🎯 normalization 🎯 keys and relationships 🎯 SQL JOINs 🎯 delete behavior 🎯 many-to-many tables The focus is on practical schema design, not just writing queries. #PostgreSQL #SQL #DatabaseDesign #Normalization #BackendDevelopment #SoftwareEngineering
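As a rough sketch of how several of those topics fit together (a hypothetical students/courses schema, not taken from the presentation):

CREATE TABLE students (
    id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name text NOT NULL
);
CREATE TABLE courses (
    id    bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    title text NOT NULL
);
-- Many-to-many join table; delete behavior removes enrollments along with the student or course
CREATE TABLE enrollments (
    student_id bigint REFERENCES students (id) ON DELETE CASCADE,
    course_id  bigint REFERENCES courses (id)  ON DELETE CASCADE,
    PRIMARY KEY (student_id, course_id)
);
-- JOIN across the relationship
SELECT s.name, c.title
FROM students s
JOIN enrollments e ON e.student_id = s.id
JOIN courses c     ON c.id = e.course_id;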
"Built a complete Grocery Delivery DB in PostgreSQL — 6 tables, real data, 25 SQL queries from basic to advanced. Sharing for anyone learning SQL! Honestly? I don't know everything yet. Some queries I wrote myself. Some I struggled with. Some I took help to understand. But that's exactly where I am right now — learning, practicing, and being consistent. This document has all 25 queries sorted Easy → Hard, with the schema and everything clean. #SQL #PostgreSQL #DataAnalytics #LearningInPublic #SQLPractice #DataAnalyst
PostgreSQL Composite Index. Your composite index is NOT slow. Your query is. Most developers create composite indexes and expect magic. But one small mistake — and the database stops using it the way you think. This is what actually happens:
👉 The index is ordered in one direction, so it can efficiently handle only one range condition; everything after that becomes filtering work.
👉 Add a second independent range — and you lose that efficiency. The database will:
1. Scan more data.
2. Filter rows again and hit the table in random order.
Result: 🫠 slower queries, higher load, wasted performance. Composite indexes don’t make queries fast. Correct queries do. 🔥 #SQL #Databases #Performance #BackendDevelopment #SoftwareEngineering #mistakes #resolve
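A sketch of the difference, assuming a hypothetical orders table with columns created_at and total_amount (not from the original post):

CREATE INDEX idx_orders_created_total ON orders (created_at, total_amount);

-- One range on the leading column: the index scan is tightly bounded.
SELECT * FROM orders
WHERE created_at >= now() - interval '7 days';

-- Add a second, independent range: created_at still bounds the scan,
-- but total_amount > 100 can only filter the entries inside that range.
SELECT * FROM orders
WHERE created_at >= now() - interval '7 days'
  AND total_amount > 100;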
PostgreSQL Tip: Don’t Use GIN Index for “Normal” Data
I’ve seen this mistake quite often in performance tuning discussions — using GIN indexes on regular scalar columns like TEXT, INT, or VARCHAR. Let’s clear this up.
GIN (Generalized Inverted Index) is designed for:
- JSONB
- Arrays
- Full-text search (TSVECTOR)
It indexes elements inside values, not the value itself.
What happens if you use GIN on normal data?
- Slower INSERT/UPDATE operations
- Larger index size
- No performance gain for equality or range queries
- Query planner may ignore the index altogether
Use the right index for the right job:
- B-Tree → equality, joins, sorting
- GIN → JSONB, arrays, full-text search
- GIN + pg_trgm → LIKE / ILIKE '%search%'
- BRIN → very large, sequential datasets
Example (correct use case):
CREATE EXTENSION pg_trgm;
CREATE INDEX idx_name_trgm ON users USING GIN (name gin_trgm_ops);
Perfect for: WHERE name ILIKE '%raj%'
Bottom line: Using GIN on normal columns doesn’t just not help — it can actually hurt your database performance. Choose indexes intentionally. PostgreSQL gives you power — but only if you use it wisely.
#PostgreSQL #DatabaseOptimization #PerformanceTuning #BackendEngineering #DataEngineering #SQL #SoftwareArchitecture
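And the counter-example for ordinary scalar data (table and column names are illustrative):

-- Equality, ranges, sorting on a scalar column: B-Tree, not GIN
CREATE INDEX idx_users_email ON users (email);
SELECT * FROM users WHERE email = 'someone@example.com';

-- GIN belongs on values with elements inside them, e.g. a JSONB column
CREATE INDEX idx_users_prefs ON users USING GIN (preferences);
SELECT * FROM users WHERE preferences @> '{"newsletter": true}';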
38 days until PGConf.DE 2026. The precision of a NUMERIC type in some other databases goes up to 38 digits; 10^38 fits into 128 bits. In PostgreSQL, the declared precision of NUMERIC can go up to 1000 for exact numbers. https://2026.pgconf.de/ #PGConfDE #Essen #PostgreSQL #SQL
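A quick way to see the difference in practice (illustrative only):

-- A 50-digit exact value, well beyond the 38-digit ceiling of many other engines
SELECT 12345678901234567890123456789012345678901234567890::numeric AS fifty_digits;

-- Declared precision in PostgreSQL can go up to 1000
CREATE TABLE big_numbers (n numeric(1000, 0));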
You don't need to guess which index to add. PostgreSQL is already tracking everything you need. `pg_stat_user_indexes` tells you exactly how your indexes are being used. Here are the columns that matter:
- idx_scan -- How many times this index has been used for a scan. Zero means it's never been used since the last stats reset: you're paying write overhead and disk space for nothing.
- idx_tup_read -- How many index entries have been returned by scans. High reads with low scans means each scan is reading a lot of the index (possibly a sign of poor selectivity).
- idx_tup_fetch -- How many live table rows were fetched using this index. Compare with idx_tup_read: a big gap means many dead or invisible tuples.
Quick query to find unused indexes wasting space:
SELECT indexrelname AS index_name,
       relname AS table_name,
       idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC
LIMIT 10;
I've seen databases carrying hundreds of GB of indexes with zero scans. Run this query. You'll probably find at least one. #PostgreSQL #Database #SQL #DevOps #IndexOptimization
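If that query surfaces a never-used index and you've confirmed it isn't backing a unique constraint or serving a replica-only workload, the follow-up is simple (index name hypothetical):

-- Drop it without blocking writes to the table
DROP INDEX CONCURRENTLY idx_orders_legacy_status;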
Why Did PostgreSQL Consume 16 GB If work_mem Was Set to 64 MB?
The answer comes from a PostgreSQL core investigation by Tomas Vondra (pgsql-hackers). What actually happened:
- The query plan contained a Hash Join node with a huge expected row count and large width.
- PostgreSQL tried to stay within work_mem, but the Hash Join realized the data wouldn't fit in the allocated 64 MB.
- To solve this, it started splitting the data into smaller chunks (batches) and spilling them to temp files on disk.
- Due to the enormous data volume, PostgreSQL created 1 million batches.
Here's the math: 1 million batches = 1 million temp files for the hash table + 1 million files for the outer relation. Total: 2 million open BufFile objects. Each file gets an 8 KB buffer allocated in RAM. 2,000,000 × 8 KB = ~16 GB of memory!
This behavior can trigger the OOM killer on any PostgreSQL below version 16. The fix (committed to PostgreSQL 16+): https://lnkd.in/dGGq9WC3
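You can spot this pattern in a plan before it hurts: the Hash node in EXPLAIN ANALYZE output reports its batch count (table names and numbers below are illustrative):

EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM big_events e
JOIN big_users u ON u.id = e.user_id;
-- In the output, look at the Hash node, e.g.:
--   Buckets: 131072  Batches: 1048576  Memory Usage: ...
-- A huge Batches count means the hash join spilled into that many temp files,
-- and on versions below 16 each open file buffer costs extra RAM on top of work_mem.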