When LIMIT Doesn't Make Queries Fast in PostgreSQL

I thought LIMIT made queries fast… until it didn't.

I used to think: if I add LIMIT 10, the query should be fast. But in one case, even with LIMIT, the query was still slow. So I checked EXPLAIN ANALYZE. What I saw was something like:

Limit → Sort → Join → Seq Scan

Which basically means the database was:
* scanning a large dataset
* doing joins
* sorting everything
* and only then applying LIMIT

So LIMIT was just reducing the output, not the actual work.

That's when it clicked for me: LIMIT helps only when the plan can apply it early. Something like:

Index Scan → Limit

Here the database:
* reads data that is already sorted via an index
* stops as soon as it has enough rows
* avoids the unnecessary processing entirely

In my case, the fix was pretty straightforward:
* added an index on the ORDER BY column
* adjusted the query so the sort could use that index
* reduced the amount of data before sorting

After that, the plan changed and performance improved. A sketch of the fix is at the end of this post.

One small learning from this: LIMIT doesn't make a query fast by default; it depends on how the query is executed.

Curious if you've seen similar cases where LIMIT didn't help at all.

#PostgreSQL #BackendEngineering #DatabasePerformance #SystemDesign #PerformanceOptimization #QueryOptimization
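To make the fix concrete, here is a minimal sketch. The events table, its columns, and the index name are hypothetical, not the original schema:

-- Slow: Seq Scan + full Sort happen before the Limit node
EXPLAIN ANALYZE
SELECT *
FROM events
ORDER BY created_at DESC
LIMIT 10;

-- An index matching the ORDER BY lets the planner read rows
-- already in order and stop after 10 (Index Scan → Limit)
CREATE INDEX idx_events_created_at ON events (created_at DESC);

After creating the index (and refreshing statistics with ANALYZE events; if needed), the same EXPLAIN ANALYZE should show the Limit sitting directly on top of an Index Scan.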
More Relevant Posts
Recently ran through LeetCode's SQL 50 to rigorously brush up on my database logic. It's one thing to write a query that returns the right answer; it's another to write one that scales efficiently under the hood.

Here are a few core technical insights and optimizations I found most valuable to revisit:

⚡ EXECUTION & OPTIMIZATION
• Favor GROUP BY over window functions when you only need one row per group: window functions compute values per row without collapsing the dataset, which can mean massive duplication in memory on large tables, while GROUP BY compresses the data into unique combinations first.
• Eliminate multi-pass scans: relying on nested IN and MAX() subqueries forces the database engine to evaluate and scan the same table multiple times.
• The "Aggregate-Sort-Limit" pattern: the most computationally efficient way to find a top record is often to compute the metric, sort it, and slice the top off (GROUP BY ... ORDER BY ... LIMIT 1). The same ORDER BY clause can also handle secondary tie-breakers natively in a single pass.

🧠 LOGIC & EDGE CASES
• The "aggregate hack" for NULLs: an empty set is not the same as a NULL value. If an API requires a one-row NULL result instead of an empty table, wrapping the target in an aggregate function like MAX() or MIN() forces the database to return a single NULL row even if no matches are found.
• Rate = average: calculating a binary success rate is mathematically identical to taking the average of 1s and 0s. There's rarely a need to count the numerator and denominator separately.
• Know your ranking functions: DENSE_RANK() is crucial when finding top unique values (like salaries) because it assigns the same rank to ties without skipping subsequent numbers (1, 1, 2). Using RANK() (1, 1, 3) or ROW_NUMBER() will break your logic on ties.

A great exercise in writing cleaner, more highly optimized SQL. A couple of these patterns are sketched below. Next up: tackling the Advanced SQL 50 track. 📊

#SQL #DataScience #DataEngineering #LeetCode #DatabaseOptimization #PostgreSQL #SQL50
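Two of the patterns as minimal sketches, against a hypothetical employees(name, department, salary) table (the names are illustrative, not from the actual problems):

-- Aggregate-Sort-Limit, with the tie-breaker handled in the same pass
SELECT department, SUM(salary) AS total_pay
FROM employees
GROUP BY department
ORDER BY total_pay DESC, department ASC  -- department breaks ties
LIMIT 1;

-- The "aggregate hack": one NULL row instead of zero rows
SELECT MAX(salary) AS salary
FROM employees
WHERE department = 'NoSuchDept';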
My updates on DB (SQL) learning this week!

A foreign key is a column in one table that refers to another table's primary key. Deletions across that link are managed with referential actions: CASCADE (the delete propagates to the connected references), SET NULL (a soft detachment: the reference is nulled out), and RESTRICT (no deletion allowed while references exist).

Learned and understood INNER, LEFT, and FULL OUTER JOIN, and how each is useful depending on the requirements.

Then I came to know about EXPLAIN ANALYZE, a diagnostic tool that shows real execution statistics! I dove deeper into indexing: how we create an index, how a B-Tree brings query lookups down to O(log n), and covering indexes that store non-key values at the leaf level to avoid an extra lookup into the table.

Next, I learned about transactions: BEGIN, then the updates, then COMMIT or ROLLBACK. I also got to know about the "dirty read": seeing another transaction's values before it commits, which is not good! PostgreSQL never allows dirty reads. (A sketch of the referential actions and a transaction is below.)

Finally, I learned about ACID compliance in databases, which has four parts. ATOMICITY: a transaction is never half done or a fraction; it either fully commits or rolls back. CONSISTENCY: a transaction brings the database from one valid state to another. ISOLATION: concurrent transactions don't interfere with each other. DURABILITY: once a transaction is committed, it remains permanently recorded, even in the event of a system crash or power outage!

#PostgreSQL #SQL #Database
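A minimal sketch of those pieces, using hypothetical tables:

-- Foreign key with a referential action
CREATE TABLE authors (
  id   SERIAL PRIMARY KEY,
  name TEXT
);

CREATE TABLE books (
  id        SERIAL PRIMARY KEY,
  title     TEXT,
  author_id INT REFERENCES authors(id) ON DELETE CASCADE
  -- or: ON DELETE SET NULL / ON DELETE RESTRICT
);

-- A transaction: all or nothing
BEGIN;
UPDATE authors SET name = 'New Name' WHERE id = 1;
COMMIT;  -- or ROLLBACK; to undo everything since BEGIN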
What if I told you most database teams are missing the #1 performance bottleneck hiding in plain sight?

Here's the query analysis trick that instantly identifies performance issues: using EXPLAIN (ANALYZE, BUFFERS) to see actual I/O patterns.

Run EXPLAIN (ANALYZE, BUFFERS) on your slowest queries to see actual buffer hits, reads, and execution time. Focus on the ratio of actual rows to planned rows: if it's significantly off, your statistics need updating or you're missing indexes.

IMPLEMENTATION STEPS:
• Identify your slowest queries from pg_stat_statements
• Run EXPLAIN (ANALYZE, BUFFERS) on each query
• Look for high buffer reads (disk I/O) vs. buffer hits (memory)
• Check whether actual rows differ significantly from estimated rows
• Focus optimization efforts on queries with high I/O or poor row estimation
(A sketch of both commands is below.)

BENEFIT/RESULT: instant identification of performance bottlenecks, leading to 3-10x query improvements through targeted indexing or statistics updates.

MULTI-LEVEL VALUE:
Beginners: learn to read execution plans with actual performance data
Intermediate: diagnose and fix query performance issues systematically
Advanced: implement automated query performance monitoring
Decision makers: reduce database infrastructure costs by optimizing existing resources

Drop a comment or send a connection request.

#PostgreSQL #QueryOptimization #DatabasePerformance #EXPLAIN #dougortiz
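A sketch of both steps. The SELECT is a placeholder query, the pg_stat_statements extension must be enabled, and the mean_exec_time column assumes PostgreSQL 13+:

-- Step 1: find the slowest queries
SELECT query, calls, mean_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 5;

-- Step 2: inspect one of them with buffer statistics
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;

-- In the output, compare "Buffers: shared hit=... read=..."
-- (hit = memory, read = disk) and the estimated vs. actual "rows=".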
Day 25: Execution Plans — Reading the Database's Mind 🧠

"Your SQL query is just a suggestion. The database decides how to actually do the work."

When you hit Execute, the database doesn't just start running. It hands your code to a Query Optimizer, which looks at your indexes, table sizes, and join types to create a "map" of the fastest way to get your results. This map is called an Execution Plan. If your query is slow, the Execution Plan is where the "crime scene" evidence is hidden.

Think of an Execution Plan like a GPS navigation app 🗺️:
The Destination: your SELECT statement (what you want).
The Possible Routes: the database could use an Index (the highway) or a Full Table Scan (the side streets).
The Final Plan: the database chooses the route with the lowest estimated cost (the least CPU and memory usage).

Why you should care about "cost": in an execution plan, every step has a cost percentage. If you see one step taking up 90% of the effort, you've found your bottleneck. Usually it's a missing index or a Nested Loop that is spiraling out of control.

SQL:
-- For PostgreSQL or MySQL 8.0.18+
EXPLAIN ANALYZE
SELECT *
FROM orders o
JOIN customers c ON o.customer_id = c.customer_id
WHERE o.order_date > '2024-01-01';

#30DaysOfSQL #LearningInPublic #DataChallenge #DataAnalysis #CareerDevelopment #DataCommunity #innovation #technology #creativity #Future #futurism #DataAnalytics #DataScience #DataEngineering #BusinessIntelligence
Not every slow query means a bad query.

Faced a strange issue recently. The same stored procedure… sometimes ran in milliseconds, sometimes took a few seconds.

At first we suspected a data issue. Then infra. But no. The actual problem was a mix of things:
- Parameter sniffing
- Index fragmentation
- Outdated statistics

The execution plan was getting cached based on the first parameter. For that data it was fine. But when different data came in → the same plan became the worst choice.

On top of that:
fragmented indexes = more I/O
old stats = the optimizer making wrong guesses

We did a few simple things (sketched below):
- updated statistics
- rebuilt the indexes
- used recompile where needed

No big code change. But performance became stable.

Big learning for me: SQL performance is not only about query writing. It's about how SQL Server "thinks" about your data. Sometimes the issue is not in your code… it's in the plan behind it.

#SQLServer #Backend #Performance #DotNet
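Those three fixes as a T-SQL sketch; the table, column, and parameter names are hypothetical:

-- Refresh statistics so the optimizer stops guessing from stale data
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Rebuild fragmented indexes to cut down I/O
ALTER INDEX ALL ON dbo.Orders REBUILD;

-- Inside the proc: force a fresh plan for the parameter-sensitive statement
SELECT OrderId, Total
FROM dbo.Orders
WHERE CustomerId = @CustomerId
OPTION (RECOMPILE);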
🚀 Day 5 – SQL Learning Journey | Indexes & Optimization

Today I explored one of the most powerful concepts in SQL: indexes, which play a key role in improving query performance.

📚 What I learned today:

📌 Indexes & Types
– Improve data retrieval speed
– Clustered vs. non-clustered indexes

⚖️ Clustered vs. Non-Clustered
– Clustered → the physical order of the data
– Non-clustered → a separate logical structure pointing back to the table

🛠️ Index Commands
– CREATE INDEX
– DROP INDEX
– ALTER INDEX ... REBUILD

🎯 Index Strategy
– Use indexes on frequently filtered columns
– Use a covering index for better performance
– Smart index selection is important

🔑 Index Types
– Unique index
– Composite index
– Filtered / partial index

(A few of these are sketched below.)

💡 Key Takeaway: a well-designed index can turn a slow query into a fast one 🚀 But over-indexing can also hurt performance; balance is key!

Learning not just SQL, but how to write optimized and scalable queries 💪

Code pushed to GitHub 📂
🔗 GitHub Repository: https://lnkd.in/gm8Mw8CE

#SQL #Database #Performance #LearningJourney #AspNetDeveloper #TechGrowth #InterviewPreparation
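A few of these in T-SQL, using a hypothetical Orders table:

-- Simple index on a frequently filtered column
CREATE INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId);

-- Composite index: helps filters on Country, or on Country + City
CREATE INDEX IX_Orders_Country_City ON dbo.Orders (Country, City);

-- Filtered index: only covers the rows you query most often
CREATE INDEX IX_Orders_Open ON dbo.Orders (OrderDate)
WHERE Status = 'Open';

-- Maintenance
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;
DROP INDEX IX_Orders_CustomerId ON dbo.Orders;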
🚀 #30DaysOfSQL – Day 28

Continuing my 30 Days of SQL Challenge to strengthen my SQL and database management skills.

📌 Topic for Day 28: STORED PROCEDURES – Reusable SQL Logic

Today I learned about stored procedures, which allow us to save SQL queries as reusable blocks inside the database. This helps in reducing repetition and improving performance.

💡 Practice Example: create a stored procedure to retrieve employees with a salary greater than a given value.

🧠 SQL Example (MySQL syntax; in the mysql client, wrap it in DELIMITER // ... // so the semicolon inside the body doesn't end the statement early):

CREATE PROCEDURE GetHighSalaryEmployees(IN min_salary INT)
BEGIN
  SELECT name, salary
  FROM employees
  WHERE salary > min_salary;
END;

📊 Key Learnings:
• Stored procedures store reusable SQL logic
• Reduce code duplication
• Improve performance and security
• Can accept input parameters

Stored procedures are useful for automating repetitive database operations (a quick usage call is sketched below). Excited to continue learning with Day 29!

#SQL #30DaysOfSQL #DataScience #LearningInPublic #SQLPractice #DataAnalytics
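Calling the procedure above would then look like this; the threshold is just an example value:

CALL GetHighSalaryEmployees(50000);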
My query was taking 40 seconds to run. I added one index. It dropped to 0.3 seconds.

Here's what I learned about SQL indexing:

1️⃣ Index the columns you filter by
If you use a column in WHERE, JOIN, or ORDER BY — it's a candidate for an index.

2️⃣ Don't index everything
Too many indexes slow down your INSERT and UPDATE operations. Be selective. Quality over quantity.

3️⃣ Composite indexes follow order
An index on (country, city) helps queries filtering by country. It does NOT help queries filtering by city alone.

4️⃣ Use EXPLAIN to see what's happening
Before adding an index, run EXPLAIN on your query (see the sketch below). It shows exactly where the database is struggling.

Indexing is one of the fastest wins in SQL performance. No rewriting. No refactoring. Just smarter structure.
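Points 3 and 4 as a minimal sketch, assuming a hypothetical users table:

CREATE INDEX idx_users_country_city ON users (country, city);

-- Can use the index: country is the leading column
EXPLAIN SELECT * FROM users WHERE country = 'DE';

-- Likely a full scan: filtering by city alone skips the leading column
EXPLAIN SELECT * FROM users WHERE city = 'Berlin';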
🚀 SQL: The Skill That Quietly Decides Your System's Performance

One thing I've learned while working on backend systems: it's not always the code slowing things down, it's the queries. A simple API can become slow if the SQL behind it isn't optimized.

Here are a few things that made a real difference in my work 👇
• Writing queries is easy; writing efficient queries is the real skill
• Indexing properly can reduce response time from seconds to milliseconds
• Avoiding unnecessary joins and selecting only the required columns matters (see the small sketch below)
• Understanding execution plans helps identify bottlenecks quickly
• Database performance directly impacts user experience

In one of my projects, optimizing queries and adding proper indexing significantly reduced API latency during peak traffic.

💡 Good backend systems are not just about APIs; they are built on strong database design and efficient queries.

💬 What's one SQL optimization trick that worked for you?

#SQL #Database #BackendDevelopment #PerformanceOptimization #SystemDesign #SoftwareEngineering
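One of those points, selecting only the required columns, as a small sketch (the orders table and columns are hypothetical):

-- Before: drags every column through the plan and over the wire
SELECT * FROM orders WHERE customer_id = 42;

-- After: only the columns the API actually uses; a covering index on
-- (customer_id) INCLUDE (id, total, created_at) could then serve it directly
SELECT id, total, created_at
FROM orders
WHERE customer_id = 42;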
One of the most underrated skills in working with databases is writing clean and efficient queries. It's not just about getting the correct result… it's about how you get it.

I've seen cases where a query works perfectly but causes performance issues because it isn't optimized. Small improvements like:
- Avoiding unnecessary joins
- Using proper indexes
- Filtering data early (a small sketch is below)

can make a huge difference.

👉 A good query gives results.
👉 A great query gives results efficiently.

#SQL #Database #Performance #Backend #Engineering
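"Filtering data early" as a tiny sketch: the WHERE clause discards rows before grouping, instead of aggregating everything and filtering afterwards (the orders table and date are hypothetical):

-- Rows outside the date range never reach the aggregation
SELECT customer_id, SUM(total) AS total_spent
FROM orders
WHERE order_date >= '2024-01-01'
GROUP BY customer_id;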