Our CPU hit 99% and nothing looked wrong

We had rising CPU, slow APIs, and zero complex queries. Just plain UPDATEs.

What we missed
In PostgreSQL, UPDATE does not overwrite a row. It creates a new tuple and keeps the old one. Every update means more data.

What happened in prod
• Same rows updated again and again
• Dead tuples kept increasing
• Tables silently bloated
• Queries got slower over time

The real issue
Not bad queries. Not bad indexes. Just misunderstood database behavior.

What fixed it
• Reduced unnecessary updates
• Shortened transactions
• Let vacuum catch up

Takeaway
If your system updates the same rows frequently, you are not just updating data. You are creating more of it. And that adds up fast.

#PostgreSQL #BackendEngineering #SystemDesign #DatabaseInternals #PerformanceOptimization #Scalability #SoftwareEngineering #MVCC #TechLearning #Java #SpringBoot
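The update-creates-a-new-tuple behavior can be sketched with a toy version store. This is an illustrative model only, not Postgres internals: every update appends a new version, old versions linger as dead tuples, and a vacuum pass reclaims them.

```java
import java.util.*;

// Toy model of MVCC update behavior: an UPDATE appends a new tuple
// version instead of overwriting in place, and dead versions pile up
// until vacuum() reclaims them. Names and structure are illustrative,
// not how PostgreSQL is actually implemented.
public class MvccSketch {
    // rowId -> all stored versions (the last one is the live tuple)
    private final Map<Integer, List<String>> versions = new HashMap<>();

    public void insert(int rowId, String value) {
        versions.computeIfAbsent(rowId, k -> new ArrayList<>()).add(value);
    }

    public void update(int rowId, String value) {
        // No overwrite: the old version stays behind as a dead tuple.
        versions.get(rowId).add(value);
    }

    public int storedTuples(int rowId) {
        return versions.get(rowId).size(); // live + dead
    }

    public void vacuum() {
        // Keep only the latest (live) version of each row.
        versions.replaceAll((id, v) ->
            new ArrayList<>(v.subList(v.size() - 1, v.size())));
    }

    public static void main(String[] args) {
        MvccSketch table = new MvccSketch();
        table.insert(1, "pending");
        table.update(1, "processing");
        table.update(1, "shipped");
        table.update(1, "delivered");
        System.out.println(table.storedTuples(1)); // 4 tuples for 1 logical row
        table.vacuum();
        System.out.println(table.storedTuples(1)); // 1
    }
}
```

One logical row, three updates, four stored tuples: that is the "every update = more data" effect in miniature.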
Ankit Gupta’s Post
-
Your SQL is fast. Your system is still slow.

Seen this more than once. The query is optimized. Indexes are in place. Execution time looks great. And yet everything feels slow.

Because the bottleneck is not always the database. It's often:
– too much data being loaded
– too many hidden queries
– too much work happening after the DB call

Performance issues rarely live in one place. They live in the data flow.

Have you seen this kind of situation?

#backend #java #kotlin #performanceengineering #postgresql #jvm #scalability #optimization
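One cheap way to find out where the time actually goes is to time each stage of the request, not just the query. A minimal sketch (the "db call" and "post-processing" below are simulated with sleeps; in a real service they would be the actual stages):

```java
// Sketch: time each stage of a request separately, so a fast query
// can't hide a slow stage that runs after it. The sleeps stand in
// for real work and are purely illustrative.
public class StageTimer {
    public static long timeMillis(Runnable stage) {
        long start = System.nanoTime();
        stage.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long db = timeMillis(() -> sleep(5));     // the "optimized query"
        long post = timeMillis(() -> sleep(50));  // mapping/serialization after it
        System.out.printf("db=%dms post=%dms%n", db, post);
    }

    private static void sleep(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

If `post` dwarfs `db`, no amount of index tuning will help; the fix lives in the data flow.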
-
Slow queries almost broke our payment system. Here's what fixed it.

We had APIs taking 4–5 seconds to respond under high load. The culprit? Unoptimized PostgreSQL queries on a table with millions of rows.

What we did:

Step 1: Added indexes on high-frequency query columns
→ Response time dropped immediately

Step 2: Replaced SELECT * with specific columns
→ Less data transfer, faster response

Step 3: Used pagination instead of fetching full results
→ Memory usage dropped significantly

Step 4: Analyzed slow queries using EXPLAIN ANALYZE
→ Found full table scans we didn't know existed

Step 5: Moved repeated DB calls to cache
→ DB load reduced by 40%

Result: 4–5 seconds → under 500ms response time. Same hardware. Same database. Just better queries.

Most performance problems are not hardware problems. They are query problems.

What's the worst slow query you've debugged? Drop it below 👇

#PostgreSQL #BackendDevelopment #Java #SystemDesign #DatabaseOptimization #SpringBoot #BackendEngineer #immediateJoiner
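The pagination idea from step 3 can be sketched in plain Java. The list below stands in for a table; in real code the limit/offset (or a keyset predicate) goes into the SQL itself so the database never materializes the full result set. All names here are illustrative:

```java
import java.util.*;
import java.util.stream.*;

// Sketch of offset/limit pagination: fetch bounded pages instead of
// the full result set. Purely in-memory toy; the same arithmetic
// would normally be pushed down into LIMIT/OFFSET in the query.
public class Pager {
    public static <T> List<T> page(List<T> rows, int pageNumber, int pageSize) {
        int from = pageNumber * pageSize;
        if (from >= rows.size()) return List.of(); // past the end: empty page
        int to = Math.min(from + pageSize, rows.size());
        return rows.subList(from, to);
    }

    public static void main(String[] args) {
        List<Integer> rows = IntStream.range(0, 10).boxed().collect(Collectors.toList());
        System.out.println(page(rows, 0, 4)); // [0, 1, 2, 3]
        System.out.println(page(rows, 2, 4)); // [8, 9]
    }
}
```

Worth noting: for very deep pages, keyset pagination (WHERE id > last_seen ORDER BY id LIMIT n) scales better than a growing OFFSET.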
-
Thundering Herd Problem (when everything breaks at once)

We added a caching layer to reduce database load for frequently accessed data.

Problem I faced:
Everything worked well… until the cache expired. Suddenly:
• Huge spike in database queries
• CPU usage shot up
• API latency increased
• System became unstable
All at the same moment.

How I fixed it:
This was the Thundering Herd Problem. When the cache expired, multiple requests tried to fetch fresh data simultaneously.

Fixes applied:
• Added cache locking (single-flight) so only one request refreshes data
• Introduced randomized cache expiry (TTL jitter) to avoid simultaneous expiration
• Used a stale-while-revalidate approach for smoother refresh

Now:
• Only one request hits the DB
• Others wait or get the cached response
• The system stays stable

What I learned:
Caching reduces load… but poorly managed caching can create bigger spikes than no cache at all.

Question: Have you ever seen your system fail not because of traffic, but because many requests did the same thing at the same time?

#Java #SpringBoot #Programming #SoftwareDevelopment #Cloud #AI #Coding #Learning #Tech #Technology #WebDevelopment #Microservices #API #Database #SpringFramework #Hibernate #MySQL #BackendDevelopment #CareerGrowth #ProfessionalDevelopment #RDBMS #PostgreSQL #backend
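The first two fixes can be sketched together: single-flight loading (one caller computes a missing value while concurrent callers wait for the same result) plus randomized TTL jitter so entries don't all expire at once. This is an in-memory toy with illustrative names, not a production cache:

```java
import java.util.concurrent.*;
import java.util.function.*;

// Sketch of single-flight caching with TTL jitter. Only the thread
// that installs a fresh entry runs the loader; everyone else joins
// the same in-flight future. Expiry gets a random jitter on top of
// the base TTL so entries don't expire simultaneously.
public class SingleFlightCache<K, V> {
    private static final class Entry<V> {
        final CompletableFuture<V> future;
        final long expiresAt; // nanoTime deadline
        Entry(CompletableFuture<V> future, long expiresAt) {
            this.future = future;
            this.expiresAt = expiresAt;
        }
    }

    private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long baseTtlNanos;
    private final double jitterFraction;

    public SingleFlightCache(long ttlMillis, double jitterFraction) {
        this.baseTtlNanos = TimeUnit.MILLISECONDS.toNanos(ttlMillis);
        this.jitterFraction = jitterFraction;
    }

    public V get(K key, Function<K, V> loader) {
        boolean[] owner = {false};
        Entry<V> e = map.compute(key, (k, old) -> {
            if (old != null && System.nanoTime() < old.expiresAt) return old; // fresh
            owner[0] = true; // this caller becomes the single flight
            long jitter = (long) (baseTtlNanos * jitterFraction
                    * ThreadLocalRandom.current().nextDouble());
            return new Entry<>(new CompletableFuture<>(),
                    System.nanoTime() + baseTtlNanos + jitter);
        });
        if (owner[0]) {
            try {
                e.future.complete(loader.apply(key)); // the loader runs outside compute()
            } catch (RuntimeException ex) {
                e.future.completeExceptionally(ex);
                map.remove(key, e); // let the next caller retry
                throw ex;
            }
        }
        return e.future.join(); // everyone else waits here
    }
}
```

With this shape, a cache miss under concurrent load produces exactly one loader call instead of one per request, which is the whole point of the single-flight fix.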
-
Debugging a real production slowdown

Recently, I ran into a performance issue where a production homepage was taking 4–5 seconds just to respond.

At first glance, everything looked fine:
→ CPU usage was low
→ Memory was stable
→ No obvious bottlenecks
But the experience clearly said otherwise.

What I discovered:
→ The application was running inside a Dockerized environment
→ During each page load, the database container CPU spiked to ~95%
→ The slowdown wasn't constant; it was triggered only by specific queries
This pointed to a deeper issue: inefficient query execution under certain conditions.

What I did:
→ Investigated live query behavior and request timing
→ Identified repeated heavy lookups during page load
→ Optimized the database by introducing proper indexing for frequent access patterns

Result:
→ Database CPU usage dropped significantly
→ Response time improved from ~4–5s to ~0.5s
→ Overall system became much more stable under load

Key takeaway:
Not all performance issues require scaling infrastructure. Sometimes, the real problem is hidden in how data is accessed.

#Docker #Laravel #MySQL #PerformanceOptimization #BackendEngineering #Debugging #SoftwareDevelopment
-
Spring Data Interview Question: Orders API Suddenly Slows Down

Scenario
You are a backend engineer on a Spring Boot microservice managing customer orders. The service uses PostgreSQL.

Problem
The API has always been fast (<20ms) for a table with 8,000 orders. Suddenly, it takes 1.5–2 seconds. Metrics: CPU, memory, and DB connections are normal.

PostgreSQL statistics show:

    SELECT relname, n_live_tup, n_dead_tup
    FROM pg_stat_user_tables
    WHERE relname = 'orders';

    relname | n_live_tup | n_dead_tup
    orders  |      8,000 |  4,000,000

Why does PostgreSQL accumulate millions of dead rows even though the table has only 8,000 active orders?

Detailed explanation with solution: https://lnkd.in/ez7sJxnt

Subscribe and join 6.8k Spring and Java devs: https://lnkd.in/gwiRqWBV
-
Your API works fast locally… but becomes slow in production. Why does this happen?

👉 I've seen this multiple times in real systems.

❌ Common reasons:
1. N+1 queries → one request triggers multiple DB calls
2. Blocking operations → threads waiting unnecessarily
3. No caching → repeated DB hits for the same data
4. Poor database design → unoptimized queries & indexes

✅ What actually helps:
✔️ Use caching (Redis)
✔️ Optimize queries & indexing
✔️ Use async processing where needed
✔️ Monitor performance (logs/metrics)

🧠 Reality: Performance issues don't appear in development… they show up under real traffic.

💬 Curious: What's the biggest performance issue you've faced in production?

#Java #Backend #Performance #SystemDesign #Microservices #LearningInPublic
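The N+1 pattern in reason 1 is easy to see with a counter. The maps below stand in for tables, and every lookup method bumps a counter so the two access patterns can be compared. Everything here is an illustrative toy, not a real data layer:

```java
import java.util.*;
import java.util.stream.*;

// Toy illustration of N+1 queries vs a batched fetch. Each "find"
// method increments queryCount to simulate one round trip to the DB.
public class NPlusOneDemo {
    static int queryCount = 0;
    static final Map<Integer, Integer> orderToCustomer = Map.of(1, 10, 2, 11, 3, 10);
    static final Map<Integer, String> customers = Map.of(10, "Asha", 11, "Ben");

    static List<Integer> findAllOrderIds() { queryCount++; return List.of(1, 2, 3); }

    static int findCustomerIdByOrder(int orderId) {
        queryCount++;
        return orderToCustomer.get(orderId);
    }

    static Map<Integer, String> findCustomersByIds(Set<Integer> ids) {
        queryCount++; // one IN (...) query for the whole batch
        Map<Integer, String> out = new HashMap<>();
        ids.forEach(id -> out.put(id, customers.get(id)));
        return out;
    }

    // N+1: one query for the orders, then one per order for its customer.
    static int naive() {
        queryCount = 0;
        for (int orderId : findAllOrderIds()) findCustomerIdByOrder(orderId);
        return queryCount;
    }

    // Batched: one query for the orders, one for all their customers.
    static int batched() {
        queryCount = 0;
        List<Integer> orders = findAllOrderIds();
        Set<Integer> ids = orders.stream().map(orderToCustomer::get).collect(Collectors.toSet());
        findCustomersByIds(ids);
        return queryCount;
    }

    public static void main(String[] args) {
        System.out.println("naive queries:   " + naive());   // 1 + N = 4
        System.out.println("batched queries: " + batched()); // 2
    }
}
```

With 3 orders the difference is 4 vs 2 queries; with 10,000 orders it is 10,001 vs 2, which is why N+1 only hurts under real production data volumes.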
-
I've run into this antipattern several times over the years. It's not a problem to have a big database or a busy database, but things start to go wrong when you combine the two, especially when doing it with multiple databases per cluster. I figured it was finally time to talk about it.
Vertical scaling Postgres works... until it doesn't. And the wall is architectural, not hardware.

One instance hosting multiple databases hums along while workloads play nicely. The moment their I/O profiles, vacuum requirements, or activity patterns become fundamentally different, shared resources can become a shared throttle. No amount of additional RAM, CPU, or storage fixes that.

The early signals are easy to miss:
→ Autovacuum falling behind on some databases while others are fine
→ Replica lag climbing during a batch job in an unrelated DB
→ Checkpoint duration creeping up
→ Multixact warnings in logs no one has alerts for

By the time XID wraparound threatens the whole instance, you've usually ignored a dozen smaller signs.

If you're hosting a growing number of databases on one instance (or a shrinking number of exceptionally large ones): read the source, do the math on your multixact headroom, and check whether autovacuum is keeping pace across all of them.

And remember, planning a split or a migration is a lot easier than executing one during an incident.

Learn more in Shaun Thomas' latest PG Phriday: 🐘 https://hubs.la/Q04dn5X20

#postgres #postgresql #sql #data #database #opensource #programming #dba #devops #postgresqldba
-
💥 H2 vs MySQL: choosing the right database is not just a technical decision. It directly impacts performance, scalability, and development speed.

While working with Spring Boot, I explored the difference between H2 and MySQL, and this simple comparison helped me understand when to use each.

For quick testing and learning, lightweight solutions like H2 make development faster. But for real-world applications, a reliable and persistent database like MySQL becomes essential.

The key is not which one is better. 👉 It's about using the right tool at the right time.

👉 H2 (in-memory database)
👉 MySQL (relational database)

#SpringBoot #Java #DatabaseDesign #BackendDevelopment #SoftwareEngineering #LearningInPublic
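In a Spring Boot project, the switch between the two usually comes down to a few lines of application.properties. A minimal sketch, with a placeholder URL, database name, and credentials:

```properties
# H2: in-memory, zero setup -- handy for tests and quick prototypes.
# All data disappears when the JVM stops.
spring.datasource.url=jdbc:h2:mem:demo
spring.datasource.driver-class-name=org.h2.Driver

# MySQL: persistent, production-grade. Uncomment and fill in real
# values to switch (the URL and credentials below are placeholders).
# spring.datasource.url=jdbc:mysql://localhost:3306/demo
# spring.datasource.username=app
# spring.datasource.password=secret
```

Keeping the two configurations in separate Spring profiles (e.g. test vs prod) is a common way to get "the right tool at the right time" without code changes.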
-
"How do you handle slow database queries in Spring Boot?"

This comes up in almost every backend interview. Most developers jump straight to indexing, but that is only part of the answer. The real question is: why is the query slow in the first place?

Common causes:
• N+1 queries hitting the database repeatedly
• Fetching more data than needed
• Missing pagination on large datasets
• Wrong fetch type (EAGER instead of LAZY)

Before adding indexes, check these:
• Use @Query with JOIN FETCH to avoid N+1
• Select only the fields you need, not the entire entity
• Add pagination with Pageable for large results
• Set fetch = FetchType.LAZY and load relations only when needed

Indexes help, but fixing the query design helps more.

What database optimization has saved you the most time?

#Java #SpringBoot #Database #BackendDevelopment #Optimization
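The EAGER-vs-LAZY point can be modeled in plain Java without JPA: an eager relation is loaded when the parent is built, while a lazy one is loaded only on first access, and never if it goes unused. A toy sketch with illustrative names (a real JPA lazy proxy also caches the loaded value, which this deliberately simple version does not):

```java
import java.util.function.Supplier;

// Toy model of lazy fetching: the "relation" is a Supplier that only
// runs (i.e., only issues its extra "query") when someone asks for it.
public class LazyDemo {
    static int relationLoads = 0;

    static class Order {
        final Supplier<String> customer; // lazy relation
        Order(Supplier<String> customer) { this.customer = customer; }
    }

    static String loadCustomer() {
        relationLoads++; // stands in for an extra SQL round trip
        return "Asha";
    }

    public static void main(String[] args) {
        relationLoads = 0;
        Order order = new Order(LazyDemo::loadCustomer); // nothing loaded yet
        System.out.println(relationLoads);               // 0
        System.out.println(order.customer.get());        // triggers the load
        System.out.println(relationLoads);               // 1
    }
}
```

An eager version would call loadCustomer() inside the Order constructor, paying the extra query for every order whether or not the customer is ever read; that is exactly the cost FetchType.LAZY avoids.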