Slow APIs: Look Beyond the Code

Most developers equate slow APIs with bad code. However, the issue often lies elsewhere.

Consider this scenario: you have a query that appears perfectly fine:

SELECT o.id, c.name
FROM orders o
JOIN customers c ON o.customer_id = c.id

Yet the API is painfully slow. Checking the execution plan, you find:

NESTED LOOP
  → TABLE ACCESS FULL ORDERS
  → INDEX SCAN CUSTOMERS

At first glance, this seems acceptable. But here's the reality: the database fully scans orders, and for each row it produces, it probes customers again. If orders contains 1 million rows, that's 1 million loop iterations. The real issue wasn't the JOIN; it was how the database chose to execute it.

After adding an index (this helps because the real query also filtered orders by created_at, which the simplified query above omits):

CREATE INDEX idx_orders_date ON orders(created_at);

the execution plan changed to:

INDEX RANGE SCAN ORDERS
  → INDEX SCAN CUSTOMERS

As a result, query time dropped significantly.

Key lessons learned:
• A Nested Loop join is efficient only when the outer table is small and the inner table is indexed.
• A Hash Join is preferable when both tables are large or there are no useful indexes.
• Common performance issues stem from full table scans, incorrect join order, missing indexes, and outdated statistics.

A common mistake is this Java code:

for (Order o : orders) {
    o.getCustomer(); // one query per order
}

This creates the same nested loop at the application level — the N+1 query problem.

Final takeaway: don't just write queries; understand how the database executes them. That's where true performance improvements occur.

If you've resolved a slow query using execution plans, sharing your experience would be valuable.

#BackendDevelopment #DatabaseOptimization #SQLPerformance #QueryOptimization #SystemDesign #SoftwareEngineering #Java #SpringBoot #APIPerformance #TechLearning #Developers #Coding #PerformanceTuning #Scalability #DistributedSystems #DataEngineering #Debugging #TechTips #LearnInPublic #EngineeringLife
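The nested-loop vs. hash-join trade-off described above can be sketched in plain Java. This is an illustrative in-memory simulation, not real database code — the Order/Customer records, the tables, and the comparison counter are all invented for the example:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class JoinDemo {
    record Order(int id, int customerId) {}
    record Customer(int id, String name) {}

    static long comparisons; // stands in for the row visits the engine performs

    // Nested loop join with no usable index: for each order, scan every customer.
    static List<String> nestedLoopJoin(List<Order> orders, List<Customer> customers) {
        List<String> out = new ArrayList<>();
        for (Order o : orders) {
            for (Customer c : customers) {
                comparisons++;
                if (c.id() == o.customerId()) out.add(o.id() + ":" + c.name());
            }
        }
        return out;
    }

    // Hash join: build a hash table on customers once, then probe once per order.
    static List<String> hashJoin(List<Order> orders, List<Customer> customers) {
        Map<Integer, Customer> byId = new HashMap<>();
        for (Customer c : customers) byId.put(c.id(), c);
        List<String> out = new ArrayList<>();
        for (Order o : orders) {
            comparisons++; // a single probe replaces a full inner scan
            Customer c = byId.get(o.customerId());
            if (c != null) out.add(o.id() + ":" + c.name());
        }
        return out;
    }

    public static void main(String[] args) {
        List<Customer> customers = List.of(new Customer(1, "Ada"), new Customer(2, "Lin"));
        List<Order> orders = List.of(new Order(10, 1), new Order(11, 2), new Order(12, 1));

        comparisons = 0;
        nestedLoopJoin(orders, customers);
        System.out.println("nested loop comparisons: " + comparisons); // 3 orders * 2 customers = 6

        comparisons = 0;
        hashJoin(orders, customers);
        System.out.println("hash join probes: " + comparisons); // one probe per order = 3
    }
}
```

With 1M orders the nested loop does work proportional to rows(orders) × rows(customers), while the hash join stays linear — which is exactly why the optimizer's join choice matters more than the SQL text.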
More Relevant Posts
💡 Using @Transactional in a GET API — Do You Really Need It?

Most developers think @Transactional is only for INSERT/UPDATE/DELETE operations… but what about GET APIs? 🤔 Let's break it down in the simplest way possible 👇

🔹 What does @Transactional do?
It tells Spring: 👉 "Wrap this method in a database transaction."
That means all DB operations inside it either succeed together or roll back together.

🔹 But a GET API only reads data… why use it?
Good question 👍 Even read operations can benefit in some cases:

✅ 1. To avoid lazy-loading errors
If you're using JPA/Hibernate and fetching related data (like user.getOrders()), you might face a LazyInitializationException — because the persistence session is already closed by the time the relation is accessed. 💡 @Transactional keeps the session open while the data is being fetched.

✅ 2. To ensure consistent data
If you read from multiple tables while data is changing concurrently, you might get inconsistent results. With @Transactional (at an appropriate isolation level), all reads happen inside one transaction and see a consistent view of the data.

✅ 3. Better performance in read-only mode
@Transactional(readOnly = true) tells Spring and Hibernate that nothing will be modified. Hibernate can skip dirty checking and snapshot bookkeeping, and the JDBC connection can be hinted as read-only. ⚡ Result: less overhead.

⚠️ When NOT to use it?
Don't blindly add it to every GET API ❌ Skip it for a simple single-table fetch with no lazy loading and no complex logic — transactions add their own overhead.

🔥 Simple rule to remember:
✔ Complex read → @Transactional(readOnly = true)
❌ Simple read → skip it

🧠 Final thought: @Transactional is not just for writing data — it's about managing how you interact with the database safely and efficiently.

#Java #SpringBoot #BackendDevelopment #CodingTips #Developers #TechSimplified
🚀 Day 23/40 – DSA Challenge
📌 LeetCode Problem – Remove Duplicates from Sorted List

📝 Problem Statement
Given the head of a sorted linked list, delete all duplicates such that each element appears only once. Return the modified linked list.

📌 Example
Input: 1 → 1 → 2 → 3 → 3
Output: 1 → 2 → 3

💡 Key Insight
Since the list is sorted, 👉 duplicates will always be adjacent. So we don't need extra space or hashing.

🔥 Optimal Approach – Single Traversal
🧠 Idea: traverse the list and compare the current node with the next node.
If they are equal → skip the next node. Else → move forward.

🚀 Algorithm
1️⃣ Start from head
2️⃣ While current != null && current.next != null
3️⃣ If current.val == current.next.val 👉 skip the duplicate: current.next = current.next.next
4️⃣ Else move to the next node

✅ Java Code (Optimal O(n))

class Solution {
    public ListNode deleteDuplicates(ListNode head) {
        ListNode current = head;
        while (current != null && current.next != null) {
            if (current.val == current.next.val) {
                current.next = current.next.next; // unlink the duplicate
            } else {
                current = current.next;
            }
        }
        return head;
    }
}

⏱ Complexity
Time: O(n) · Space: O(1)

📚 Key Learnings – Day 23
✔ Sorted data simplifies problems
✔ Linked-list manipulation requires careful pointer handling
✔ No extra space is needed when duplicates are adjacent
✔ Always check current.next != null to avoid null-pointer errors

Simple structure. Clean pointer logic. Efficient solution.
Day 23 completed. Consistency continues 💪🔥

#180DaysOfCode #DSA #Java #InterviewPreparation #ProblemSolving #CodingJourney #LinkedList #LeetCode
🚀 Derived Queries (DSL) vs @Query in Spring Data JPA

While working with Spring Data JPA, I learned that by default it provides methods keyed on the primary key, like findById(). But what if we want to fetch data using other fields, like name or age? 🤔

We have two approaches 👇

🔹 1. Derived query methods (method-name DSL)

List<User> findByName(String name);

✔️ Follows the method-naming convention
✔️ Query is generated automatically
✔️ Easy to write and read
✔️ Best for simple queries

🔹 2. @Query annotation

@Query("SELECT u FROM User u WHERE u.name = :name")
List<User> getUserByName(@Param("name") String name);

✔️ Query is written manually (JPQL, or native SQL with nativeQuery = true)
✔️ More flexibility
✔️ Best for complex queries (joins, multiple conditions)

(Note: @Param can be omitted if you compile with -parameters, but adding it explicitly is safer.)

💡 Key difference: derived methods → simple & automatic; @Query → flexible & customizable.

🎯 Conclusion: use derived query methods for quick, simple lookups, and switch to @Query when you need more control.

#Java #SpringBoot #SpringDataJPA #BackendDevelopment #Coding #Developers #Learning
🚀 Mastering Persistence in Spring Data JPA: persist() vs. merge() vs. save()

Ever wondered which method to use when saving data in Java? Choosing the wrong one can lead to unnecessary SQL queries or even dreaded EntityExistsException errors. Here is the breakdown of the "Big Three":

🔹 1. persist() – the "new only" approach
What it does: takes a brand-new (transient) entity and makes it managed. It schedules an INSERT.
Best for: creating new records when you are sure they don't exist yet.
Watch out: it will throw an exception if the entity is detached or has an ID that already exists in the DB.

🔹 2. merge() – the "reconnector"
What it does: takes a detached entity (one that was loaded in a different session) and copies its state onto a managed instance.
Best for: updating existing records that were passed through different layers of your app (e.g., from a REST controller).
Watch out: it returns a managed copy — you must use the returned object for further changes!

🔹 3. save() – the Spring Data way
What it does: a smart wrapper provided by Spring Data JPA. It checks whether the entity is "new": if yes, it calls persist(); if not, merge().
Best for: most standard repository patterns. It's the safe bet for 90% of use cases.
Watch out: when it routes to merge(), Hibernate may issue an extra SELECT to load the current state before performing the update.

💡 Pro tip: if you are building high-performance systems with massive inserts, using persist() directly via the EntityManager can be more efficient than the generic save() method.

#Java #SpringBoot #JPA #Hibernate #SoftwareEngineering #BackendDevelopment
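The routing logic described above — persist if the entity is new, merge otherwise — can be sketched in plain Java. This mirrors the decision Spring Data's save() makes, but the entity, the in-memory store, and the method signatures below are stand-ins invented for the example, not the real EntityManager API:

```java
import java.util.HashMap;
import java.util.Map;

public class SaveDemo {
    static class User {
        Long id;      // null id = new (transient) entity
        String name;
        User(Long id, String name) { this.id = id; this.name = name; }
    }

    // Stub store standing in for the database/persistence context.
    static final Map<Long, User> store = new HashMap<>();
    static long nextId = 1;

    // persist(): INSERT — only valid for brand-new entities.
    static User persist(User u) {
        if (u.id != null) throw new IllegalStateException("entity already has an id");
        u.id = nextId++;
        store.put(u.id, u);
        return u;
    }

    // merge(): copies detached state onto a managed copy and returns the copy.
    static User merge(User u) {
        User managed = new User(u.id, u.name);
        store.put(managed.id, managed);
        return managed; // caller must keep using the returned object!
    }

    // save(): the Spring-Data-style dispatcher — persist if new, else merge.
    static User save(User u) {
        return (u.id == null) ? persist(u) : merge(u);
    }

    public static void main(String[] args) {
        User created = save(new User(null, "Ada")); // routed to persist -> "INSERT"
        created.name = "Ada L.";
        User updated = save(created);               // routed to merge -> "UPDATE"
        System.out.println(updated.id + " " + updated.name);
    }
}
```

Note how merge() hands back a different object than it received — that is the "you must use the returned object" pitfall from the post, made concrete.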
REST API Basics Every Developer Should Know 👇

GET → Fetch data
POST → Create data
PUT → Replace a resource (full update)
PATCH → Update part of a resource
DELETE → Remove data

💡 Bonus — use proper HTTP status codes:
200 OK ✅ · 201 Created ✅ · 400 Bad Request ❌ · 500 Internal Server Error ❌

Clean API = professional developer 🚀
👉 Follow for backend mastery

#restapi #backend #java #springboot #developers #coding #webdevelopment #softwareengineer #tech #learning #trending
The N+1 Query Problem — A Silent Performance Killer

In one of my recent backend discussions, we revisited a classic issue that often goes unnoticed during development but can severely impact performance in production — the N+1 query problem.

What is the N+1 problem?
It occurs when your application executes 1 query to fetch a list of N records, then N additional queries to fetch related data for each record. Total = 1 + N queries.

Example scenario: you fetch a list of 100 users, and for each user, you fetch their orders separately. That results in 101 database queries instead of just 1 or 2 optimized queries.

Why is it dangerous?
1. Increased database load
2. Slower response times
3. Poor scalability under high traffic
4. Hard to detect on small datasets, but disastrous at scale

How to overcome it?
1. JOIN FETCH (eager loading) — fetch related entities in a single query.
2. Batch fetching — load related data in chunks instead of one-by-one queries.
3. Entity graphs (JPA) — declare which relationships should be fetched together.
4. DTO projections — fetch only the required fields instead of entire objects.
5. Caching — leverage the second-level cache to reduce repeated DB hits.
6. Monitor SQL logs — always keep an eye on generated queries during development.

Pro tip: the N+1 problem is not a bug — it's a design inefficiency. It usually comes from the default lazy-loading behavior of ORMs like Hibernate.

Interview insight: a good engineer doesn't just make code work — they make it scale efficiently.

#Java #SpringBoot #Hibernate #BackendDevelopment #PerformanceOptimization #Microservices #InterviewPrep
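The 100-users-means-101-queries arithmetic above can be made concrete with a small simulation. The stub below counts the queries issued by a lazy, per-row access pattern versus a single join-style fetch; the data source and all names are invented for the example:

```java
import java.util.ArrayList;
import java.util.List;

public class NPlusOneDemo {
    static int queryCount;

    // 1 query: fetch the list of user ids.
    static List<Integer> fetchUserIds(int n) {
        queryCount++;
        List<Integer> ids = new ArrayList<>();
        for (int i = 1; i <= n; i++) ids.add(i);
        return ids;
    }

    // 1 query per user: what lazy loading does when you touch user.getOrders().
    static List<String> fetchOrdersForUser(int userId) {
        queryCount++;
        return List.of("order-" + userId);
    }

    // Anti-pattern: 1 + N queries.
    static int lazyPattern(int users) {
        queryCount = 0;
        for (int id : fetchUserIds(users)) {
            fetchOrdersForUser(id); // fires a query on every iteration
        }
        return queryCount;
    }

    // Join fetch: users and their orders come back from one query.
    static int joinFetchPattern(int users) {
        queryCount = 0;
        queryCount++; // a single JOIN returns both tables together
        return queryCount;
    }

    public static void main(String[] args) {
        System.out.println("lazy:       " + lazyPattern(100) + " queries");  // 101
        System.out.println("join fetch: " + joinFetchPattern(100) + " query"); // 1
    }
}
```

The latency gap is invisible with 5 test users and catastrophic with 10,000 — which is exactly why the post warns that the problem hides in small datasets.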
🚨 The N+1 Query Problem — The Silent Killer of Backend Performance

Most developers don't notice it… until production latency explodes.

👉 What is it?
You run 1 query to fetch data… then N additional queries inside a loop to fetch related data. Total queries = 1 + N.

💥 Why it's dangerous
• Latency grows linearly with data size
• DB connections get exhausted
• Throughput drops under load
• It becomes a major bottleneck at scale

🧠 Real example
Fetching 100 users triggers 101 queries (1 for the users + 100 for their orders). Sounds small… until traffic hits.

⚠️ Root cause
Lazy loading in ORMs (like Hibernate), and accessing relations inside loops without thinking about query execution.

✅ How to fix it
• Use JOIN FETCH (fetch in one query)
• Use batch fetching (IN queries)
• Prefer DTO projections for heavy reads
• Monitor queries using logs / tracing tools

🚀 Staff-level insight
N+1 is not just a DB issue. It appears in distributed systems too: an API gateway makes 1 service call, which then fans out into N downstream service calls. Same problem, bigger impact.

💡 Takeaway
N+1 is a design problem, not just a query problem. If you don't control your data-access pattern, your system will control your latency.

#SystemDesign #Backend #Performance #Java #SpringBoot #Databases #Scalability #Tech
Stop Writing Infinite Repository Methods! 🛑 Use the Criteria Pattern

How many times have you seen a Repository or DAO filled with methods like these?

findByName(...)
findByNameAndStatus(...)
findByNameAndStatusAndDateRange(...)

This is a maintenance nightmare. Every time the business asks for a new filter, you have to change your interface and your implementation.

The solution: the Criteria pattern (Specification) 🔍
It allows you to build database queries dynamically. Instead of defining every possible combination of filters beforehand, you create small, atomic "criteria" objects, each representing a single rule (e.g., PriceLessThan, IsActive).

Why it changes the game:
• Dynamic query building: combine filters at runtime based on what the user actually selected in the UI.
• Clean repositories: one method — find(Criteria criteria) — is enough.
• Decoupling: your business logic defines what to search for, while the criteria implementation handles the SQL/NoSQL specifics.
• DRY: define the "active customer" rule once and reuse it across your entire application.

How it looks in practice (Spring Data JPA, where hasStatus and amountGreaterThan are custom Specification helpers and the repository extends JpaSpecificationExecutor):

public List<Order> getOrders(String status, Double minAmount) {
    return orderRepository.findAll(
        Specification.where(hasStatus(status))
                     .and(amountGreaterThan(minAmount))
    );
}

The result? 📈 An elastic codebase: you stop coding fixed queries and start building a flexible filtering engine that grows with your product.

Do you use the Criteria pattern in your data access layer, or are you still sticking to traditional query methods? Let's talk about it! 👇

#DatabaseDesign #SQL #CleanCode #BackendDevelopment #SoftwareEngineering #CriteriaPattern #Programming
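The composition style the post describes can be illustrated in plain Java with java.util.function.Predicate, which chains exactly the way JPA Specifications do (where(...).and(...)). This is an in-memory analogue, not the Criteria API itself; the Order record and the filter helpers are invented for the sketch:

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class CriteriaSketch {
    record Order(String status, double amount) {}

    // Each "criterion" is one atomic, reusable rule; null means "filter not supplied".
    static Predicate<Order> hasStatus(String status) {
        return status == null ? o -> true : o -> o.status().equals(status);
    }

    static Predicate<Order> amountGreaterThan(Double min) {
        return min == null ? o -> true : o -> o.amount() > min;
    }

    // One generic find() replaces findByStatus, findByStatusAndAmount, ...
    static List<Order> find(List<Order> orders, Predicate<Order> criteria) {
        return orders.stream().filter(criteria).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Order> orders = List.of(
                new Order("ACTIVE", 50.0),
                new Order("ACTIVE", 500.0),
                new Order("CANCELLED", 900.0));

        // Combine only the filters the caller actually supplied.
        List<Order> result = find(orders, hasStatus("ACTIVE").and(amountGreaterThan(100.0)));
        System.out.println(result); // only the ACTIVE order above 100 survives
    }
}
```

Adding a new filter means adding one new helper method — no change to find() and no combinatorial explosion of repository methods, which is the Open/Closed benefit the post is after.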
I was building filtering for financial records in my backend: date range, category, amount range, user scope. All optional. All combinable.

I started with hardcoded query logic — if-else conditions for the different filter cases. It got messy fast. Every new filter meant rewriting existing logic, and at one point the queries looked like they were never meant to be read again.

So I scrapped it and implemented the Specification pattern using Spring Data JPA. Each filter became an isolated, composable predicate; at runtime, only the active ones combine into a single query. No hardcoding. No duplication.

A small change in approach, a big impact on scalability and future scope. Now, adding a new filter is just one addition — existing logic doesn't change. This is the Open/Closed Principle from SOLID in practice: open for extension, closed for modification. Each Specification also owns exactly one filter concern, so Single Responsibility is naturally enforced.

The filtering layer went from something I avoided touching to something I can extend confidently, without regression risk.

Interesting how backend complexity shifts as systems grow: performance → security → maintainability. This was firmly the third.

#Backend #Java #Maintainability #SOLID #LearningInPublic #SWE