Combining Multiple Datasets with SQL Joins in APIs

Ever tried building a “global filter API” by joining multiple datasets into a single response? Sounds simple… until it isn’t.

Recently, I worked on combining data from multiple sources into one API using native SQL joins. On paper, it looked efficient — one query, one response. Reality was different.

⚠️ Challenges I faced:
LEFT JOIN created duplicate and bloated rows
SELECT * caused column order mismatches during DTO mapping
Handling array fields from the database in Java was tricky
Inconsistent data types across sources (BigDecimal vs Double, Timestamp vs LocalDateTime)
Trying to map everything into a single DTO led to tight coupling
The biggest pain: splitting combined query results back into meaningful structures

💡 Key learnings:
Avoid SELECT * in complex joins — always map columns explicitly (a mapping sketch follows after this post)
Native queries + DTO mapping = order matters more than you think
One “global” response is not always a good design
Sometimes, separate APIs or structured responses are cleaner and more scalable
Debugging mapping issues can take more time than writing the query itself

In the end, what seemed like a query problem turned out to be a design problem.

How do you handle multi-source joins in your APIs? 🤔

#Java #SpringBoot #BackendDevelopment #SQL #DatabaseDesign #APIDesign #Microservices #SoftwareEngineering #CodingChallenges #Developers #TechLearning #CleanCode
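To make the “map explicitly” learning concrete, here is a minimal sketch of one way to do it with Spring Data JPA: an explicit column list with aliases plus an interface-based projection, so mapping happens by name instead of position. All entity, table, and column names here are illustrative assumptions (not the original schema), and depending on the database and driver the aliases may need quoting to preserve their case.

import java.math.BigDecimal;
import java.time.LocalDateTime;
import java.util.List;

import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.Table;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

// Placeholder aggregate root so the repository compiles; not from the original post.
@Entity
@Table(name = "customers")
class Customer {
    @Id
    Long id;
    String name;
}

// Interface projection: Spring Data maps aliased columns to getters by name,
// so the order of columns in the SELECT list no longer matters for DTO mapping.
interface CustomerOrderView {
    Long getCustomerId();
    String getCustomerName();
    BigDecimal getOrderTotal();     // pick one numeric type and keep it end to end
    LocalDateTime getOrderedAt();   // let the JDBC driver handle the TIMESTAMP conversion
}

interface CustomerOrderRepository extends JpaRepository<Customer, Long> {

    // Explicit column list with aliases instead of SELECT *: mapping is by alias, not position.
    @Query(value = """
            SELECT c.id         AS customerId,
                   c.name       AS customerName,
                   o.total      AS orderTotal,
                   o.ordered_at AS orderedAt
            FROM customers c
            LEFT JOIN orders o
                   ON o.customer_id = c.id
                  AND o.ordered_at >= :since
            """, nativeQuery = true)
    List<CustomerOrderView> findCustomerOrdersSince(@Param("since") LocalDateTime since);
}

Fixing the types once in the projection (BigDecimal, LocalDateTime) also keeps the BigDecimal-vs-Double and Timestamp-vs-LocalDateTime conversions in a single place.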
More Relevant Posts
🚀 DSL vs @Query in Spring Data JPA

While working with Spring Data JPA, I learned that by default it provides methods to work with the primary key like findById(). But what if we want to fetch data using other fields like name, age, etc.? 🤔

We have two approaches 👇

🔹 1. Domain-Specific Language (DSL)
List<User> findByName(String name);
✔️ Method naming convention
✔️ Query is automatically generated
✔️ Easy to write and read
✔️ Best for simple queries

🔹 2. @Query Annotation
@Query("SELECT u FROM User u WHERE u.name = :name")
List<User> getUserByName(String name);
✔️ Query is written manually (JPQL/SQL)
✔️ More flexibility
✔️ Best for complex queries (joins, multiple conditions)

💡 Key Difference:
DSL → Simple & automatic
@Query → Flexible & customizable

🎯 Conclusion: Use DSL for quick and simple queries, and switch to @Query when you need more control.

#Java #SpringBoot #SpringDataJPA #BackendDevelopment #Coding #Developers #Learning
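For reference, a minimal sketch of a repository that combines both approaches in one place. The User entity and its name and age fields are assumptions for illustration, not code from the post.

import java.util.List;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

// Assumes a User entity with id, name, and age fields; all names are illustrative only.
interface UserRepository extends JpaRepository<User, Long> {

    // 1. Derived query method: Spring Data parses the method name and generates the query.
    List<User> findByName(String name);

    // Conditions can be chained in the method name for slightly richer lookups.
    List<User> findByNameAndAgeGreaterThan(String name, int age);

    // 2. @Query: hand-written JPQL for cases the naming convention cannot express cleanly.
    @Query("SELECT u FROM User u WHERE u.name = :name AND u.age > :age ORDER BY u.age DESC")
    List<User> findOlderUsersByName(@Param("name") String name, @Param("age") int age);
}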
The N+1 Query Problem — A Silent Performance Killer

In one of my recent backend discussions, we revisited a classic issue that often goes unnoticed during development but can severely impact performance in production — the N+1 Query Problem.

What is the N+1 Problem?
It occurs when your application executes:
1 query to fetch a list of records (N items)
Then N additional queries to fetch related data for each record
Total = 1 + N queries

Example Scenario:
You fetch a list of 100 users, and for each user, you fetch their orders separately. That results in 101 database queries instead of just 1 or 2 optimized queries.

Why is it Dangerous?
1. Increased database load
2. Slower response time
3. Poor scalability under high traffic
4. Hard to detect in small datasets, but disastrous at scale

How to Overcome It? (a short sketch of options 1 and 3 follows below)
1. Use Join Fetch (Eager Loading)
Fetch related entities in a single query using JOINs.
2. Batch Fetching
Load related data in chunks instead of one-by-one queries.
3. Entity Graphs (JPA)
Define which relationships should be fetched together, dynamically.
4. Use DTO Projections
Fetch only the required fields instead of entire objects.
5. Caching Strategy
Leverage the second-level cache to reduce repeated DB hits.
6. Monitor SQL Logs
Always keep an eye on generated queries during development.

Pro Tip: The N+1 problem is not a bug — it’s a design inefficiency. It often comes from default lazy loading behavior in ORMs like Hibernate.

Interview Insight: A good engineer doesn’t just make code work — they make it scale efficiently.

#Java #SpringBoot #Hibernate #BackendDevelopment #PerformanceOptimization #Microservices #InterviewPrep
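For options 1 and 3 above, here is a minimal Spring Data JPA sketch. The User entity with a lazy orders collection and an active flag are assumptions for illustration, not code from the post.

import java.util.List;

import org.springframework.data.jpa.repository.EntityGraph;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;

// Assumes a User entity with a lazy @OneToMany List<Order> orders association and a boolean active field.
interface UserRepository extends JpaRepository<User, Long> {

    // 1. JOIN FETCH: users and their orders come back in a single SQL query,
    //    instead of 1 query for the users plus N queries for each user's orders.
    @Query("SELECT DISTINCT u FROM User u LEFT JOIN FETCH u.orders")
    List<User> findAllWithOrders();

    // 3. Entity graph: declare which associations to load eagerly for this call only.
    @EntityGraph(attributePaths = "orders")
    List<User> findByActiveTrue();
}

Batch fetching (option 2) is usually a one-line Hibernate setting, hibernate.default_batch_fetch_size, and DTO projections (option 4) combine well with either of the two approaches shown here.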
This week I solved Group Anagrams in DSA. And I learned CTEs in SQL. Two completely different topics. But they taught me the same thing.

Grouping related data together is a core skill in backend engineering. Let me explain.

Group Anagrams — the DSA problem:
You get a list of words. Your job — group words that are anagrams of each other.

"eat", "tea", "tan", "ate", "nat", "bat"
→ ["eat", "tea", "ate"] belong together
→ ["tan", "nat"] belong together
→ ["bat"] stands alone

The key insight? Sort each word → use it as a HashMap key → group all matching words under that key. Related data grouped under one key. ✅ (A runnable version of this idea follows below.)

CTE in SQL — the same idea, different world:
CTE stands for Common Table Expression. It lets you give a name to a complex subquery — and reuse it multiple times in the same query.

Instead of this messy nested query:

SELECT name FROM users
WHERE id IN (
  SELECT user_id FROM orders
  WHERE total_amount > (SELECT AVG(total_amount) FROM orders)
);

You write this clean version:

WITH avg_order AS (
  SELECT AVG(total_amount) AS avg_total FROM orders
),
high_value AS (
  SELECT user_id FROM orders, avg_order
  WHERE total_amount > avg_order.avg_total
)
SELECT name FROM users
WHERE id IN (SELECT user_id FROM high_value);

Same result. Completely readable. Easy to debug.

The pattern in both?
Break a complex problem into named steps.
Group related logic together.
Make it readable for the next person.

That is not just DSA or SQL thinking. That is backend engineering thinking.

Same concept. Two different languages. One mindset.

Still learning. Still connecting the dots every day. 💪

What concept clicked for you when you saw it in two different places?

#DSA #SQL #BackendDevelopment #Java #SpringBoot #BuildInPublic #JavaDeveloper #CareerTransition
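For anyone who wants to try the grouping idea, here is a small, self-contained Java version of the sorted-key approach described above, using the word list from the post as the example input.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GroupAnagrams {

    // Groups words that are anagrams of each other: the sorted characters of a word
    // act as the HashMap key, so all anagrams land in the same bucket.
    static List<List<String>> groupAnagrams(String[] words) {
        Map<String, List<String>> groups = new HashMap<>();
        for (String word : words) {
            char[] chars = word.toCharArray();
            Arrays.sort(chars);
            String key = new String(chars);
            groups.computeIfAbsent(key, k -> new ArrayList<>()).add(word);
        }
        return new ArrayList<>(groups.values());
    }

    public static void main(String[] args) {
        String[] words = {"eat", "tea", "tan", "ate", "nat", "bat"};
        System.out.println(groupAnagrams(words));
        // e.g. [[eat, tea, ate], [tan, nat], [bat]] (group order may vary)
    }
}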
Ever felt like SQL is doing more work than your actual logic?

Writing queries… mapping results… handling connections… It gets repetitive really fast.

What if you could just work with objects instead?

That’s where 𝗢𝗥𝗠 (𝗢𝗯𝗷𝗲𝗰𝘁 𝗥𝗲𝗹𝗮𝘁𝗶𝗼𝗻𝗮𝗹 𝗠𝗮𝗽𝗽𝗶𝗻𝗴) comes in. Instead of writing complex SQL queries, you interact with your database using Java objects. No more manually converting 𝗿𝗼𝘄𝘀 → 𝗼𝗯𝗷𝗲𝗰𝘁𝘀. ORM does it for you.

Let’s simplify it.

Without ORM:
• Write SQL queries
• Execute them
• Map the ResultSet to objects manually

With ORM:
• Create a class
• Map it to a table
• Call methods like save(), findById()

That’s it.

Popular ORM technologies we might know:
𝗛𝗶𝗯𝗲𝗿𝗻𝗮𝘁𝗲 (the most widely used framework)
𝗝𝗣𝗔 (Java Persistence API – the standard specification that Hibernate implements)

Why developers love ORM:
• Faster development (less boilerplate)
• Focus on business logic, not SQL
• Automatic mapping between objects & tables
• Easy integration with Spring Boot

But here’s the real insight: ORM is powerful… but blindly using it can hurt performance. Sometimes, writing a custom query is still the better choice.

So the goal isn’t to avoid SQL… It’s to use 𝗢𝗥𝗠 𝘀𝗺𝗮𝗿𝘁𝗹𝘆.

Next time you call 𝘀𝗮𝘃𝗲() in your project, remember — there’s a lot happening behind the scenes.

#CoreJava #JavaDeveloper #Spring #Framework #ORM #SpringBoot #SpringData #Hibernate #SoftwareDevelopment #SQL #BackEndDevelopment #MicroServices #Programming #aswintech #Database #BuildBetter
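As a tiny illustration of the “create a class, map it to a table” idea, here is a minimal sketch using JPA annotations and a Spring Data repository. The Product entity, its fields, and the jakarta.persistence namespace (Spring Boot 3+) are assumptions made for the example.

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import jakarta.persistence.Table;

import org.springframework.data.jpa.repository.JpaRepository;

// Hypothetical entity: the class maps to a table, the fields map to columns,
// and the ORM handles the row <-> object conversion.
@Entity
@Table(name = "products")
class Product {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;
    private double price;

    protected Product() { }            // no-args constructor required by JPA

    Product(String name, double price) {
        this.name = name;
        this.price = price;
    }

    public Long getId() { return id; }
    public String getName() { return name; }
    public double getPrice() { return price; }
}

// With Spring Data JPA, save() and findById() are inherited; no SQL or ResultSet mapping needed.
interface ProductRepository extends JpaRepository<Product, Long> { }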
I spent an hour debugging a query. The values were correct. The timestamps looked right. Nothing crashed. It was still returning wrong results.

The change that caused it looked minimal.

Before:
BETWEEN current_timestamp - interval '25' hours AND from_unixtime(exec_ts/1000) - interval '1' hour

After:
BETWEEN from_unixtime(prev_ts/1000) AND from_unixtime(exec_ts/1000) - interval '1' hour

I ran both functions in the SQL console - they printed the same types. I checked the Trino docs: from_unixtime() returns timestamp(3). Looked fine.

But we run in Spark SQL. Different dialect. In Spark, from_unixtime() returns a string. So BETWEEN was comparing a string against a timestamp. No error thrown. Just wrong results.

Nothing crashed. Nothing looked obviously broken. That was the whole problem.

Useful reminder: when behavior stops making sense, check the real runtime types in the runtime you actually use - not the docs for a different dialect. Shortest path to the bug is often: values → types → coercion rules.

That was the moment I remembered {} + [] + {} + [1] JavaScript territory. SQL does the same. Just without the memes.
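One quick way to make this kind of mismatch visible is to ask Spark for the runtime types directly. This is a sketch, assuming Spark 3.x run from Java; the epoch literal stands in for the post's exec_ts column, which is not shown here.

import org.apache.spark.sql.SparkSession;

public class FromUnixtimeTypeCheck {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("from_unixtime-type-check")
                .master("local[*]")
                .getOrCreate();

        // In Spark SQL, from_unixtime() returns a STRING, not a timestamp;
        // typeof() makes the silent mismatch visible.
        spark.sql("SELECT typeof(from_unixtime(1700000000)) AS from_unixtime_type, "
                + "typeof(current_timestamp()) AS current_timestamp_type")
             .show(false);
        // -> string | timestamp

        // One way to keep BETWEEN comparing timestamps on both sides: cast explicitly.
        spark.sql("SELECT CAST(from_unixtime(1700000000) AS TIMESTAMP) AS exec_time")
             .show(false);

        spark.stop();
    }
}

Newer Spark versions (3.1+) also offer timestamp_seconds() for epoch values, which skips the string round-trip entirely.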
📦 DTO vs Entity — A Small Concept That Makes a Big Difference

When I started building APIs, I used to send Entity objects directly in responses… Big mistake ❌

Let’s understand the right approach 👇

👉 Entity
Represents a database table
Directly mapped using JPA/Hibernate
Contains the full data structure

👉 DTO (Data Transfer Object)
Used to send data between client & server
Contains only required fields
Improves security & performance

💡 Why not expose the Entity directly?
❌ Exposes sensitive data (like passwords)
❌ Tight coupling with the database structure
❌ Hard to manage changes

🚀 What I do now:
✔ Use Entity → for the database
✔ Use DTO → for API request/response
✔ Convert using a Mapper (manual / MapStruct)

🧠 Key Insight: Good backend design is about controlling what data you expose. Consistency in learning API design = cleaner & more secure systems 💪

Are you using DTOs in your projects, or still returning Entities?

#Java #SpringBoot #DTO #BackendDevelopment #CleanCode #FullStackDeveloper
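A minimal sketch of the Entity → DTO split, with a hand-written mapper (MapStruct could generate the equivalent). The UserEntity and its fields are hypothetical and the JPA annotations are omitted for brevity.

import java.time.LocalDateTime;

// Hypothetical entity: the full table mapping,
// including fields that must never be exposed in an API response.
class UserEntity {
    Long id;
    String email;
    String passwordHash;      // stays inside the service layer
    LocalDateTime createdAt;
}

// DTO: only the fields the API response actually needs.
record UserResponse(Long id, String email, LocalDateTime createdAt) {

    // Simple manual mapping; MapStruct can generate this instead.
    static UserResponse from(UserEntity user) {
        return new UserResponse(user.id, user.email, user.createdAt);
    }
}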
🚀 Excited to share something I’ve been working on!

I’ve built a production-ready logging library for Python designed to be powerful, flexible, and super easy to use.

🔧 Key Highlights:
- ⚡ Zero/low configuration setup
- 🧩 Fully configurable via YAML, environment variables, or function parameters
- 🎨 Clean & structured logging (console + file)
- 🔄 Log streaming support for real-time processing
- 🔐 Built-in sensitive data masking
- 📦 Supports Kafka streaming with Avro serialization

💡 Why Avro? Using Avro enables compact binary serialization, ensures schema evolution, and provides high performance for streaming pipelines — making logs more efficient and scalable in distributed systems.

Whether you're building microservices or large-scale systems, this library is designed to fit seamlessly into production environments.

🔗 Check it out here: https://lnkd.in/g2YzGdgd
🔗 Source Code: https://lnkd.in/gJZhFJP9

Would love to hear your feedback and suggestions!

#Python #Logging #BackendDevelopment #Kafka #DistributedSystems #OpenSource #SoftwareEngineering
When “Correct” Queries Don’t Return Correct Results

Recently, I ran into an interesting backend issue where a JPA query was logically correct and the database data was correct, yet some expected results were missing.

At first, everything looked fine:
The filters were valid
The relationships made sense
The data existed in the database

But the output didn’t match expectations.

What was really happening:
The issue came from how the ORM translates relationships between entities into SQL. In some cases:
A simple condition on a related entity can silently change the type of SQL join being generated
Instead of keeping all records and checking conditions safely, the query can become restrictive
This can unintentionally exclude valid rows before filtering even happens

So even though the logic was correct, the execution plan was not what I expected.

The fix:
The solution was to make the relationship handling explicit instead of relying on implicit behavior. By explicitly controlling how related data is joined (for example, using a LEFT JOIN instead of relying on automatic joins), the query became predictable again and all valid results were correctly included. (A small sketch of the two variants follows below.)

Key lesson:
In ORM-based systems (like JPA/Hibernate):
The way you write a condition is not always how it executes
Relationships can change query behavior silently
Small assumptions can lead to missing or incomplete data

Takeaway: Always validate not only what your query says, but how your ORM translates it into SQL. Because sometimes, the issue is not the data or the logic… but the hidden behavior behind the abstraction.

#SoftwareEngineering #BackendDevelopment #ORM #Hibernate #SpringBoot #SystemDesign #Debugging #Programming #LessonsLearned
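To make the difference concrete, here is a hedged sketch with hypothetical Order and Customer entities (not the author's actual model). The first query filters through a path expression, which Hibernate typically turns into an implicit inner join; the second declares the LEFT JOIN explicitly.

import java.util.List;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

// Assumes an Order entity with an optional @ManyToOne Customer customer association,
// and a Customer entity with a region field.
interface OrderRepository extends JpaRepository<Order, Long> {

    // Implicit join: the path expression o.customer.region generates an INNER JOIN,
    // so orders without a customer are dropped before the filter is even applied.
    @Query("SELECT o FROM Order o WHERE o.customer.region = :region OR o.customer IS NULL")
    List<Order> findImplicit(@Param("region") String region);

    // Explicit LEFT JOIN keeps all orders and applies the condition safely.
    @Query("""
            SELECT o FROM Order o
            LEFT JOIN o.customer c
            WHERE c.region = :region OR c IS NULL
            """)
    List<Order> findExplicit(@Param("region") String region);
}

In the implicit version, the inner join removes customer-less orders before the WHERE clause runs, so the OR branch can never match; the explicit LEFT JOIN keeps those rows available to the condition.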
🚀 Day 20/100: Data Types Deep Dive – Precision, Size & Memory 📊🧠

Today’s learning focused on the science behind data storage in Java. Writing efficient code is not just about logic—it’s about choosing the right data type to optimize memory usage and performance.

Here’s a structured breakdown of what I explored:

🏗️ 1. Primitive Data Types – The Core Building Blocks
These are predefined types that store actual values directly in memory.

🔢 Numeric (Whole Numbers):
byte → 1 byte | Range: -128 to 127
short → 2 bytes | Range: -32,768 to 32,767
int → 4 bytes | Standard integer type
long → 8 bytes | Used for large values (L suffix)

🔢 Numeric (Floating-Point):
float → 4 bytes | Requires f suffix
double → 8 bytes | Default for decimal values

🔤 Non-Numeric:
char → 2 bytes | Stores a single Unicode character
boolean → JVM-dependent | Represents true or false

🏗️ 2. Non-Primitive Data Types – Reference Types
These types store references (memory addresses) rather than actual values:
String → Sequence of characters
Array → Collection of similar data types
Class & Interface → Blueprint for objects

💡 Unlike primitives, their default value is null, and they reside in Heap memory, with references stored in the Stack.

🧠 Key Insight:
Primitives → Store actual values (Stack memory)
Non-Primitives → Store references to objects (Heap memory)

⚙️ Why This Matters:
Choosing the correct data type improves:
✔️ Memory efficiency
✔️ Application performance
✔️ Code reliability at scale

📈 Today reinforced that strong fundamentals in data types are essential for writing optimized, production-ready Java applications.

#Day20 #100DaysOfCode #Java #Programming #MemoryManagement #DataTypes #SoftwareEngineering #CodingJourney #JavaDeveloper #10000Coders
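A small runnable check of the sizes, ranges, suffixes, and default values mentioned above (the field and variable names are just for illustration):

public class DataTypeRanges {

    // Reference-type fields default to null; primitive fields default to 0/false.
    static String uninitialisedReference;   // null
    static int uninitialisedPrimitive;      // 0

    public static void main(String[] args) {
        // The wrapper classes expose each primitive's size and range as constants.
        System.out.println("byte:  " + Byte.SIZE + " bits, " + Byte.MIN_VALUE + " to " + Byte.MAX_VALUE);
        System.out.println("short: " + Short.SIZE + " bits, " + Short.MIN_VALUE + " to " + Short.MAX_VALUE);
        System.out.println("int:   " + Integer.SIZE + " bits, " + Integer.MIN_VALUE + " to " + Integer.MAX_VALUE);
        System.out.println("long:  " + Long.SIZE + " bits, " + Long.MIN_VALUE + " to " + Long.MAX_VALUE);
        System.out.println("char:  " + Character.SIZE + " bits");

        long worldPopulation = 8_000_000_000L;   // L suffix needed: the literal overflows int
        float ratio = 0.75f;                     // f suffix needed: decimal literals default to double
        System.out.println(worldPopulation + " " + ratio);

        System.out.println("reference default: " + uninitialisedReference);
        System.out.println("primitive default: " + uninitialisedPrimitive);
    }
}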