🚀 Java Streams in Action: Partitioning Data! 👉 Partition employees into two groups: one earning above ₹50,000 and one earning ₹50,000 or less.

Have you ever needed to split data into two groups based on a condition? Here's a simple example using Java Streams to partition employees by salary.

🔍 Approach
👉 stream() - converts the list into a stream for functional-style operations.
👉 filter(Objects::nonNull) - removes null elements to avoid a NullPointerException.
👉 collect(...) - terminal operation that gathers the stream into a collection (here, a Map).
👉 Collectors.partitioningBy(...) - the key part 🔥 It splits the data into two groups based on a predicate:
true -> employees earning more than ₹50,000
false -> employees earning ₹50,000 or less

✔ Automatically groups into exactly two categories
✔ Ideal for binary (true/false) conditions

📊 Output structure
true -> list of high-salary employees
false -> list of the remaining employees

- Use partitioningBy when your condition yields exactly two groups.
- If you need multiple groups (e.g. department-wise), use groupingBy.

💻 I’ve added my Java solution in the comments below. Please let me know if there are any other approaches I could try.

#Java #JavaStreams #CodingInterview #BackendDevelopment #SpringBoot #Developers
Partitioning Data with Java Streams
-
📌 Collectors in Java Streams: Transforming Data Efficiently

Collectors are used with streams to transform and gather results into collections or other structures. They are used with collect(), a terminal operation.

1️⃣ What is collect()?
collect() converts a stream into a final result such as a List, Set, Map, or grouped data.

Example:
List<Integer> list = stream.collect(Collectors.toList());

2️⃣ Common Collectors

🔹 toList() - convert a stream to a List
list.stream().collect(Collectors.toList());

🔹 toSet() - removes duplicates
list.stream().collect(Collectors.toSet());

🔹 toMap() - convert to a Map
list.stream().collect(Collectors.toMap(key -> key.getId(), value -> value));

3️⃣ groupingBy (very important) - groups elements by a key

Example:
Map<String, List<Employee>> map = employees.stream()
    .collect(Collectors.groupingBy(e -> e.getDepartment()));

4️⃣ counting() - counts elements
long count = list.stream().collect(Collectors.counting());

5️⃣ joining() - joins strings
String result = list.stream().collect(Collectors.joining(", "));

6️⃣ Why Collectors are powerful
✔ Transform data easily
✔ Replace complex loops
✔ Enable grouping and aggregation
✔ Improve readability

🧠 Key takeaway: Collectors turn streams into meaningful results. They are essential for data transformation and aggregation.

#Java #Java8 #Streams #Collectors #BackendDevelopment
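The collectors above can be combined in a single runnable sketch. The names and data below are made up for illustration; the calls themselves are the standard `java.util.stream.Collectors` API:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class CollectorsDemo {
    public static void main(String[] args) {
        List<String> names = List.of("Asha", "Ravi", "Amit", "Rekha");

        // groupingBy: bucket names by their first letter
        Map<Character, List<String>> byInitial = names.stream()
                .collect(Collectors.groupingBy(n -> n.charAt(0)));
        System.out.println(byInitial.get('A'));

        // counting() used as a downstream collector of groupingBy
        Map<Character, Long> countByInitial = names.stream()
                .collect(Collectors.groupingBy(n -> n.charAt(0), Collectors.counting()));
        System.out.println(countByInitial.get('R'));

        // joining(): concatenate elements with a delimiter
        String joined = names.stream().collect(Collectors.joining(", "));
        System.out.println(joined);
    }
}
```

Note how counting() plugs into groupingBy as a second argument: most collectors compose this way, which is what replaces nested loops and manual accumulator maps.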
-
One trace. 572,789 spans. 62% of all trace data in a five-minute sample, from a single service.

This was a batch processing job. The #OpenTelemetry Java auto-instrumentation agent created a span for every database call inside a loop. The agent does not have a built-in span count cap per trace. It instruments what it finds, and a batch job iterating over hundreds of thousands of records will produce hundreds of thousands of spans.

The trace was unusable. No backend renders half a million spans in a waterfall view. The cost was astronomical, and the four largest traces in the sample had no root span metadata, suggesting missing or disconnected parent spans. Most originated from batch or scheduled tasks like `https://lnkd.in/dBWthabm`.

This organization runs 3,532 services, all on Java auto-instrumentation v1.33.6. The agent works well for request-response services. It was never designed for uncapped iteration over data.

The fix depends on the batch pattern. For jobs that process items independently, use span links instead of parent-child relationships. Each item gets its own trace, linked back to the batch trace. This keeps individual traces small and queryable while preserving the connection to the batch context.

For specific instrumentation that generates noise, the agent supports suppression flags. Setting `otel.instrumentation.common.experimental.suppress-messaging-receive-spans=true` eliminates receive spans for messaging consumers. Similar flags exist for JDBC, Redis, and other libraries. Review which instrumentations fire inside your loops and suppress the ones that add volume without insight.

Auto-instrumentation assumes your services handle requests. When your workload does not fit that model, you need guardrails. The agent will not set them for you.

And about last Friday's quiz: what does `stability: stable` mean for an OpenTelemetry semantic convention attribute? The answer is that it follows semver deprecation rules. A stable attribute is not frozen. It can still be deprecated, but the project must provide a migration path and maintain backward compatibility for a defined period. An `experimental` attribute carries no such guarantee and might be renamed or removed between releases. If you build dashboards or code generation around an experimental attribute, you accept the risk of breakage on upgrade.
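As a concrete sketch, the suppression flag mentioned above is passed as a JVM system property alongside the agent. The jar names here are placeholders; the messaging flag is the one quoted in the post, and the `-Dotel.instrumentation.<name>.enabled=false` pattern is how the agent disables a single instrumentation library:

```shell
# Hypothetical launch command for a batch job running under the OTel Java agent.
# suppress-messaging-receive-spans drops receive spans for messaging consumers;
# the jdbc.enabled flag shows the per-library disable pattern.
java -javaagent:opentelemetry-javaagent.jar \
     -Dotel.instrumentation.common.experimental.suppress-messaging-receive-spans=true \
     -Dotel.instrumentation.jdbc.enabled=false \
     -jar batch-job.jar
```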
-
🚀 Evolution of Database Interaction in Java (🔧➡️⚙️➡️🚀)

It’s fascinating how the “most natural way” to work with databases has evolved over time 👇

🔹 JDBC
You write everything manually: queries, connections, result parsing. Full control, but a lot of boilerplate.

🔹 Hibernate ORM
We move to object mapping. Less SQL, more Java objects. Cleaner, but it still requires configuration and an understanding of ORM behavior.

🔹 Spring Boot with JPA
Things get easier: auto-configuration, reduced setup, better integration. The focus shifts more toward business logic.

🔹 Spring Data JPA (repository methods) 🤯
Now this feels like magic! Define methods → the framework generates queries → minimal SQL needed.

👉 From writing complex SQL to just defining method names… we’ve come a long way.

💡 But here’s the reality: every layer matters. Understanding JDBC and SQL makes you a stronger developer, even when using high-level abstractions.

📌 Abstraction reduces effort, but fundamentals build mastery.

What’s your preferred way of interacting with databases? 👇

#Java #SpringBoot #JPA #Hibernate #BackendDevelopment #SoftwareEngineering #LearningJourney
-
🚀 Day 4 of My Advanced Java Journey – PreparedStatement in JDBC

Today, I learned one of the most important concepts in JDBC: PreparedStatement, which makes database operations more secure and efficient.

🔹 What is PreparedStatement?
A PreparedStatement is used to execute SQL queries with dynamic values using placeholders (?). It helps in writing cleaner, reusable, and secure database code.

🔹 Steps to use PreparedStatement

1️⃣ Load the driver
Load the JDBC driver class.

2️⃣ Establish the connection
Connect to the database using URL, username, and password.

3️⃣ Create the PreparedStatement
Write the SQL query with placeholders (?):

String query = "INSERT INTO employee (id, name, desig, salary) VALUES (?, ?, ?, ?)";
PreparedStatement pstmt = con.prepareStatement(query);

4️⃣ Set parameter values
Assign values using setter methods:

pstmt.setInt(1, id);
pstmt.setString(2, name);
pstmt.setString(3, desig);
pstmt.setInt(4, salary);

5️⃣ Execute the query

int rows = pstmt.executeUpdate();

🔹 Batch processing (multiple inserts)
Used to insert multiple records efficiently in one go:

Scanner scan = new Scanner(System.in);
String s;
do {
    pstmt.setInt(1, scan.nextInt());
    pstmt.setString(2, scan.next());
    pstmt.setString(3, scan.next());
    pstmt.setInt(4, scan.nextInt());
    pstmt.addBatch();
    System.out.println("Add more? (yes/no)");
    s = scan.next();
} while (s.equalsIgnoreCase("yes"));
int[] result = pstmt.executeBatch();

🔹 Important methods
setInt(), setString(), setFloat() → set values
executeUpdate() → insert/update/delete
addBatch() → add a query to the batch
executeBatch() → execute all at once

🔍 What I explored beyond the session
PreparedStatement prevents SQL injection attacks 🔐
Precompiled queries improve performance
Difference between Statement and PreparedStatement
Importance of closing resources (Connection, PreparedStatement)
Using try-with-resources for better resource management

💡 PreparedStatement is a must-know concept for writing secure and optimized database applications in Java.
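The SQL-injection point above can be demonstrated without a real database. This self-contained sketch (the table, column, and payload are invented for illustration) shows what naive string concatenation does to the query text, and why the fixed `?` placeholder shape of a PreparedStatement is immune:

```java
public class InjectionDemo {
    public static void main(String[] args) {
        String userInput = "x' OR '1'='1"; // a classic injection payload

        // Naive concatenation: the payload becomes part of the SQL grammar,
        // so the WHERE clause is now always true and every row would leak.
        String unsafe = "SELECT * FROM employee WHERE name = '" + userInput + "'";
        System.out.println(unsafe);

        // With a PreparedStatement the query shape is fixed at prepare time;
        // the driver sends the payload as a plain value, never as SQL text.
        String safe = "SELECT * FROM employee WHERE name = ?";
        System.out.println(safe);
    }
}
```

Running it prints the two query strings side by side: the concatenated one has been structurally rewritten by the input, while the parameterized one keeps a single `?` slot regardless of what the user types.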
🙌 Special thanks to the amazing trainers at TAP Academy: kshitij kenganavar Sharath R MD SADIQUE Bibek Singh Vamsi yadav Hemanth Reddy Harshit T Ravi Magadum Somanna M G Rohit Ravinder TAP Academy 📌 Learning in public. Improving every single day. #Java #AdvancedJava #JDBC #PreparedStatement #BackendDevelopment #LearningInPublic #VamsiLearns
-
Why Java developers struggle with SQL

Most Java developers don’t fail at SQL because it’s hard. They fail because they think row-by-row, not set-based.

In Java:
for (User user : users) {
    if (user.getAge() > 18) {
        result.add(user);
    }
}

In SQL:
SELECT * FROM users WHERE age > 18;

Core concept: SQL is not iteration. It’s declarative thinking: you describe what, not how.

Analogy:
Java = cooking step-by-step
SQL = ordering the final dish

Key takeaway: If you try to “convert loops into SQL,” you’ll stay average. Learn to think in sets, not steps.

CTA: What was harder for you: loops or SQL thinking?

#Java #SQL #BackendDevelopment #ProgrammingMindset #Developers
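Interestingly, Java itself has a declarative, set-like idiom in the Streams API: the loop above collapses into a filter that reads almost like the SQL WHERE clause. A small self-contained sketch (the User record and data are invented for illustration):

```java
import java.util.List;
import java.util.stream.Collectors;

public class SetThinkingDemo {
    record User(String name, int age) {}

    public static void main(String[] args) {
        List<User> users = List.of(
                new User("Asha", 25),
                new User("Ravi", 17),
                new User("Meena", 30));

        // Declarative, SQL-like: describe *what* you want, not how to loop.
        // Roughly: SELECT name FROM users WHERE age > 18;
        List<String> adults = users.stream()
                .filter(u -> u.age() > 18)
                .map(User::name)
                .collect(Collectors.toList());

        System.out.println(adults);
    }
}
```

The filter predicate plays the role of the WHERE clause: you state the condition once and let the pipeline decide how to apply it, which is exactly the set-based mindset the post describes.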
-
Background jobs fail silently. 😶

Your user clicks "Generate Report". The API returns 202 Accepted in milliseconds. 15 minutes later... nothing in their inbox. Without observability, the answer is: we have no idea what went wrong.

That's why we wrote a deep-dive on making Java background jobs fully observable:
👉 Structured logging with SLF4J's MDC for context that follows the job
👉 Micrometer metrics for throughput, duration, and error rates
👉 OpenTelemetry traces to pinpoint exactly where time is spent

And how JobRunr handles all of this natively, from the built-in dashboard to automatic MDC propagation and Micrometer integration.

https://lnkd.in/ehFyvgiR
-
🚀 43/100 - N+1 Problem in Hibernate

If you’re working with JPA/Hibernate, you are likely facing this without realizing it.

What is the N+1 problem?
🔸 1 query to fetch the parent data (Department)
🔸 N queries to fetch the child data (Employees)
🔸 Total: N + 1 queries
🔸 Fetching 5 departments triggers 6 queries

Why does this happen?
🔸 Root cause: lazy loading
🔸 Hibernate does NOT fetch related data initially
🔸 It loads a proxy (placeholder)
🔸 When the proxy is accessed, it fires a query
🔸 Access inside a loop → multiple queries → N+1

Where it usually happens
🔹 OneToMany relationships
🔹 Looping over entities
🔹 Returning entities directly from APIs
🔹 Nested object access

Why this is a problem
🔸 Multiple DB hits → performance degradation
🔸 Increased latency
🔸 Heavy load on the database
🔸 Not scalable for large datasets

How to solve it

Spring Boot approach (recommended):
🔸 @EntityGraph(attributePaths = "employees")
🔸 Forces a single-query fetch

Hibernate / JPQL approach:
🔸 JOIN FETCH
Example: SELECT d FROM Department d JOIN FETCH d.employees

Other approaches:
🔸 DTO projection → fetch only the required data
🔸 Batch fetching → reduces the number of queries
🔸 Avoid blind EAGER loading

Key takeaway
🔹 Lazy loading is not bad
🔹 Lazy loading + loop = N+1 problem
🔹 Always control how data is fetched

I’ve created a complete backend developer guide covering:
🔸 What the N+1 problem is (with examples)
🔸 Why it happens (deep dive into lazy loading)
🔸 Real code (Entity, Repo, Service, Controller)
🔸 With vs without N+1 execution
🔸 EntityGraph vs fetch join
🔸 Multiple optimization approaches
🔸 Diagrams for clear understanding

Worth going through if you're preparing for backend interviews or building scalable APIs. Save & repost 🔁 if this helps someone preparing seriously. Follow Surya Mahesh Kolisetty for more backend and Java deep dives.

#Java #SpringBoot #Hibernate #BackendDevelopment #JPA #SystemDesign #Performance #InterviewPreparation #SoftwareEngineering #CodingInterview #Developers #TechLearning #CFBR #MicroServices
-
💻 JDBC in Java: Connecting Code with Databases 🚀

Every backend application needs to store, retrieve, and manage data. That's where JDBC (Java Database Connectivity) comes in 🔥 This visual breaks down the complete JDBC flow with a practical example 👇

🧠 What is JDBC?
JDBC is a Java API that allows applications to connect to and interact with databases.
👉 It acts as a bridge between the Java application ↔ the database.

🔄 JDBC workflow (step by step):
1️⃣ Load & register the driver
2️⃣ Establish the connection
3️⃣ Create a statement
4️⃣ Execute the query
5️⃣ Process the result
6️⃣ Close the connection

🔍 Core components:
✔ DriverManager → manages database drivers
✔ Connection → represents the database connection
✔ Statement → executes SQL queries
✔ PreparedStatement → parameterized queries (secure 🔐)
✔ ResultSet → holds query results

⚡ Basic example:

Connection con = DriverManager.getConnection(url, user, pass);
PreparedStatement ps = con.prepareStatement(
    "SELECT * FROM students WHERE id = ?");
ps.setInt(1, 1);
ResultSet rs = ps.executeQuery();
while (rs.next()) {
    System.out.println(rs.getString("name"));
}

🚀 Types of JDBC drivers:
Type 1 → JDBC-ODBC bridge
Type 2 → Native API
Type 3 → Network protocol
Type 4 → Thin driver (most used ✅)

💡 Why use PreparedStatement?
✔ Prevents SQL injection
✔ Improves performance
✔ Safer for dynamic queries

⚠️ Best practices:
✔ Always close resources (Connection, Statement, ResultSet)
✔ Use try-with-resources
✔ Handle exceptions properly
✔ Prefer PreparedStatement over Statement

🎯 Key takeaway: JDBC is not just about running queries. It's the foundation of how Java applications communicate with databases efficiently and securely.

#Java #JDBC #Database #BackendDevelopment #Programming #SoftwareEngineering #Coding #100DaysOfCode #Learning
-
“I wasted hours writing JDBC code before I understood this.”

If you’ve used JDBC, you know the drill:
Open connection
Create statement
Execute query
Loop through ResultSet
Manually map data
Close everything (and hope nothing breaks)

Now imagine doing this for every table in a real project.
👉 That’s not scalable.
👉 That’s exactly why Hibernate exists.

💡 The real problem
JDBC is not “bad”; it’s just too low-level. You are forced to:
- Write repetitive boilerplate code
- Manually convert table data → Java objects
- Handle exceptions and resource management yourself
👉 This slows you down and increases bugs.

🔥 What Hibernate actually solves
Hibernate is an ORM (Object Relational Mapping) tool. Instead of thinking in tables and rows:
👉 You work with Java objects

Hibernate handles:
* Mapping objects ↔ database tables
* Generating SQL queries
* Managing connections and transactions

🧠 Two concrete situations
👉 Example 1: You fetch 100 employees
JDBC: loop + manually map each column
Hibernate: List<Employee> directly

👉 Example 2: You insert a record
JDBC: write the INSERT query + set parameters
Hibernate: save/persist the object

🎯 Takeaway
Hibernate is not about “less code.”
👉 It’s about focusing on business logic instead of database plumbing.
If you don’t understand why Hibernate exists, you’ll misuse it and create worse performance than JDBC.

Day 2 → I’ll break down ORM deeply (and where it fails in real projects)

#Java #Hibernate #SpringBoot #OpenToWork #BackendDevelopment #ImmediateJoiner
-
Lazy vs Eager Loading: The N+1 Problem Every Java Dev Must Know

One of the most common performance killers in Spring Data JPA is the infamous N+1 query problem, and it all starts with how you configure your fetch strategy.

By default, @OneToMany and @ManyToMany use LAZY loading: related data is fetched only when accessed. Meanwhile, @ManyToOne and @OneToOne default to EAGER: data is loaded immediately with the parent.

The trap? When you load a list of 100 customers and then access their orders in a loop:

List<Customer> customers = customerRepo.findAll(); // 1 query
for (Customer c : customers) {
    c.getOrders().size(); // 100 more queries!
}
// Total: 101 SQL queries = performance disaster

Solutions that actually work:

// 1. JOIN FETCH in JPQL
@Query("SELECT c FROM Customer c JOIN FETCH c.orders")
List<Customer> findAllWithOrders();

// 2. @EntityGraph
@EntityGraph(attributePaths = {"orders"})
List<Customer> findAll();

Rule of thumb:
✅ Keep FetchType.LAZY as the default
✅ Use JOIN FETCH or @EntityGraph when you know you need related data
✅ Enable Hibernate SQL logging to detect N+1 early
❌ Never switch everything to EAGER; that's trading N+1 for over-fetching

#Java #SpringBoot #BackendDevelopment #SpringDataJPA #Performance #LearningInPublic
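For the SQL-logging tip, a minimal configuration sketch for a Spring Boot application. These property names are standard Spring Boot/Hibernate logging settings, but the exact bind-logger name varies by Hibernate version, so treat this as a starting point:

```properties
# application.properties: surface every SQL statement Hibernate issues,
# so a sudden burst of identical SELECTs (the N+1 signature) is visible.
logging.level.org.hibernate.SQL=DEBUG
# Log bound parameter values (Hibernate 6 logger name; Hibernate 5 uses
# org.hibernate.type.descriptor.sql.BasicBinder instead):
logging.level.org.hibernate.orm.jdbc.bind=TRACE
```

With this enabled, loading 100 customers and touching their orders prints 101 SELECT statements in the log, which makes the problem impossible to miss during development.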
-
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.stream.Collectors;

// Assumes an Employee class with (name, department, salary) fields and a getSalary() getter.
public class StreamPractice {
    public static void main(String[] args) {
        List<Employee> listOfEmployees = List.of(
                new Employee("Suresh", "IT", 30000),
                new Employee("Ramesh", "IT", 45000),
                new Employee("Amit", "HR", 28000),
                new Employee("Neha", "HR", 34000),
                new Employee("Kiran", "BPO", 25000),
                new Employee("John", "Admin", 550000),
                new Employee("Suresh", "IT", 30000),
                new Employee("Amit", "HR", 29000),
                new Employee("Ruresh", "FieldWork", 350000),
                new Employee("Unknown", null, 50000));

        // Partition employees into two groups: salary above ₹50,000 vs ₹50,000 or less.
        Map<Boolean, List<Employee>> map = listOfEmployees.stream()
                .filter(Objects::nonNull)
                .collect(Collectors.partitioningBy(e -> e.getSalary() > 50000));

        System.out.println("---------Employees with salary > 50000---------");
        map.get(Boolean.TRUE).forEach(System.out::println);
        System.out.println("---------Employees with salary <= 50000--------");
        map.get(Boolean.FALSE).forEach(System.out::println);
    }
}
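One possible extension of the solution above: partitioningBy accepts a second, downstream collector, so you can count (or sum, or average) each partition in the same pass instead of materializing both lists. A self-contained sketch with its own minimal Employee record and invented data:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class PartitionCountDemo {
    record Employee(String name, int salary) {}

    public static void main(String[] args) {
        List<Employee> employees = List.of(
                new Employee("Asha", 72000),
                new Employee("Ravi", 48000),
                new Employee("Meena", 51000));

        // Partition and count in one pass via a downstream collector.
        Map<Boolean, Long> counts = employees.stream()
                .collect(Collectors.partitioningBy(
                        e -> e.salary() > 50000,
                        Collectors.counting()));

        System.out.println(counts.get(true));   // count of high earners
        System.out.println(counts.get(false));  // count of the rest
    }
}
```

The same downstream slot takes Collectors.averagingInt(Employee::salary) or Collectors.mapping(...) if you need per-partition aggregates rather than raw counts.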