🚀 Today I went through an interesting backend concept — HashMap Collision (Java)

❓ Question
Why do collisions happen in HashMap, and how does Java handle them (Java 8)?

---

⚠️ The Problem — What is a Collision?
HashMap internally stores data in an array of buckets. Whenever a key is inserted, Java calculates:

index = hash(key) % array_size

Since the array size is limited (16, 32, 64…), different keys can sometimes generate the same index.

Example:
hash("CAT") = 18
hash("DOG") = 10
array_size = 8

18 % 8 = 2
10 % 8 = 2

Both keys go to index 2 → a collision occurs.

---

🧠 Understanding Hash
A hash is simply a numeric value generated from a key by a hash function; it lets the system quickly decide where the data should be stored.

---

✅ The Solution — How Java 8 Handles Collisions
When multiple keys map to the same bucket:
1️⃣ The first entry is stored normally
2️⃣ Subsequent entries are chained in a LinkedList within that bucket
3️⃣ If the bucket grows large (8 entries, and the table has at least 64 buckets), the LinkedList is converted into a Red-Black Tree for faster search

Why this matters:
- LinkedList search → O(n)
- Tree search → O(log n)

So even with many collisions, performance remains efficient.

---

📌 Final Insight
HashMap performance depends heavily on hashing + bucket indexing. Collisions are normal, but Java smartly optimizes them using LinkedList → Tree conversion to maintain speed.

#BackendLearning #Java #HashMap #SpringBoot #SoftwareEngineering #LearningInPublic #JavaDeveloper #cfbr
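The index calculation above can be sketched in a few lines. Note that the real HashMap uses bitwise masking, hash & (capacity - 1), which is equivalent to % for power-of-two capacities, and first "spreads" the hash by XOR-ing in its high bits. This is a minimal sketch of that computation; the key set is just an illustration:

```java
import java.util.Objects;

public class BucketIndexDemo {
    // Mirrors HashMap's hash spreading: XOR the high 16 bits into the low 16
    static int spread(int h) {
        return h ^ (h >>> 16);
    }

    // Bucket index for a power-of-two capacity, computed the way HashMap does
    static int indexFor(Object key, int capacity) {
        return spread(Objects.hashCode(key)) & (capacity - 1);
    }

    public static void main(String[] args) {
        int capacity = 8; // deliberately tiny table to make collisions likely
        String[] keys = {"CAT", "DOG", "FISH", "BIRD"};
        for (String k : keys) {
            System.out.println(k + " -> bucket " + indexFor(k, capacity));
        }
    }
}
```

With only 8 buckets, several of these keys are likely to share an index — exactly the collision scenario the post describes.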
Subham Mohanty’s Post
The Hidden Mechanism Behind ThreadLocal in Java

ThreadLocal is often explained simply as: “Data stored per thread.”

That’s true — but the interesting part is how it actually works internally. Most developers think the data lives inside ThreadLocal. It doesn’t.

How ThreadLocal Works Internally
Each Thread object maintains its own internal structure:

Thread
└── ThreadLocalMap
    ├── ThreadLocal → Value
    └── ThreadLocal → Value

The important detail: the map belongs to the Thread, not to ThreadLocal. ThreadLocal simply acts as a key.

Basic Flow
When you call threadLocal.set(value), internally:

thread = currentThread
map = thread.threadLocalMap
map.put(threadLocal, value)

When you call threadLocal.get(), it retrieves the value from the current thread’s map. Each thread therefore has its own independent copy.

Where This Is Used in Real Systems
You’ll find ThreadLocal used in many frameworks:
• Spring Security → SecurityContextHolder
• Transaction management → TransactionSynchronizationManager
• Logging correlation IDs
• Request-scoped context

It allows frameworks to store request-specific data without passing it through every method.

The Hidden Danger
If you forget to call threadLocal.remove(), you can create memory leaks. Why? Because thread pools reuse threads, and old values may remain attached to long-lived threads.

ThreadLocal is simple conceptually. But its internal design is what makes many Java frameworks work efficiently.

Have you used ThreadLocal in production code?

#Java #CoreJava #Multithreading #ThreadLocal #SpringBoot #BackendEngineering #InterviewPreparation
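The per-thread isolation described above is easy to see in a small demo. This is a minimal sketch (the variable and thread names are illustrative); note the remove() in the finally block, which is the leak-avoidance habit the post recommends:

```java
public class ThreadLocalDemo {
    // One ThreadLocal key; each thread gets its own slot in its own ThreadLocalMap
    static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            // set() writes into the CURRENT thread's map, keyed by CONTEXT
            CONTEXT.set("value-of-" + Thread.currentThread().getName());
            try {
                System.out.println(Thread.currentThread().getName()
                        + " sees: " + CONTEXT.get());
            } finally {
                CONTEXT.remove(); // critical when threads are pooled and reused
            }
        };

        Thread t1 = new Thread(task, "worker-1");
        Thread t2 = new Thread(task, "worker-2");
        t1.start(); t2.start();
        t1.join(); t2.join();

        // The main thread never called set(), so its own map has no entry
        System.out.println("main sees: " + CONTEXT.get()); // null
    }
}
```

Each worker prints only its own value — the threads never see each other's data, because the map lives on the Thread, not on the ThreadLocal.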
A small Java mistake that wasted hours — but taught a solid lesson

Today I hit a bug that looked impossible at first.

Data was present in the DB ✅
API worked the first time ✅
After some calls, the response started returning null ❌
Logs, alerts, queries — everything looked correct

After checking the DB and JPA logic thoroughly, the real issue was… one line of code.

if (senderId != receiverId) {
    // fetch data
}

Looks harmless, right? But this is where things broke.

What went wrong
In Java, != compares object references, not values. So when senderId and receiverId are:
- Long
- Integer
- String
- any wrapper/object type

the condition may behave inconsistently depending on JVM object creation and caching (for example, Long.valueOf caches values from -128 to 127, so small IDs can compare equal by reference while larger ones don’t).

Same value ≠ same reference → condition fails → query not executed → null response. That’s why it worked once and failed later.

The fix

if (!Objects.equals(senderId, receiverId)) {
    // fetch data
}

- Compares values
- Null-safe
- Predictable behavior

Key takeaway
Never use == or != for value comparison on objects in Java.

This wasn’t:
- a DB issue
- a JPA issue
- a query issue

It was a Java fundamentals issue showing up at runtime.

Small mistakes like this are easy to miss — and that’s exactly why understanding core language behavior still matters, even with modern frameworks. Every backend developer hits this once. The important part is learning why it happened.

#Java #SpringBoot #BackendDevelopment #Debugging #CleanCode #LearningInPublic
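The caching behavior that made this bug intermittent can be reproduced in a few lines. Long.valueOf (used by autoboxing) is documented to cache values from -128 to 127, so == happens to work for small values and silently breaks for larger ones:

```java
import java.util.Objects;

public class ReferenceVsValue {
    public static void main(String[] args) {
        Long a = 100L, b = 100L;    // inside the -128..127 cache: same object
        Long x = 1000L, y = 1000L;  // outside the cache: distinct objects

        System.out.println(a == b);               // true (same cached reference)
        System.out.println(x == y);               // false on a default JVM (different references!)
        System.out.println(Objects.equals(x, y)); // true (value comparison, null-safe)
    }
}
```

This is exactly why the API "worked the first time": with small IDs the references happened to match, and with larger IDs they didn't.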
Day 23 — LeetCode Progress (Java)

Problem: Search in Rotated Sorted Array

Required: Given a sorted array that has been rotated at some pivot unknown beforehand, search for a target value in O(log n) time. If found, return its index; otherwise, return -1. Brute force O(n) is not acceptable — the constraint forces binary-search thinking.

Idea:
Even though the array is rotated, at least one half of the array (left or right) is always sorted. At every step of binary search:
- Either left → mid is sorted
- Or mid → right is sorted

If we can detect which half is sorted, we can check whether the target lies inside that sorted range and eliminate the other half. The rotation doesn’t break ordering — it just splits it into two sorted segments.

Approach:
1. Initialize left = 0, right = n - 1
2. While left ≤ right:
   - Compute mid
   - If nums[mid] == target → return mid
   - If the left half is sorted (nums[left] ≤ nums[mid]):
     • If the target lies within that sorted range → search the left half (move right down)
     • Else → search the right half (move left up)
   - Otherwise the right half must be sorted:
     • If the target lies within that sorted range → search the right half (move left up)
     • Else → search the left half (move right down)
3. If the loop ends → return -1

Time Complexity: O(log n)
Space Complexity: O(1)

Core Insight:
Binary search isn’t about sorted arrays. It’s about partially ordered structure. If you can detect order in fragments, you can still cut the search space in half.

#LeetCode #DSA #Java #BinarySearch #CodingJourney
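The approach above translates directly into Java; here is one way to write it (class and method names are my own):

```java
public class RotatedSearch {
    public static int search(int[] nums, int target) {
        int left = 0, right = nums.length - 1;
        while (left <= right) {
            int mid = left + (right - left) / 2; // overflow-safe midpoint
            if (nums[mid] == target) return mid;

            if (nums[left] <= nums[mid]) {
                // Left half [left..mid] is sorted
                if (nums[left] <= target && target < nums[mid]) {
                    right = mid - 1;  // target is inside the sorted left half
                } else {
                    left = mid + 1;   // eliminate the left half
                }
            } else {
                // Right half [mid..right] must be sorted
                if (nums[mid] < target && target <= nums[right]) {
                    left = mid + 1;   // target is inside the sorted right half
                } else {
                    right = mid - 1;  // eliminate the right half
                }
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] nums = {4, 5, 6, 7, 0, 1, 2};
        System.out.println(search(nums, 0));  // 4
        System.out.println(search(nums, 3));  // -1
    }
}
```

Each iteration discards half the array, so the loop runs at most log₂(n) times — O(log n) as required, with O(1) extra space.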
Heap Memory and Stack Memory: What’s the Difference?

Heap Memory:
-> The heap is a common storage area shared by all threads and all classes within a program.
-> It stores any objects created by the program, including Java primitives defined as instance variables (they live inside their enclosing objects).
-> The heap is an area of JVM-managed memory.
-> The heap is cleaned regularly by the garbage collector (GC).

Stack Memory:
-> The purpose of stack memory is to retain context for each active method within each thread.
-> Local variables defined as primitives are stored on the stack.
-> The stack stores references (pointers) to each local object variable; the objects themselves live on the heap.
-> The stack is a structure within the Threads area, which is in native memory.
-> A stack frame is popped from the stack when a method completes, freeing the space it occupied.

Each has its own purpose and its own method of organization. A good understanding of the #stack and the #heap helps you plan memory usage more efficiently, and is useful for #Troubleshooting and #PerformanceTuning.

Read more in this blog: https://lnkd.in/g9kVQfcK
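The split described above can be annotated on a tiny program. This is a conceptual illustration (the class names are mine), with comments marking where each piece of data lives:

```java
public class MemoryDemo {
    static class Person {
        int age;                      // primitive INSTANCE variable → stored
                                      // inside the Person object, on the HEAP
        Person(int age) { this.age = age; }
    }

    public static void main(String[] args) {
        int count = 42;               // local primitive → main's STACK frame
        Person p = new Person(30);    // 'p' (the reference) is on the stack;
                                      // the Person object it points to is on the heap
        System.out.println(count + " " + p.age);
    }  // main's stack frame is popped here; the Person object becomes
       // unreachable and is now eligible for garbage collection
}
```

Popping the frame is instantaneous and deterministic; reclaiming the heap object happens later, whenever the GC runs — which is exactly the organizational difference the post describes.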
HashMap Collisions in Java — The Hidden Performance Trap Most Developers Ignore

Ever wondered what really happens when two keys land in the same HashMap bucket? That’s called a collision — and it can silently turn your O(1) lookup into O(n). Let’s break it down.

What is a Collision?
When two different keys produce the same hash index, they end up in the same bucket.

How Java Handles Collisions (Internally)

1) Linked List (Before Java 8)
Colliding entries were stored in a linked list.
Worst-case lookup time: O(n)

2) Tree Structure (Java 8+)
If a bucket reaches 8 entries (and the table has at least 64 buckets), Java converts it into a Red-Black Tree. Lookup becomes O(log n) instead of O(n). If the bucket shrinks to 6 entries or fewer, it converts back to a linked list.

Why This Matters in Real Systems
• Poor hashCode() implementations can severely degrade performance
• Hot buckets can cause latency spikes in microservices
• Hash-collision attacks can be used for denial-of-service scenarios

Pro Tips for Developers
• Always override hashCode() and equals() together, and correctly
• Use immutable keys (String, Integer, custom immutable objects)
• Avoid poor hash functions (e.g., returning constant values)
• Prefer ConcurrentHashMap for high-concurrency systems

HashMap is fast not because it’s magical, but because its collision strategy is smart.

If this helped, comment “MAP” and I’ll share a visual diagram explaining HashMap internals.

Follow me on LinkedIn: https://lnkd.in/dE4zAAQC

#Java #HashMap #SystemDesign #BackendEngineering #Microservices #DSA #JavaDeveloper #interviewpreparation
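The "poor hashCode()" trap is easy to demonstrate. Here is a minimal sketch of a key class with a deliberately constant hash: the map still works correctly (equals() disambiguates), but every entry piles into a single bucket, which is exactly the degraded case described above:

```java
import java.util.HashMap;
import java.util.Map;

public class BadHashDemo {
    // A key with a deliberately terrible hash function: every instance collides
    static final class BadKey {
        final String name;
        BadKey(String name) { this.name = name; }

        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).name.equals(name);
        }
        @Override public int hashCode() { return 42; } // constant → one shared bucket
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 10_000; i++) {
            map.put(new BadKey("key-" + i), i); // every put lands in the same bucket
        }
        // Lookups are still CORRECT, but each one searches a single huge bucket
        // (treeified since Java 8) instead of making an O(1) jump.
        System.out.println(map.get(new BadKey("key-9999"))); // 9999
    }
}
```

Swap the constant hashCode() for `name.hashCode()` and the same 10,000 entries spread across the table, restoring the O(1) behavior HashMap is known for.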
𝐎𝐮𝐭 𝐰𝐢𝐭𝐡 𝐭𝐡𝐞 𝐎𝐥𝐝, 𝐈𝐧 𝐰𝐢𝐭𝐡 𝐭𝐡𝐞 𝐍𝐞𝐰

Java has evolved, and with it comes a simpler, more modern approach to writing immutable data types: records. In previous versions of Java, creating simple value objects required a significant amount of boilerplate code.

𝐓𝐡𝐞 𝐎𝐥𝐝 𝐖𝐚𝐲

public class Point {
    private final int x, y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    @Override public boolean equals(Object obj) { ... }
    @Override public int hashCode() { ... }
    @Override public String toString() { ... }
}

𝐓𝐡𝐞 𝐍𝐞𝐰 𝐖𝐚𝐲

Now, with records, all that boilerplate is handled for you. A record automatically generates:
- A constructor
- An accessor for each component
- equals(), hashCode(), and toString() methods

public record Point(int x, int y) {}

When to use records:
- When you have simple value objects with immutable data.
- When you don’t need additional logic like setters, mutable fields, or complex methods.

#Java #JavaRecords #Programming #Coding #ImmutableData #BoilerplateCode #CleanCode #Java14 #ModernJava #SoftwareDevelopment #CodeSimplification #ObjectOrientedProgramming #JavaBestPractices #JavaTips #JavaDeveloper #TechTrends #DeveloperLife #JavaSyntax #JavaProgramming #RecordClass #TechInnovation #CodingTips #JavaCommunity
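Here is the auto-generated behavior in action — a short runnable sketch of the Point record from the post:

```java
public class RecordDemo {
    // One line replaces the constructor, accessors, equals, hashCode, toString
    public record Point(int x, int y) {}

    public static void main(String[] args) {
        Point p1 = new Point(3, 4);
        Point p2 = new Point(3, 4);

        System.out.println(p1.x());        // accessor named after the component: 3
        System.out.println(p1.equals(p2)); // true: auto-generated value-based equality
        System.out.println(p1);            // auto-generated toString, e.g. Point[x=3, y=4]
    }
}
```

Note the accessors are x() and y(), not getX()/getY() — records use the component name directly.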
𝗠𝗖𝗣 𝗯𝗲𝗻𝗰𝗵𝗺𝗮𝗿𝗸 𝗵𝘆𝗽𝗲: "𝗝𝗮𝘃𝗮 / 𝗦𝗽𝗿𝗶𝗻𝗴 𝗕𝗼𝗼𝘁 𝗶𝘀 𝗺𝘂𝗰𝗵 𝗳𝗮𝘀𝘁𝗲𝗿 𝘁𝗵𝗮𝗻 𝗣𝘆𝘁𝗵𝗼𝗻" - 𝗵𝗲𝗿𝗲’𝘀 𝗺𝘆 𝘁𝗮𝗸𝗲 🧠

There’s a wave going around: MCP servers in Java (often Spring Boot) show way lower latency than Python/Node in a popular multi-language benchmark.

𝗪𝗵𝗮𝘁 𝗶𝘀 𝗮𝗰𝘁𝘂𝗮𝗹𝗹𝘆 𝗯𝗲𝗶𝗻𝗴 𝗺𝗲𝗮𝘀𝘂𝗿𝗲𝗱?
Most of these numbers measure the MCP server runtime itself: JSON-RPC handling, routing, tool invocation overhead - not the full "LLM + network + external API" end-to-end experience. (see -> https://lnkd.in/d_-f7PfW)

𝗪𝗵𝘆 𝗝𝗮𝘃𝗮 𝗹𝗼𝗼𝗸𝘀 𝘀𝗼 𝗴𝗼𝗼𝗱 𝗵𝗲𝗿𝗲
⚡ The JVM handles concurrency very well and can keep tail latency stable under load
🧵 If your MCP server does fan-out tool calls, Java’s concurrency model shines
🧰 The Spring ecosystem gives you production basics fast (config, security, metrics, observability)

𝗧𝗵𝗲 "𝗯𝘂𝘁" 𝘁𝗵𝗮𝘁 𝗵𝘆𝗽𝗲 𝗼𝗳𝘁𝗲𝗻 𝗼𝗺𝗶𝘁𝘀
In the benchmark that’s spreading, Java and Go both show sub-millisecond average latency, but Go is dramatically more memory-efficient, while Java uses much more RAM. So the real story is not "Java wins, Python loses" - it’s trade-offs:
✅ Java - great latency characteristics, huge ecosystem, “enterprise default”
✅ Go - similar speed, far lower memory footprint (cloud-friendly)
✅ Python/Node - often totally fine for glue layers and moderate traffic

𝗠𝘆 𝗼𝗽𝗶𝗻𝗶𝗼𝗻
If your MCP server is a true high-QPS gateway with lots of parallel tool calls, Java/Spring is a very reasonable choice. If your MCP server mostly calls external services and the LLM/network dominates latency, language choice is often secondary - architecture, timeouts, retries, caching, and observability matter more.

#java #springboot #mcp #concurrency #loom #backendengineering #microservices
🚀 JVM Class Loader – Explained Visually

Ever wondered how Java loads your classes before execution? Here’s the JVM class-loading mechanism, step by step 👇

🔹 1. From Source Code to Bytecode
- We start with MyClass.java
- The Java compiler (javac) converts it into bytecode → MyClass.class
- Bytecode is platform-independent and ready for the JVM

🔹 2. Class Loaders in JVM
The JVM uses a hierarchical class-loading system:

🔸 Bootstrap Class Loader
- Loads core Java classes (java.lang, java.util, etc.)
- From rt.jar before Java 9, and from the module system in Java 9+

🔸 Extension Class Loader
- Loads classes from the extensions directory (optional libraries provided to the JVM)
- Replaced by the Platform Class Loader in Java 9+

🔸 Application Class Loader
- Loads application-level classes
- From the classpath (-cp, -classpath)

👉 The Parent Delegation Model ensures security: a class request is delegated parent-first (parent → child, not the other way around)

🔹 3. Runtime Memory Areas
Once classes are loaded, they live in JVM memory:
📌 Method Area – class metadata, bytecode, static variables
📌 Heap – objects and instances
📌 Stack – method calls and local variables

🔹 4. Linking Phase
Before execution, the JVM performs:
- Verify – bytecode safety checks
- Prepare – allocate memory for static fields
- Resolve – convert symbolic references to actual memory references

🔹 5. Initialization & Execution
- Static blocks execute
- main() starts
- The application begins running 🎯

💡 Why this matters:
- Helps debug ClassNotFoundException & NoClassDefFoundError
- Crucial for performance tuning, frameworks, and JVM internals
- A must-know concept for senior Java developers

#Java #JVM #ClassLoader #JavaInternals #BackendDevelopment #SoftwareEngineering #InterviewPrep #JavaDeveloper
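You can observe the loader hierarchy directly from code. A minimal sketch (output details vary slightly by JVM; on HotSpot the bootstrap loader is reported as null):

```java
public class LoaderDemo {
    public static void main(String[] args) {
        // Core class: loaded by the bootstrap loader, which HotSpot reports as null
        System.out.println(String.class.getClassLoader());      // null

        // Our own class: loaded by the application (system) class loader
        System.out.println(LoaderDemo.class.getClassLoader());

        // The application loader's parent: the platform loader on Java 9+
        // (the extension loader's replacement)
        System.out.println(ClassLoader.getSystemClassLoader().getParent());
    }
}
```

Walking getParent() from the application loader upward is a quick way to see the delegation chain the Parent Delegation Model follows.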
🚀 Experimenting with Multithreading in Java – Real Performance Impact

Recently, I built a multi-threaded web crawler in Java to understand the real-world impact of concurrency. The crawler scrapes product data (title + price) from a paginated website and stores it in a CSV file.

🧪 The Experiment:
I ran the same crawler with different thread-pool sizes.

Case 1: Single Thread
- Execution time: ~678 seconds
- Tasks executed sequentially; each HTTP request completed before the next one started.

Case 2: 20 Threads (FixedThreadPool(20))
- Execution time dropped dramatically.
- Multiple product pages were fetched in parallel, significantly reducing total runtime.

💡 Key Insight:
The crawler is I/O-bound, not CPU-bound. Most of the time is spent waiting on network calls and server responses. While one thread waits for a response, other threads can continue working. That’s where multithreading creates massive performance gains.

📌 What I Learned:
- Thread pools drastically improve throughput for I/O-heavy systems.
- Too many threads can hurt performance due to context switching, memory overhead, and potential server throttling.
- The optimal thread count depends on CPU cores and the ratio of wait time to compute time. There’s even a formula:
  Optimal Threads ≈ CPU Cores × (1 + Wait Time / Compute Time)

🏗 Technical Takeaways:
- Used ExecutorService with a FixedThreadPool
- Implemented synchronized CSV storage for thread safety
- Used awaitTermination() to measure actual execution time
- Learned the importance of safe resource sharing in concurrent systems

This experiment reinforced one key lesson: multithreading isn’t just about parallelism — it’s about understanding where your system actually waits.

#Java #Multithreading #BackendDevelopment #PerformanceEngineering #Concurrency
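The experiment's shape can be reproduced without the actual crawler. This sketch is not the author's code: it simulates each "page fetch" with a sleep standing in for network latency, then compares a 1-thread pool against a 20-thread pool, using shutdown() + awaitTermination() to time the run as the post describes:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class IoBoundPoolDemo {
    // Simulates one I/O-bound "page fetch": mostly waiting, barely computing
    static void fetchPage(int id) {
        try {
            Thread.sleep(200); // stand-in for network latency
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Runs 'tasks' fetches on a fixed pool and returns wall time in ms
    static long runWithPool(int threads, int tasks) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        for (int i = 0; i < tasks; i++) {
            final int id = i;
            pool.submit(() -> fetchPage(id));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES); // wait for all tasks to finish
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        long sequential = runWithPool(1, 20);
        long parallel   = runWithPool(20, 20);
        System.out.println("1 thread:   " + sequential + " ms"); // ≈ 20 × 200 ms
        System.out.println("20 threads: " + parallel + " ms");   // ≈ 200 ms
    }
}
```

Because the tasks spend nearly all their time waiting, the 20-thread run finishes roughly 20× faster — the same effect the crawler experiment measured, and a concrete instance of the wait-time/compute-time formula above.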