🔍 HashMap Under the Hood

When working with HashMaps, collisions are normal. What matters is how we handle them and when we resize the map.

🔹 Collision Handling Techniques

1️⃣ Chaining
If two keys map to the same index, we store multiple elements at that index using a LinkedList.
Example: 12, 17, 22 → all hash to index 2
Index 2 → 12 → 17 → 22
Simple and commonly used.

2️⃣ Open Addressing
If an index is full, find another empty slot.
Linear Probing: check the next index sequentially.
Index 2 full → check 3 → check 4 → insert
Quadratic Probing: probe using square jumps.
Index 2 → 2+1²=3 → 2+2²=6 → 2+3²=11
Reduces clustering compared to linear probing.

3️⃣ Double Hashing
Use a second hash function to find a new index.
Formula: newIndex = (hash1 + i × hash2) % tableSize
This gives better distribution and much less clustering.

🔹 What is Resizing (Rehashing)?
When a HashMap becomes too full, performance drops, so the HashMap increases its size and rehashes all elements.
Condition: resize when elements > capacity × load factor
In Java HashMap: capacity = 16, load factor = 0.75 → resize when elements > 12

Resizing Steps:
1. Create a new, bigger array
2. Recalculate the hash/index for all elements
3. Insert the elements into the new array
Rehashing is expensive but happens rarely.

💡 Easy Summary
Chaining → LinkedList at the same index
Linear Probing → next index
Quadratic Probing → square jump
Double Hashing → second hash function
Resizing → increase table size & rehash

👉 If you are preparing for Java backend interviews, connect & follow - I share short, practical backend concepts regularly.

#Java #HashMap #DSA #BackendDevelopment #SystemDesign #Programming
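For reference, a minimal sketch of the three probing formulas above, assuming a fixed table size of 16. This is illustrative only; java.util.HashMap does not use open addressing, it resolves collisions with chaining (and tree bins since Java 8).

public class ProbingSketch {
    static final int TABLE_SIZE = 16;

    // Linear probing: the i-th attempt checks the next slot
    static int linearProbe(int hash, int i) {
        return (hash + i) % TABLE_SIZE;
    }

    // Quadratic probing: the i-th attempt jumps by i squared
    static int quadraticProbe(int hash, int i) {
        return (hash + i * i) % TABLE_SIZE;
    }

    // Double hashing: the step size comes from a second hash function
    static int doubleHash(int hash1, int hash2, int i) {
        return (hash1 + i * hash2) % TABLE_SIZE;
    }

    public static void main(String[] args) {
        // A key hashing to index 2, on the second retry (i = 2)
        System.out.println(linearProbe(2, 2));     // 4
        System.out.println(quadraticProbe(2, 2));  // 6
        System.out.println(doubleHash(2, 5, 2));   // 12 (assuming hash2 = 5)
    }
}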
🚀 Day 87 — Prefix Sum Pattern (Subarray Sum Equals K)

Continuing the prefix sum deep dive — today I solved the classic problem of counting subarrays that sum to a given value k. This is where the HashMap optimization truly shines.

📌 Problem Solved: LeetCode 560 – Subarray Sum Equals K

🧠 Key Insight – Prefix Sum + HashMap (Java):

Map<Integer, Integer> map = new HashMap<>();
map.put(0, 1); // empty prefix sum
int sum = 0, res = 0;
for (int num : nums) {
    sum += num;
    int diff = sum - k;
    res += map.getOrDefault(diff, 0);
    map.put(sum, map.getOrDefault(sum, 0) + 1);
}
return res;

Why this works:
sum at index i = prefix sum from 0 to i.
If sum - k exists as an earlier prefix sum, then the subarray between that previous index and i sums to k.
The HashMap stores the frequency of each prefix sum, allowing O(1) lookups.

Why not brute force?
Brute force: O(n²) — check every subarray.
Prefix sum + HashMap: O(n) time, O(n) space.

Edge cases to handle:
k = 0 → subarrays summing to zero.
Negative numbers → prefix sums can decrease, but the HashMap still works.
The initial map.put(0, 1) ensures subarrays starting at index 0 are counted.

💡 Takeaway: When the problem asks for “number of subarrays with sum = k”, think prefix sum + HashMap immediately. This is the second category from the prefix sum flowchart — exact sums require hash-based lookup.

No guilt about past breaks — just building pattern recognition, one problem at a time.

#DSA #PrefixSum #SubarraySumEqualsK #LeetCode560 #HashMap #CodingJourney #Revision #Java #ProblemSolving #Consistency #GrowthMindset #TechCommunity #LearningInPublic
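For completeness, the snippet wrapped in a runnable method (signature as on LeetCode 560), plus a quick check with a sample input:

import java.util.HashMap;
import java.util.Map;

class Solution {
    public int subarraySum(int[] nums, int k) {
        Map<Integer, Integer> map = new HashMap<>();
        map.put(0, 1); // the empty prefix counts once
        int sum = 0, res = 0;
        for (int num : nums) {
            sum += num;
            res += map.getOrDefault(sum - k, 0);
            map.put(sum, map.getOrDefault(sum, 0) + 1);
        }
        return res;
    }
}

// Quick check: new Solution().subarraySum(new int[]{1, 1, 1}, 2) returns 2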
🚀 Java Deep Dive Series — Memory Management

AI can write code. But understanding how memory works under the hood is what helps you write efficient and bug-free systems.

Today, I revisited:
👉 Java Memory Management

Here’s a quick breakdown 👇
🔹 Stack Memory → Method calls, local variables (thread-specific)
🔹 Heap Memory → Objects & shared data (managed by GC)
🔹 References → Strong, Weak, Soft (impact GC behavior)
🔹 Garbage Collection → Automatic memory cleanup
🔹 Generational Memory → Young → Old lifecycle

⚙️ Deep dive covered: Stack vs Heap with detailed examples, object lifecycle (Eden → Survivor → Old Gen), Minor vs Major GC, reference type behavior, the Mark & Sweep algorithm, Metaspace (non-heap), different GC types (Serial, Parallel, G1, ZGC), and JVM tuning using -Xms & -Xmx.

💡 My Key Takeaway: Most performance issues in Java are not about logic — they come from how memory is used and managed.

📘 I’ve documented detailed notes (with examples & diagrams) here:
🔗 https://lnkd.in/dsMypxEG

I’ll keep adding more topics as I go. If you're revising Java fundamentals or preparing for interviews, this might help 🤝

👉 Quick check: What kind of reference allows GC even when a reference still exists?

#Java #JVM #MemoryManagement #GarbageCollection #BackendDevelopment #AI
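On the reference types above, a small sketch of weak reference behavior using the standard java.lang.ref API. Note that System.gc() is only a hint, so the final print is likely, but not guaranteed, to be null.

import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) {
        Object strong = new Object();
        WeakReference<Object> weak = new WeakReference<>(strong);

        System.out.println(weak.get() != null); // true: the strong reference keeps the object alive

        strong = null; // drop the only strong reference
        System.gc();   // request (not force) a collection

        // Once the object is only weakly reachable, the GC may reclaim it
        System.out.println(weak.get()); // typically null after a GC cycle
    }
}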
𝗧𝗵𝗲 𝗝𝗮𝘃𝗮 𝘀𝘄𝗶𝘁𝗰𝗵 𝘀𝘁𝗮𝘁𝗲𝗺𝗲𝗻𝘁 𝘁𝘂𝗿𝗻𝗲𝗱 𝟯𝟬 𝘁𝗵𝗶𝘀 𝘆𝗲𝗮𝗿

Here's the short version: What started as a C-style branching construct is now a declarative data-matching engine — and the JVM internals behind it are genuinely fascinating.

What I learned going deep on this:
→ Early switch relied on jump tables — fast, but fall-through bugs were silent and destructive
→ Java added definite assignment rules, preventing uninitialized variables from slipping through
→ The JVM picks between tableswitch (O(1)) and lookupswitch (O(log n)) based on how dense your cases are
→ String switching since Java 7 uses hashCode + equals internally — it's not magic, it's two passes
→ Java 14 made switch an expression, which killed fall-through at the language level
→ Modern Java (21+) adds pattern matching with type binding and null handling — code reads like a description of data
→ invokedynamic enables runtime linking, replacing rigid compile-time dispatch tables
→ Java 25 enforces unconditional exactness in type matching — no more silent data loss

The real shift isn't syntax. It's the question switch answers.
Old: "Where should execution go?"
New: "What is the shape of this data?"

That's not just a feature upgrade. That's a change in how you think about branching.

Which of these surprised you most? Drop it in the comments.

A special thanks to Syed Zabi Ulla sir at PW Institute of Innovation for the clear explanations and continuous guidance throughout this topic.

#Java #Programming #SoftwareEngineering #JVM #LearningInPublic #CodingJourney
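To make the "shape of the data" idea concrete, a minimal sketch assuming Java 21+: switch as an expression, with type patterns over a sealed hierarchy and an explicit null case. The Shape types here are made up for illustration.

sealed interface Shape permits Circle, Rectangle {}
record Circle(double radius) implements Shape {}
record Rectangle(double width, double height) implements Shape {}

public class SwitchDemo {
    static double area(Shape shape) {
        return switch (shape) {                                     // expression form, no fall-through
            case null        -> 0.0;                                // explicit null case, no NPE
            case Circle c    -> Math.PI * c.radius() * c.radius();  // type binding, no cast
            case Rectangle r -> r.width() * r.height();
        };                                                          // sealed hierarchy, so no default needed
    }

    public static void main(String[] args) {
        System.out.println(area(new Circle(1.0)));          // ~3.14159
        System.out.println(area(new Rectangle(2.0, 3.0)));  // 6.0
        System.out.println(area(null));                     // 0.0
    }
}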
🚀 Day 89 — Prefix Sum Pattern (Subarray Sums Divisible by K)

Extending the prefix sum + HashMap pattern — today I solved a variation where we count subarrays whose sum is divisible by K. The key is handling negative remainders correctly.

📌 Problem Solved: LeetCode 974 – Subarray Sums Divisible by K

🧠 Optimized Template with Remainder Handling (Java):

public int subarraysDivByK(int[] nums, int k) {
    Map<Integer, Integer> freq = new HashMap<>();
    freq.put(0, 1);
    int prefixSum = 0, count = 0;
    for (int num : nums) {
        prefixSum += num;
        int rem = ((prefixSum % k) + k) % k; // handles negative remainders
        count += freq.getOrDefault(rem, 0);
        freq.put(rem, freq.getOrDefault(rem, 0) + 1);
    }
    return count;
}

Why this works:
The subarray sum from i+1 to j is divisible by k if prefixSum[j] ≡ prefixSum[i] (mod k).
We store the frequency of each remainder. For the current remainder rem, any previous occurrence of rem forms a valid subarray.

Critical Point – Negative Remainders:
In Java, % can return negative values. The expression ((prefixSum % k) + k) % k maps every remainder to [0, k-1] correctly.

Comparison to Subarray Sum Equals K (LeetCode 560):
560: Exact sum → look for prefixSum - k.
974: Divisible by K → look for the same remainder modulo K.
Both use the same HashMap pattern — just the lookup key changes.

💡 Takeaway: Prefix sum + HashMap is a template that adapts to:
Exact target sum → store prefix sums
Divisible by K → store remainders
Multiple of K → similar logic

No guilt about past breaks — just mastering pattern variations.

#DSA #PrefixSum #SubarraySumsDivisibleByK #LeetCode974 #HashMap #CodingJourney #Revision #Java #ProblemSolving #Consistency #GrowthMindset #TechCommunity #LearningInPublic
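A tiny worked example of the negative-remainder fix, assuming k = 5:

int prefixSum = -3, k = 5;
int raw = prefixSum % k;                // -3: in Java, % follows the sign of the dividend
int fixed = ((prefixSum % k) + k) % k;  // (-3 + 5) % 5 = 2, always lands in [0, k-1]
System.out.println(raw + " " + fixed);  // prints "-3 2"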
🚀 Day 21/30 – DSA Challenge
📌 LeetCode Problem – Median of Two Sorted Arrays

📝 Problem Statement
Given two sorted arrays nums1 and nums2, find the median of the combined array.

📌 Example
Input: nums1 = [1,2], nums2 = [3,4]
Output: 2.5

💡 My Approach
Instead of using complex binary search, I followed a simple and reliable method:
👉 Merge both arrays
👉 Sort the merged array
👉 Find the median
This approach is easy to understand and implement.

🚀 Algorithm
1️⃣ Create a new array of size n1 + n2
2️⃣ Copy the elements of both arrays
3️⃣ Sort the merged array
4️⃣ If the length is odd → return the middle element
5️⃣ If even → return the average of the two middle elements

✅ Java Code

import java.util.Arrays;

class Solution {
    public double findMedianSortedArrays(int[] nums1, int[] nums2) {
        int n1 = nums1.length;
        int n2 = nums2.length;
        int[] merged = new int[n1 + n2];
        for (int i = 0; i < n1; i++) {
            merged[i] = nums1[i];
        }
        for (int i = 0; i < n2; i++) {
            merged[n1 + i] = nums2[i];
        }
        Arrays.sort(merged);
        int n = merged.length;
        if (n % 2 == 1) {
            return merged[n / 2];
        } else {
            return (merged[n / 2 - 1] + merged[n / 2]) / 2.0;
        }
    }
}

⏱ Complexity
Time Complexity: O((n + m) log(n + m))
Space Complexity: O(n + m)

📚 Key Learnings – Day 21
✔ Simple solutions are often the easiest to implement
✔ Always understand the problem constraints
✔ This problem can be optimized further using binary search
✔ Multiple approaches exist — choose based on context

Simple approach. Clear logic. Strong understanding.
Day 21 completed. Consistency continues 💪🔥

#30DaysOfCode #DSA #Java #InterviewPreparation #ProblemSolving #CodingJourney #Arrays #LeetCode
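A quick driver for the example in the post (a hypothetical harness, not part of the original solution):

public class Main {
    public static void main(String[] args) {
        // nums1 = [1,2], nums2 = [3,4] -> merged [1,2,3,4], median = (2 + 3) / 2.0 = 2.5
        double median = new Solution().findMedianSortedArrays(new int[]{1, 2}, new int[]{3, 4});
        System.out.println(median); // 2.5
    }
}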
𝐯𝐚𝐫 𝐤𝐞𝐲𝐰𝐨𝐫𝐝 𝐢𝐧 𝐉𝐚𝐯𝐚

Introduced in Java 10, the var reserved type name is a game-changer for reducing boilerplate code—but it comes with specific "rules of the road."

Is it a keyword? Technically, no! It’s a 𝐫𝐞𝐬𝐞𝐫𝐯𝐞𝐝 𝐭𝐲𝐩𝐞 𝐧𝐚𝐦𝐞 that uses 𝐓𝐲𝐩𝐞 𝐈𝐧𝐟𝐞𝐫𝐞𝐧𝐜𝐞 to automatically detect data types based on the context.

Here is a quick cheat sheet on the Dos and Don'ts:

𝐖𝐡𝐞𝐧 𝐭𝐨 𝐮𝐬𝐞 '𝐯𝐚𝐫':
𝐋𝐨𝐜𝐚𝐥 𝐕𝐚𝐫𝐢𝐚𝐛𝐥𝐞𝐬: Use it inside methods, blocks, or constructors.
𝐒𝐭𝐚𝐧𝐝𝐚𝐫𝐝 𝐃𝐚𝐭𝐚 𝐓𝐲𝐩𝐞𝐬: Works for int, double, String, etc., as long as an initializer is present.
𝐂𝐥𝐞𝐚𝐧𝐢𝐧𝐠 𝐮𝐩 𝐆𝐞𝐧𝐞𝐫𝐢𝐜𝐬: Turn long declarations into clean, readable lines.

𝐖𝐡𝐞𝐫𝐞 '𝐯𝐚𝐫' 𝐢𝐬 𝐍𝐎𝐓 𝐚𝐥𝐥𝐨𝐰𝐞𝐝:
𝐈𝐧𝐬𝐭𝐚𝐧𝐜𝐞 𝐕𝐚𝐫𝐢𝐚𝐛𝐥𝐞𝐬: You cannot use it for class-level fields.
𝐌𝐢𝐬𝐬𝐢𝐧𝐠 𝐈𝐧𝐢𝐭𝐢𝐚𝐥𝐢𝐳𝐞𝐫𝐬: You can't just declare var x;. The compiler needs to see the value immediately.
𝐍𝐮𝐥𝐥 𝐕𝐚𝐥𝐮𝐞𝐬: var x = null; won't work because the compiler can't infer a type from null.
𝐋𝐚𝐦𝐛𝐝𝐚𝐬: These need an explicit target type, so var is a no-go here.
𝐌𝐞𝐭𝐡𝐨𝐝 𝐒𝐩𝐞𝐜𝐬: It cannot be used for method parameters or return types.

𝐓𝐡𝐞 𝐆𝐨𝐥𝐝𝐞𝐧 𝐑𝐮𝐥𝐞: var is meant to improve 𝐫𝐞𝐚𝐝𝐚𝐛𝐢𝐥𝐢𝐭𝐲. If using it makes the code harder to understand, stick to explicit types!

Special thanks to Syed Zabi Ulla Sir for the clear breakdown and guidance on these core Java concepts!

#Java #Programming #CodingTips #BackendDevelopment #Java10 #SoftwareEngineering #CleanCode
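A minimal sketch of the cheat sheet above, assuming Java 10+. The invalid lines are left commented out because they do not compile:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

public class VarDemo {
    // var count = 0;                  // NOT allowed: instance/class-level fields

    public static void main(String[] args) {
        var message = "hello";          // inferred as String
        var total = 42;                 // inferred as int
        var userScores = new HashMap<String, List<Integer>>(); // cleans up a long generic declaration

        // var x;                       // NOT allowed: missing initializer
        // var y = null;                // NOT allowed: null gives the compiler no type to infer
        // var f = (int a) -> a * 2;    // NOT allowed: lambdas need an explicit target type

        userScores.put(message, new ArrayList<>(List.of(total)));
        System.out.println(userScores); // {hello=[42]}
    }
}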
Peglib 0.2.0 + 0.2.1 -- Major code generation reliability release

Peglib is a PEG parser library for Java, inspired by cpp-peglib. It lets you define parsers using PEG grammar syntax, with support for both runtime interpretation and standalone source code generation.

These two releases focused on one goal: making the generated standalone parser produce identical results to the interpreted parser. Every time.

What changed:
-- Rewrote the CST/AST code generator from the ground up to structurally mirror the interpreter's control flow. Identified and fixed 7 behavioral divergences in whitespace handling, predicate evaluation, cut failure propagation, and token boundary tracking.
-- Fixed generated parsers crashing with StackOverflowError when the whitespace directive references named rules like LineComment or BlockComment. Added a reentrant guard matching the interpreter's approach.
-- Fixed generated Token nodes losing their rule names. Tokens from < > captures now carry the parent rule name (e.g., "SelectKW", "NumericType") instead of a generic "token".
-- Fixed CST tree structure. Container expressions (ZeroOrMore, OneOrMore, Optional) now wrap their children in proper NonTerminal nodes instead of flattening them into the parent -- matching how the interpreter builds the tree.
-- Added 40 conformance tests that run the same grammars and inputs through both the interpreted and generated parsers, asserting identical success/failure outcomes.

These fixes were discovered while building a PostgreSQL SQL parser with peglib. The interpreted parser handled the full grammar correctly, but the generated standalone parser had subtle failures. Now both produce matching results on all 350 tests.

Test count: 308 -> 350
pragmatica-lite dependency: 0.9.10 -> 0.24.0

Available on Maven Central:
<groupId>org.pragmatica-lite</groupId>
<artifactId>peglib</artifactId>
<version>0.2.1</version>

GitHub: https://lnkd.in/dgdjZahV

#java #parsing #peg #opensource #compilers
I always thought Class.forName("com.example.MyClass") just "loads a class." Turns out I had no idea what was actually happening.

Went down a rabbit hole this week into how Java reflection works under the hood. Here is what I found:

When the JVM loads a class, it creates two separate things:

1. An InstanceKlass, a C++ struct in Metaspace. This is the JVM's actual representation of your class. Method table, field table, bytecode, runtime constant pool, all of it lives here. The execution engine works directly off this.

2. A Class<?> object on the heap. This is what your Java code sees. It just holds a pointer back to the InstanceKlass.

So every time you call getDeclaredMethods() or getField(), Java is doing this:

Class<?> on heap -> klass pointer -> InstanceKlass in Metaspace

That boundary crossing, plus access checks, is exactly why reflection has overhead.

One more thing that surprised me: the String intern pool is not in Metaspace. The StringTable itself is a native C++ hash table inside HotSpot, while the interned String objects it references live on the Java heap; the table uses weak references so unused strings get collected. It is completely global across the JVM process.

Most Java developers never look at this layer. Once you do, a lot of things that felt like magic start making sense.

Going to keep writing about JVM internals. What part of the JVM caught you off guard when you first looked deeper?

#Java #JVM #BackendDevelopment #JavaInternals #SpringBoot
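For reference, a tiny example of the reflective calls mentioned above, using only standard java.lang.reflect APIs (it assumes the class sits in the default package, so Class.forName takes the simple name):

import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class ReflectionDemo {
    private int counter = 7;

    public static void main(String[] args) throws Exception {
        // Returns the heap-side Class<?> object, which points at the InstanceKlass metadata
        Class<?> clazz = Class.forName("ReflectionDemo");

        for (Method m : clazz.getDeclaredMethods()) {
            System.out.println("method: " + m.getName()); // e.g. "main"
        }

        Field f = clazz.getDeclaredField("counter");
        f.setAccessible(true); // the access check here is part of reflection's overhead
        System.out.println("counter = " + f.getInt(new ReflectionDemo())); // counter = 7
    }
}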
🚀 Ever wondered what actually happens under the hood when you run a Java program? It’s not just magic; it’s the Java Virtual Machine (JVM) at work.

Understanding JVM architecture is the first step toward moving from "writing code" to "optimizing performance." Here is a quick breakdown of the core components shown in the diagram:

1️⃣ Classloader System
The entry point. It loads, links, and initializes the .class files. It ensures that all necessary dependencies are available before execution begins.

2️⃣ Runtime Data Areas (Memory Management)
This is where the heavy lifting happens. The JVM divides memory into specific areas:
Method/Class Area: Stores class-level data and static variables.
Heap Area: The home for all objects. This is where Garbage Collection happens!
Stack Area: Stores local variables and partial results for each thread.
PC Registers: Keep track of the address of the current instruction being executed.
Native Method Stack: Handles instructions for native languages (like C/C++).

3️⃣ Execution Engine
The brain of the operation. It reads the bytecode and executes it using:
Interpreter: Reads bytecode line by line.
JIT (Just-In-Time) Compiler: Compiles hot spots of code into native machine code for massive speed boosts.
Garbage Collector (GC): Automatically manages memory by deleting unreferenced objects.

4️⃣ Native Interface & Libraries
The bridge (JNI) that allows Java to interact with native OS libraries, making it incredibly versatile.

💡 Pro-Tip: If you are debugging OutOfMemoryError or StackOverflowError, knowing which memory area is failing is half the battle won.

#Java #JVM #BackendDevelopment #SoftwareEngineering #ProgrammingTips #TechCommunity #JavaDeveloper #CodingLife
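A small way to peek at a few of these areas from inside a running program, using the standard Runtime and java.lang.management APIs (heap numbers are bounded by the -Xms / -Xmx flags the JVM was started with):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class MemoryInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;

        // Heap Area sizing as seen by this JVM
        System.out.println("max heap   : " + rt.maxMemory() / mb + " MB");
        System.out.println("total heap : " + rt.totalMemory() / mb + " MB");
        System.out.println("free heap  : " + rt.freeMemory() / mb + " MB");

        // Which collectors the Execution Engine is running (e.g. G1 Young/Old Generation)
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println("GC: " + gc.getName() + ", collections: " + gc.getCollectionCount());
        }
    }
}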
What is serialization and deserialization?

Serialization is the process of converting an in-memory object (like a class instance in your backend code) into a format that can be easily stored or transmitted, such as:
JSON
XML
Protocol Buffers

Example: You have a user object in code:
{ "id": 1, "name": "Aman" }
When you send this over an API, it gets serialized into JSON so it can travel over HTTP.

What is Deserialization?
Deserialization is the reverse process — converting the transmitted data (like JSON from an API request/response) back into an in-memory object your program can use.

Example: You receive this JSON from a request:
{ "id": 1, "name": "Aman" }
Your backend converts it into a language-specific object (e.g., a class instance in Java, Python, Node.js, etc.).

Serialization = packing your object into a suitcase (JSON) to travel
Deserialization = unpacking it back into a usable form at the destination
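In Java this is typically a one-liner each way with a JSON library. A minimal sketch assuming Jackson (com.fasterxml.jackson.databind) is on the classpath:

import com.fasterxml.jackson.databind.ObjectMapper;

public class SerializationDemo {
    // The in-memory object from the example above
    public static class User {
        public int id;
        public String name;

        public User() {} // Jackson needs a no-args constructor for deserialization
        public User(int id, String name) { this.id = id; this.name = name; }
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // Serialization: object -> JSON string, ready to travel over HTTP
        String json = mapper.writeValueAsString(new User(1, "Aman"));
        System.out.println(json); // {"id":1,"name":"Aman"}

        // Deserialization: JSON string -> usable in-memory object
        User user = mapper.readValue(json, User.class);
        System.out.println(user.name); // Aman
    }
}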