#Post2

In my previous post, we explored how HashMap works internally. (If you haven't read it yet, you can find it here: https://lnkd.in/dnwPP-iv)

One important point mentioned there was: HashMap is not thread-safe.

But what does that actually mean? Let's understand it with an example 👇

Imagine two threads trying to update the same HashMap at the same time:

→ map.put("Apple", 10);
→ map.put("Banana", 20);

Since HashMap is not synchronized, both threads may try to modify the internal bucket structure at the same time. This can cause problems like:
• Data inconsistency
• Lost updates

Why does this happen? Because operations like put() and resize() involve multiple internal steps. If two threads interleave during these steps, the internal structure can become corrupted.

Example scenario:
• Thread-1 is inserting an entry
• Thread-2 is resizing the HashMap

Both modify the bucket array concurrently → leading to unpredictable results.

So how do we solve this problem?

One option is to synchronize access to the HashMap, either with a synchronized block or with Collections.synchronizedMap(). However, these approaches lock the entire map, which reduces concurrency when multiple threads try to access it.

Java therefore provides a better alternative: ConcurrentHashMap.

Instead of locking the entire map, ConcurrentHashMap uses finer-grained locking and internal concurrency mechanisms, allowing multiple threads to access the map safely. This allows:
• High concurrency
• Better performance compared to synchronized maps
• Thread-safe read and write operations

Example usage:

→ ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
→ map.put("Apple", 10);

Multiple threads can safely read and write without corrupting the data structure.

Key takeaway
• Use HashMap when working in a single-threaded environment
• Use ConcurrentHashMap when multiple threads need concurrent read/write access

In the next post, we'll explore how ConcurrentHashMap works internally and how it achieves thread safety.

#Java #SoftwareEngineering #BackendDevelopment #Multithreading #Programming #ConcurrentProgramming
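For readers who want to try this themselves, here is a minimal runnable sketch (class and variable names are mine, for illustration) in which two threads write to the same ConcurrentHashMap. Swap in a plain HashMap and this same interleaving is exactly where bucket corruption can occur during a resize.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentMapDemo {
    public static void main(String[] args) throws InterruptedException {
        // Thread-safe map: no external synchronization needed
        Map<String, Integer> stock = new ConcurrentHashMap<>();

        Thread t1 = new Thread(() -> stock.put("Apple", 10));
        Thread t2 = new Thread(() -> stock.put("Banana", 20));
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // Both entries are present; the internal structure stays consistent
        System.out.println(stock); // e.g. {Banana=20, Apple=10}
    }
}
```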
ConcurrentHashMap vs HashMap: Thread Safety and Performance
More Relevant Posts
#Post3

In our previous posts, we explored:
• How HashMap works internally: https://lnkd.in/db--5_JJ
• Why HashMap is not thread-safe: https://lnkd.in/d6dAMf8q

Now let's understand something more important 👇

How does ConcurrentHashMap achieve thread safety?

At a high level, ConcurrentHashMap is designed to provide:
• Thread safety
• High performance
• Better concurrency

But how does it actually do that? Instead of locking the entire map, ConcurrentHashMap uses a combination of techniques:

1. CAS (Compare-And-Swap)

Before applying any lock, ConcurrentHashMap first tries to update the value using CAS:
→ Update the value only if it has not been changed by another thread.

If the CAS succeeds → no locking is required.
If it fails → retry or fall back to locking.

This reduces unnecessary locking and improves performance. We will learn more about CAS in future posts; for now, think of CAS as an atomic operation — either it succeeds or nothing changes.

2. Bucket-Level Locking

If the operation cannot be completed using CAS, locking is applied. Unlike a synchronized HashMap (which locks the whole map), ConcurrentHashMap locks only the specific bucket (or node) being modified.

Example:
• Thread-1 → working on bucket 2
• Thread-2 → working on bucket 5

Both can proceed in parallel without blocking each other.

How get() and put() work in ConcurrentHashMap

1. Lock-Free Reads (get)

One of the most important optimizations: reads do NOT require locking. ConcurrentHashMap uses volatile variables to ensure visibility across threads, which makes read operations extremely fast. We will explore the volatile keyword in upcoming posts.

2. How put() Works (Simplified)

When inserting a value:
• Calculate the hash
• Find the bucket index

Then:
→ If the bucket is empty → insert using CAS (no lock). (Note: don't assume CAS succeeds only when the bucket is empty/null — that is just the most common case.)
→ If the bucket is not empty → lock that bucket and update it.

This approach ensures:
• Minimal locking
• High throughput
• Better scalability

Key takeaway

ConcurrentHashMap achieves thread safety by:
• Using CAS to avoid unnecessary locks
• Locking only specific buckets instead of the whole map
• Allowing lock-free read operations

This is why it performs much better than synchronized maps in multi-threaded environments.

In the next post, we'll explore:
• CAS (Compare-And-Swap) in detail
• The volatile keyword

Looking ahead: in further posts we will shift our attention to how threads work in Java, so that concepts like CAS, volatile, and many more feel familiar.

#Java #SoftwareEngineering #BackendDevelopment #Multithreading #ConcurrentProgramming #Programming
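To make the CAS idea concrete, here is a minimal sketch of a compare-and-swap on a single "bucket" slot using AtomicReference. This illustrates the pattern the post describes — claim the slot only if it is still in the expected state — not the actual JDK internals.

```java
import java.util.concurrent.atomic.AtomicReference;

public class CasDemo {
    // A single "bucket" slot, loosely analogous to one cell of the table array.
    private static final AtomicReference<String> bucket = new AtomicReference<>(null);

    // Try to claim the empty bucket with CAS; callers can retry or lock on failure.
    static boolean insertIfEmpty(String value) {
        // compareAndSet succeeds only if the slot still holds the expected value (null here)
        return bucket.compareAndSet(null, value);
    }

    public static void main(String[] args) {
        System.out.println(insertIfEmpty("Apple"));  // true  – slot was empty, CAS wins
        System.out.println(insertIfEmpty("Banana")); // false – another value got there first
        System.out.println(bucket.get());            // Apple
    }
}
```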
🚀 Day 5 – What Really Happens Inside a HashMap?

We use HashMap almost everywhere, but its internal working is quite interesting.

Map<String, Integer> map = new HashMap<>();
map.put("key", 1);

What happens internally?
👉 Step 1: hashCode() is calculated for the key
👉 Step 2: The hash is converted into an index (bucket location)
👉 Step 3: The value is stored in that bucket

💡 But what if two keys land in the same bucket?
✔ This is called a collision
✔ Java handles it using a linked list (and converts it to a tree if it grows large)

⚠️ Important:
- Retrieval (get) again uses hashCode() + equals()
- So both must be properly implemented for custom objects

💡 Real takeaway:
Good hashing = better performance
Poor hashing = more collisions = slower operations

This is why HashMap is fast on average (O(1)), but can degrade if not used properly.

#Java #BackendDevelopment #HashMap #JavaInternals #LearningInPublic
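The point about custom objects is worth a concrete sketch. The Point class below is hypothetical, just for illustration: if equals() and hashCode() don't agree, a lookup after put() can silently return null.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// A hypothetical custom key: hashCode() and equals() must be consistent,
// otherwise two "equal" keys may land in different buckets.
final class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override public int hashCode() {
        return Objects.hash(x, y); // equal points always hash to the same bucket
    }
}

public class CustomKeyDemo {
    public static void main(String[] args) {
        Map<Point, String> map = new HashMap<>();
        map.put(new Point(1, 2), "A");
        // Works only because equals()/hashCode() are properly implemented:
        System.out.println(map.get(new Point(1, 2))); // A
    }
}
```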
🚀 Day 5/30 — LeetCode Challenge

Solved Merge Two Sorted Lists using Java. This problem focuses on pointer manipulation in linked lists — merging two sorted lists efficiently without creating extra space.

✅ Key takeaway: Understanding how to move and manage pointers is crucial when working with linked data structures.

Time Complexity: O(n + m)
Space Complexity: O(1) (in-place merge)

Building consistency while strengthening fundamentals in data structures.

#LeetCode #Java #LinkedList #DataStructures #ProblemSolving #Consistency
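Here is one common in-place merge, sketched with LeetCode's conventional ListNode definition. A dummy sentinel node keeps the edge cases simple.

```java
// LeetCode's conventional singly linked list node.
class ListNode {
    int val;
    ListNode next;
    ListNode(int val) { this.val = val; }
}

class Solution {
    // Splice the two sorted lists together in place: O(n + m) time, O(1) extra space.
    public ListNode mergeTwoLists(ListNode l1, ListNode l2) {
        ListNode dummy = new ListNode(0); // sentinel simplifies the empty-list cases
        ListNode tail = dummy;
        while (l1 != null && l2 != null) {
            if (l1.val <= l2.val) {
                tail.next = l1;
                l1 = l1.next;
            } else {
                tail.next = l2;
                l2 = l2.next;
            }
            tail = tail.next;
        }
        tail.next = (l1 != null) ? l1 : l2; // attach whatever remains
        return dummy.next;
    }
}
```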
Day 83/365 🚀

📌 LeetCode POTD: Minimum Distance Between Three Equal Elements II

Continuing my 365 Days of Code journey with a focus on problem-solving, DSA, and consistency. 💪

🔎 Approach:
Store the indices of each value using a HashMap. For values appearing at least 3 times, check consecutive triplets of indices. Compute the distance using the formula |i-j| + |j-k| + |i-k| and track the minimum.

🔍 Algorithm used: HashMap + sliding window over the index lists.

⏱ Time Complexity: O(n)
🧠 Space Complexity: O(n)

📈 Key takeaway: Grouping indices by value can drastically reduce complexity in duplicate-based problems.

#LeetCode #LeetCodeDaily #365DaysOfCode #DSA #Java #HashMap #Arrays #Optimization #ProblemSolving #LearningInPublic 👨💻

🔗 Problem link in comments 👇
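Here is a sketch of the approach described above. The method name, signature, and "return -1 when no value occurs three times" convention are my assumptions, since the problem link isn't shown here.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MinTripletDistance {
    // Group indices by value, then slide over consecutive index triplets per group.
    static long minDistance(int[] nums) {
        Map<Integer, List<Integer>> positions = new HashMap<>();
        for (int i = 0; i < nums.length; i++) {
            positions.computeIfAbsent(nums[i], k -> new ArrayList<>()).add(i);
        }
        long best = Long.MAX_VALUE;
        for (List<Integer> idx : positions.values()) {
            for (int t = 0; t + 2 < idx.size(); t++) {
                long i = idx.get(t), j = idx.get(t + 1), k = idx.get(t + 2);
                // |i-j| + |j-k| + |i-k| with sorted indices i < j < k
                best = Math.min(best, (j - i) + (k - j) + (k - i));
            }
        }
        return best == Long.MAX_VALUE ? -1 : best; // assumed: -1 if no triplet exists
    }

    public static void main(String[] args) {
        // 1 appears at indices 0, 2, 4 → (2-0) + (4-2) + (4-0) = 8
        System.out.println(minDistance(new int[]{1, 2, 1, 3, 1})); // 8
    }
}
```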
Day 95 of #365DaysOfLeetCode Challenge

Today's problem: Path Sum III (LeetCode 437)

This one looks like a typical tree problem… but the optimal solution introduces a powerful concept: Prefix Sum + HashMap in trees.

💡 Core Idea:
Instead of checking every possible path (which would be slow), we track running sums as we traverse the tree.
👉 If at any point currentSum - targetSum exists in our map → we've found a valid path!

📌 Approach:
* Use DFS to traverse the tree
* Maintain a running currSum
* Store prefix sums in a HashMap
* Check how many times (currSum - targetSum) has appeared
* Backtrack to maintain the correct state

⚡ Time Complexity: O(n)
⚡ Space Complexity: O(n)

What I learned today:
Prefix Sum isn't just for arrays — it can be beautifully extended to trees. This problem completely changed how I look at tree path problems:
👉 From brute-force traversal → to optimized prefix tracking

💭 Key takeaway:
When a problem involves "subarrays/paths with a given sum," think:
➡️ Prefix Sum + HashMap

#LeetCode #DSA #BinaryTree #PrefixSum #CodingChallenge #ProblemSolving #Java #TechJourney #Consistency
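The approach above, sketched in Java. This is the standard prefix-sum DFS pattern for this problem, using long sums to stay safe on large path totals.

```java
import java.util.HashMap;
import java.util.Map;

// LeetCode's conventional binary tree node.
class TreeNode {
    int val;
    TreeNode left, right;
    TreeNode(int val) { this.val = val; }
}

class PathSumSolution {
    public int pathSum(TreeNode root, int targetSum) {
        Map<Long, Integer> prefixCount = new HashMap<>();
        prefixCount.put(0L, 1); // empty prefix: counts paths that start at the root
        return dfs(root, 0L, targetSum, prefixCount);
    }

    private int dfs(TreeNode node, long currSum, int target, Map<Long, Integer> prefixCount) {
        if (node == null) return 0;
        currSum += node.val;
        // Any earlier prefix equal to (currSum - target) closes a valid path here.
        int paths = prefixCount.getOrDefault(currSum - target, 0);
        prefixCount.merge(currSum, 1, Integer::sum);
        paths += dfs(node.left, currSum, target, prefixCount);
        paths += dfs(node.right, currSum, target, prefixCount);
        // Backtrack: remove this node's prefix before returning to the parent.
        prefixCount.merge(currSum, -1, Integer::sum);
        return paths;
    }
}
```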
HashMap looks simple from the outside. But internally, there's a lot happening that most of us miss.

We all use HashMap — but have you ever tried explaining its internal working without getting stuck?

🔍 Internal Working of HashMap

➡️ When a HashMap is created
▪️ An array of 16 buckets is created (default capacity)
▪️ Each bucket acts like a linked list

➡️ When we insert (key, value)

map.put(key, value);

📃 Step 1: Hash Calculation
▪️ First, hashCode() is generated for the key

📃 Step 2: Index Calculation
▪️ The index is calculated using: index = (n - 1) & hash
▪️ n = size of the array (default 16), so the result is between 0 and 15

📃 Step 3: Storing in Bucket
▪️ This index decides which bucket the data is stored in
▪️ Each bucket stores data as a chain: (key, value) → (key, value)

🔁 What if the bucket already has data (collision)?

⛓️ HashMap checks for existing keys:
▫️ It uses the equals() method
▫️ It compares the new key with the existing keys

➡️ Possible cases:
1️⃣ If no key matches → the new (key, value) is added to the chain
2️⃣ If a key matches → the old value is overwritten

⚠️ Important Insight
➡️ A single bucket can hold multiple key-value pairs
➡️ Once a bucket reaches 8 nodes, the linked list is converted into a tree (since Java 8)

🔄 Resizing (Rehashing)
When the number of entries exceeds 75% (3/4) of the capacity:
▫️ Capacity doubles from 16 → 32
▫️ A new array is created
▫️ All existing elements are rehashed and moved

🔖 Key Points
▫️ Default capacity → 16
▫️ Uses a linked list (or a tree in newer versions)
▫️ Uses equals() to compare keys
▫️ Uses bitwise AND for index calculation

💭 One-line summary
➡️ HashMap is not random — it's intelligently organized

#Java #HashMap #JavaDeveloper #DataStructures #Programming #TechJourney #LearnBySharing #InterviewPrep #BackendDevelopment 🚀
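A tiny sketch of the index-calculation step, including the hash "spreading" (h ^ (h >>> 16)) that java.util.HashMap applies before masking. The class name is mine, for illustration.

```java
public class BucketIndexDemo {
    public static void main(String[] args) {
        int n = 16; // default capacity
        String key = "Apple";

        // HashMap spreads the high bits of the hash into the low bits
        // so that small tables still use the full 32-bit hash.
        int h = key.hashCode();
        int hash = h ^ (h >>> 16);

        // Because n is a power of two, (n - 1) & hash is a fast modulo:
        int index = (n - 1) & hash;
        System.out.println("Bucket index for \"Apple\": " + index); // a value in 0..15
    }
}
```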
📘 DSA Journey — Day 30

Today's focus: HashMap for frequency counting.

Problem solved:
• Number of Good Pairs (LeetCode 1512)

Concepts used:
• HashMap
• Frequency counting
• Incremental counting technique

Key takeaway:
The goal is to count pairs (i, j) such that nums[i] == nums[j] and i < j. Instead of checking all pairs (which would take O(n²)), we use a HashMap to track frequencies. As we iterate through the array:
• For each number, we check how many times it has already appeared
• That count directly contributes to the number of valid pairs
• Then we update its frequency in the map

This counts all pairs in a single pass with O(n) time complexity.

Continuing to strengthen fundamentals and consistency in problem solving.

#DSA #Java #LeetCode #CodingJourney
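The single-pass idea above, sketched in Java. The method name follows LeetCode's signature for this problem; each earlier occurrence of a number forms exactly one new good pair with the current index.

```java
import java.util.HashMap;
import java.util.Map;

class GoodPairs {
    public int numIdenticalPairs(int[] nums) {
        Map<Integer, Integer> freq = new HashMap<>();
        int pairs = 0;
        for (int num : nums) {
            pairs += freq.getOrDefault(num, 0); // pairs ending at this index
            freq.merge(num, 1, Integer::sum);   // then record this occurrence
        }
        return pairs; // e.g. {1,2,3,1,1,3} → 4
    }
}
```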
Most beginners think HashMap is "just a data structure." It's not. It's the backbone of performance in Java.

Here's how it actually works (simplified):
• Stores data as key-value pairs
• Uses hashing to find the bucket index
• Handles collisions using chaining / treeification

Why it matters: fast lookups → better performance → scalable systems.

If you don't understand this, you're just coding… not engineering.

Have you ever debugged a HashMap issue?
🚀 Day 15 of LeetCode Problem Solving

Solved today's problem — LeetCode #49: Group Anagrams 💻🔥

✅ Approach: HashMap + Sorting
⚡ Time Complexity: O(n * k log k)
📊 Space Complexity: O(n * k)

The task was to group strings that are anagrams of each other.

👉 I used a HashMap where:
Key = sorted version of the string
Value = list of anagrams

💡 Key Idea: If two strings are anagrams, their sorted forms are identical.

👉 Core Logic:
Convert the string → char array
Sort the array
Use the sorted string as the key
Store the original string in the map

💡 Key Learning: Transforming data (like sorting strings) can help in identifying patterns and grouping efficiently.

Consistency is the key — learning something new every day 🚀

#Day15 #LeetCode #DSA #Java #CodingJourney #ProblemSolving #100DaysOfCode
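A sketch of the described approach — the sorted string as the map key, computeIfAbsent to build each group:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class GroupAnagrams {
    public List<List<String>> groupAnagrams(String[] strs) {
        Map<String, List<String>> groups = new HashMap<>();
        for (String s : strs) {
            char[] chars = s.toCharArray();
            Arrays.sort(chars);                 // anagrams share the same sorted form
            String key = new String(chars);
            groups.computeIfAbsent(key, k -> new ArrayList<>()).add(s);
        }
        return new ArrayList<>(groups.values());
    }
}
```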
🚀 Day 99 of My 100 Days LeetCode Challenge | Java

Today's problem was all about randomization and linked list traversal — a nice break from heavy DP and matrices.

The challenge was to design a system that returns a random node's value from a singly linked list, ensuring that every node has an equal probability of being chosen. Since linked lists don't allow direct indexing, the key idea was to first determine the size of the list and then generate a random index to fetch the corresponding node. This approach ensures uniform randomness while keeping the implementation simple and efficient.

✅ Problem Solved: Linked List Random Node
✔️ All test cases passed (8/8)
⏱️ Runtime: 11 ms
🧠 Approach: Linked List Traversal + Randomization

🧩 Key Learnings:
● Randomization problems require ensuring a uniform probability distribution.
● Linked lists limit direct access, so traversal becomes essential.
● Precomputing the size can simplify random selection.
● Sometimes simple approaches are the most effective.
● Understanding data structure limitations helps in designing better solutions.

This problem highlighted how probability + data structures can come together in elegant ways.

🔥 Day 99 complete — sharpening my understanding of randomization and linked list behavior.

#LeetCode #100DaysOfCode #Java #LinkedList #Randomization #Algorithms #ProblemSolving #DSA #CodingJourney #Consistency
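A minimal sketch of the "precompute the size, then walk to a random index" design described above. The class name is mine; the node definition follows LeetCode's convention and is repeated here so the sketch is self-contained.

```java
import java.util.Random;

class RandomNodeSolution {
    // LeetCode's conventional singly linked list node.
    static class ListNode {
        int val;
        ListNode next;
        ListNode(int val) { this.val = val; }
    }

    private final ListNode head;
    private final int size;
    private final Random rand = new Random();

    RandomNodeSolution(ListNode head) {
        this.head = head;
        // Precompute the size once; each getRandom() call is then a bounded walk.
        int n = 0;
        for (ListNode cur = head; cur != null; cur = cur.next) n++;
        this.size = n;
    }

    // Every node is returned with probability 1/size.
    public int getRandom() {
        int steps = rand.nextInt(size); // uniform in [0, size)
        ListNode cur = head;
        for (int i = 0; i < steps; i++) cur = cur.next;
        return cur.val;
    }
}
```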