🚀 Day 6 – HashMap vs ConcurrentHashMap (When Thread Safety Matters)

Today I explored the difference between HashMap and ConcurrentHashMap.

We often use HashMap like this:
Map<String, Integer> map = new HashMap<>();

👉 But here's the catch: HashMap is not thread-safe.
In a multi-threaded environment:
- Multiple threads modifying it can lead to data inconsistency
- It can even cause infinite loops during resizing (rare but critical)

So what's the alternative?
Map<String, Integer> map = new ConcurrentHashMap<>();

👉 ConcurrentHashMap is designed for safe concurrent access.

💡 Key differences I learned:
✔ HashMap
- No synchronization
- Faster in single-threaded scenarios
✔ ConcurrentHashMap
- Uses fine-grained locking (segment-level before Java 8, bucket-level since Java 8)
- Allows multiple threads to read and write safely

⚠️ Insight: Instead of locking the whole map, it locks only a small part of it → much better throughput than wrapping a HashMap in traditional synchronization.

💡 Real-world use: Whenever multiple threads access shared data (caching, session data), ConcurrentHashMap is the safer choice. A small sketch follows below.

#Java #BackendDevelopment #Concurrency #JavaInternals #LearningInPublic
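A minimal sketch of why this matters, using a made-up word-count example (names and numbers are illustrative, not from the post):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WordCountDemo {
    // Shared across threads: ConcurrentHashMap keeps concurrent updates safe without external locking.
    private static final Map<String, Integer> counts = new ConcurrentHashMap<>();

    public static void main(String[] args) throws InterruptedException {
        Runnable worker = () -> {
            for (int i = 0; i < 1_000; i++) {
                // merge() is atomic on ConcurrentHashMap; the same loop on a plain HashMap
                // could lose updates or corrupt the table under contention.
                counts.merge("hits", 1, Integer::sum);
            }
        };
        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counts.get("hits")); // reliably 2000 here; not guaranteed with HashMap
    }
}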
More Relevant Posts
A recent issue reminded me that performance optimizations can sometimes become production problems.

We had an API that:
1️⃣ Fetches initial details
2️⃣ Extracts IDs from the response
3️⃣ Makes another database call to fetch larger secondary data

To speed up step 3, parallel processing was introduced using a fixed thread pool. Sounds reasonable, until load testing began.

Under heavy traffic, thread creation kept increasing across instances until limits were hit, leading to:
⚠️ "Can't create new native thread"

The interesting part? The optimization worked for individual requests. But at scale, the resource model didn't. A request with a small number of IDs didn't always need dedicated worker threads, yet threads were still being allocated repeatedly under concurrent load.

The fix was moving to a shared, reusable thread pool with better resource control (rough sketch below).

💡 My takeaway: Code that is fast in isolation may fail under concurrency. When designing for performance, it's important to ask:
- How does this behave at 1 request?
- How does this behave at 1,000 requests?
- What resources grow with traffic?

Scalability is often less about speed and more about control.

#BackendEngineering #Java #PerformanceTesting #Scalability #Concurrency
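A hedged sketch of the "shared, bounded pool" direction. The pool size, queue capacity, and the fetchById helper are illustrative assumptions, not details from the incident:

import java.util.List;
import java.util.concurrent.*;
import java.util.stream.Collectors;

public class SecondaryDataFetcher {
    // One shared, bounded pool per service instance instead of threads created per request.
    private static final ExecutorService SHARED_POOL = new ThreadPoolExecutor(
            8, 8, 60, TimeUnit.SECONDS,
            new ArrayBlockingQueue<>(200),            // bounded queue caps memory growth
            new ThreadPoolExecutor.CallerRunsPolicy() // back-pressure instead of unbounded threads
    );

    // Hypothetical stand-in for the real "fetch secondary data by ID" call.
    static String fetchById(String id) {
        return "data-for-" + id;
    }

    public static List<String> fetchAll(List<String> ids) {
        List<Future<String>> futures = ids.stream()
                .map(id -> SHARED_POOL.submit(() -> fetchById(id)))
                .collect(Collectors.toList());
        return futures.stream().map(f -> {
            try {
                return f.get(2, TimeUnit.SECONDS);    // fail fast rather than pile up work
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }).collect(Collectors.toList());
    }
}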
Solved a LeetCode hard in O(n). Couldn't use it in production.

Last week I optimized an API endpoint that was timing out under load. The problem: filtering 50,000 order records to find duplicates based on multiple fields (customer_id, amount, timestamp within a 5-minute window).

My first instinct? HashMap for O(n) lookup. Classic LeetCode muscle memory. I wrote it and tested locally: it flew through 100K records in 200ms. Pushed to staging. Staging worked. Production didn't.

Here's what LeetCode doesn't teach you: garbage collection pauses are real. That HashMap was getting rebuilt on every request. With 50 requests/second, the young-generation GC was running constantly. P99 latency spiked to 3 seconds because of GC pauses, even though P50 stayed at 200ms.

Sure, I could've increased the heap size. But that just delays full GC, making it worse when it finally hits.

The fix: moved duplicate detection to PostgreSQL using a composite index and a window function. Slightly slower on average (350ms), but consistent. No GC spikes, predictable P99.

LeetCode optimizes for algorithmic efficiency. Production optimizes for predictable latency under sustained load.

What's an optimization that looked perfect on paper but failed under real traffic?

#Java #Database #SystemDesign #LeetCode
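For context, a sketch of the in-memory approach described above (the one that later caused the GC pressure). The Order fields and the window bucketing are my own assumptions for illustration, not the real schema:

import java.time.Instant;
import java.util.*;

public class DuplicateOrderScan {
    // Hypothetical order record; field names are illustrative.
    record Order(long customerId, long amountCents, Instant createdAt) {}

    // O(n) duplicate check: bucket timestamps into 5-minute windows and key on
    // (customerId, amountCents, window). Simplification: two orders a few minutes apart
    // that straddle a window boundary are not flagged.
    static List<Order> findDuplicates(List<Order> orders) {
        Map<String, Order> seen = new HashMap<>(orders.size() * 2); // rebuilt on every request
        List<Order> duplicates = new ArrayList<>();
        for (Order o : orders) {
            long window = o.createdAt().getEpochSecond() / 300;     // 300 s = 5 minutes
            String key = o.customerId() + ":" + o.amountCents() + ":" + window;
            if (seen.putIfAbsent(key, o) != null) {
                duplicates.add(o);
            }
        }
        return duplicates;
    }
}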
🔥 Today's DSA Update – #Day67

Today I worked on Top K Frequent Elements.

💡 The key idea
This problem combines two important concepts:
- HashMap → to count frequency
- Min Heap (PriorityQueue) → to keep only the top K elements

💡 How the solution works
1. Count the frequency of each element using a HashMap
2. Use a Min Heap of size K
3. Add elements based on frequency
4. If the heap size exceeds K, remove the least frequent element
At the end, the heap contains the K most frequent elements.

⚙️ Complexity
⏱️ Time: O(n log k)
💾 Space: O(n)

🧠 What I learned today
HashMap helps us understand the data, and Heap helps us filter what we need. Sharing a compact Java version below.

#Java #DSA #Heap #HashMap #LeetCode #ConsistencyCurve
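A compact Java version of the approach described above (a sketch; variable names are my own):

import java.util.*;

public class TopKFrequent {
    // HashMap for counting, min-heap of size k for selection: O(n log k) time, O(n) space.
    public static int[] topKFrequent(int[] nums, int k) {
        Map<Integer, Integer> freq = new HashMap<>();
        for (int n : nums) {
            freq.merge(n, 1, Integer::sum);
        }
        // Min-heap ordered by frequency: the least frequent of the current top k sits on top.
        Comparator<Map.Entry<Integer, Integer>> byCount = Map.Entry.comparingByValue();
        PriorityQueue<Map.Entry<Integer, Integer>> heap = new PriorityQueue<>(byCount);
        for (Map.Entry<Integer, Integer> e : freq.entrySet()) {
            heap.offer(e);
            if (heap.size() > k) {
                heap.poll(); // evict the least frequent element
            }
        }
        int[] result = new int[k];
        for (int i = k - 1; i >= 0; i--) {
            result[i] = heap.poll().getKey();
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(topKFrequent(new int[]{1, 1, 1, 2, 2, 3}, 2))); // [1, 2]
    }
}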
I've used HashMap in every Spring Boot service I've written for the past year.

I was reading an article on DEV Community about HashMap vs ConcurrentHashMap. It made me realize there are 3 situations where HashMap was quietly the wrong choice in my own code.

Nobody talks about this. Every post explains how HashMap works. Nobody explains when to stop using it. Here's what I learned:

Situation 1 – Multiple threads accessing the same map
HashMap is not thread-safe. Period.
In a Spring Boot service, your beans are singletons by default. If two threads hit the same endpoint simultaneously and both modify a shared HashMap, you get data corruption, infinite loops during rehashing, or silent data loss. No exception. No warning. Just wrong data in production.
What to use instead → ConcurrentHashMap
It locks at bucket level, not the entire map. Reads are lock-free. Writes only lock the specific bucket being modified.

Situation 2 – You need atomic check-then-act operations
This is the one most developers miss completely.
// This looks safe. It is not.
if (!map.containsKey(key)) {
    map.put(key, value);
}
Two threads can both pass the containsKey check simultaneously and both execute put, overwriting each other's data.
What to use instead → ConcurrentHashMap.computeIfAbsent()
map.computeIfAbsent(key, k -> value);
One atomic operation. Zero race conditions.

Situation 3 – High-frequency reads with occasional writes
For scenarios with 90% reads and 10% writes, like a configuration cache or reference-data store, even ConcurrentHashMap's bucket locking adds unnecessary overhead.
What to use instead → ReadWriteLock around a HashMap, or a purpose-built cache like Caffeine (sketch below).
Reads acquire the shared lock simultaneously. Writes acquire the exclusive lock only when updating.

Most production bugs don't come from using the wrong algorithm. They come from using the right data structure in the wrong environment.

Have you ever hit a production issue caused by HashMap in a multithreaded context?

#Java #SpringBoot #HashMap #ConcurrentHashMap #BackendDevelopment #SoftwareEngineering #DSA #FAANG #Multithreading #JavaCollections
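A minimal sketch of the ReadWriteLock variant mentioned in Situation 3, assuming a simple config cache (the class and method names are illustrative):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ConfigCache {
    private final Map<String, String> config = new HashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public String get(String key) {
        lock.readLock().lock();      // many readers can hold the shared lock at once
        try {
            return config.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        lock.writeLock().lock();     // writers take the exclusive lock only while updating
        try {
            config.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }
}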
🚀 Day 16/100: Spring Boot From Zero to Production
Topic: Custom Logging

We covered the basics in the last post. Let's talk about how to do production-grade custom logging. In production, logs aren't for humans; they are for log aggregators like ELK, Splunk, or Datadog.

Structured Logging (JSON): Plain-text logs are hard to search. Recent Spring Boot versions support structured logging out of the box.
-> JSON allows you to filter by specific fields (e.g., userId or traceId) without complex regex.
-> Simply set logging.structured.format.console to a supported format such as ecs or logstash in your properties. No extra libraries required!

Custom XML Configurations: When you need log rotation or different patterns for different environments, use logback-spring.xml.
-> Use <springProfile name="prod"> to keep your production logs concise while Dev stays verbose.
-> Send logs to the console, files, and a remote socket simultaneously.

Contextual Logging (MDC): Ever tried to find the logs for a specific user request in a sea of data? Mapped Diagnostic Context (MDC) is your best friend.
-> Store a correlation_Id in the MDC at the start of a request (see the sketch below).
-> Every log line triggered by that request will automatically include that ID, making debugging a breeze.

Performance matters: in high-traffic apps, logging can become a bottleneck.
-> Use an AsyncAppender in your Logback config. It moves logging work to a separate thread so your main logic stays fast.
-> Avoid string concatenation: use placeholders like log.info("User {} logged in", username) to avoid wasted memory.

Feel free to add anything in the comments below.

#Java #SpringBoot #SoftwareDevelopment #100DaysOfCode #Backend
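A minimal sketch of the MDC idea as a Jakarta servlet filter; the header name and MDC key are my own choices, adapt them to your conventions:

import jakarta.servlet.*;
import jakarta.servlet.http.HttpServletRequest;
import org.slf4j.MDC;
import java.io.IOException;
import java.util.UUID;

public class CorrelationIdFilter implements Filter {

    private static final String MDC_KEY = "correlation_Id";

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        String correlationId = null;
        if (request instanceof HttpServletRequest http) {
            correlationId = http.getHeader("X-Correlation-Id"); // reuse an incoming id if present
        }
        if (correlationId == null || correlationId.isBlank()) {
            correlationId = UUID.randomUUID().toString();
        }
        MDC.put(MDC_KEY, correlationId);
        try {
            chain.doFilter(request, response); // every log line in this request now carries the id
        } finally {
            MDC.remove(MDC_KEY);               // avoid leaking ids across pooled threads
        }
    }
}

With this in place, adding %X{correlation_Id} to your Logback pattern prints the id on every line.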
Most developers say HashMap operations are O(1). That's not always true, and the misunderstanding shows up in interviews and real systems.

Let's break down what actually happens inside a HashMap.

Internally, HashMap uses an array of buckets:
Node<K, V>[] table (default size = 16)

Each bucket stores a structure like:
class Node<K, V> {
    int hash;
    K key;
    V value;
    Node<K, V> next;
}

When you insert a key-value pair:
1. A hash value is calculated from the key
2. The bucket index is found using hash % capacity (implemented as (capacity - 1) & hash, since the capacity is always a power of two)
3. The entry is placed in that bucket

Now the critical part: what if multiple keys map to the same bucket?
👉 A collision happens
Entries are stored as a linked list using the next pointer. As collisions increase, performance degrades.
Worst case:
👉 All keys land in the same bucket → O(n)

To handle this, Java introduced optimizations:
Load factor = 0.75 → when the number of entries exceeds capacity * 0.75, rehashing happens: the capacity doubles, which reduces collisions.
Treeify threshold = 8 → when a single bucket reaches 8 entries (and the table has at least 64 buckets), the linked list converts into a Red-Black Tree.

Now the complexity improves:
👉 From O(n) → O(log n)

So the reality is:
- Best case → O(1)
- Worst case → O(n)
- Optimized worst case → O(log n)

A small index/threshold demo follows below.

Challenge: what is the internal structure of LinkedHashMap? Let's see how you think about internal design 👇

#Java #BackendDevelopment #SoftwareEngineering #SystemDesign #HashMap #JavaDeveloper
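A tiny demo of the index calculation and the resize threshold described above (the numbers are just the defaults):

public class BucketIndexDemo {
    public static void main(String[] args) {
        int capacity = 16;                                   // default table size
        String key = "orderId";
        // HashMap spreads the high bits of hashCode() before indexing:
        int hash = key.hashCode() ^ (key.hashCode() >>> 16);
        // For a power-of-two capacity, modulo and the bitwise mask pick the same bucket.
        System.out.println("modulo index : " + Math.floorMod(hash, capacity));
        System.out.println("bitwise index: " + (hash & (capacity - 1)));
        // Resize trigger: with load factor 0.75, a 16-bucket table rehashes
        // when the 13th entry is added (16 * 0.75 = 12).
        System.out.println("resize threshold for capacity 16: " + (int) (capacity * 0.75f));
    }
}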
Some problems are not just about algorithms; they test your ability to design efficient data structures.

🚀 Day 118/365 – DSA Challenge
Solved: LRU Cache

Problem idea: Design a cache that supports get and put in O(1), while removing the least recently used (LRU) item when capacity is exceeded.

Efficient approach: Combine a HashMap + Doubly Linked List.

Steps:
1. Use a HashMap to store key → node (O(1) access)
2. Use a Doubly Linked List to maintain usage order
3. Most recently used → near the head
4. Least recently used → near the tail
5. On get/put, move the node to the front (mark it as recently used)
6. If capacity is exceeded, remove the node at the tail (the LRU entry)

This ensures all operations run in constant time.
⏱ Time: O(1) per operation
📦 Space: O(capacity)

Day 118/365 complete. 💻 247 days to go.

Code: https://lnkd.in/dad5sZfu

#DSA #Java #LinkedList #HashMap #SystemDesign #LeetCode #LearningInPublic
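The full hand-rolled solution is at the link above. As a side note, Java's LinkedHashMap already bundles the same HashMap + doubly linked list combination internally, so an access-ordered LinkedHashMap gives the same behavior in a few lines (a sketch, not the author's code):

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true);   // accessOrder = true: get() moves the entry to the "recent" end
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry once capacity is exceeded
    }

    public static void main(String[] args) {
        LruCache<Integer, String> cache = new LruCache<>(2);
        cache.put(1, "a");
        cache.put(2, "b");
        cache.get(1);             // key 1 is now most recently used
        cache.put(3, "c");        // evicts key 2
        System.out.println(cache.keySet()); // [1, 3]
    }
}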
🚀 How HashMaps Work Internally 🔍

1. Core Idea
A HashMap stores data as: 👉 Key → Value

⚙️ 2. Hashing the Key
When you insert a key:
• The hash function converts the key into an integer
• Index calculation: index = hash % array_size
👉 This decides the bucket location

📦 3. Buckets
Each index in the array is a bucket.
• It stores entries (key, value)
• It can hold multiple entries in case of collisions

⚠️ 4. Collision Handling (Deep Dive)
👉 Collision = multiple keys map to the same bucket

🔹 Why do collisions happen?
• Limited array size
• Imperfect hash functions

🔹 How HashMap handles it
1. Linked List (basic approach)
Bucket → (k1,v1) → (k2,v2) → (k3,v3)
• New entries are appended
• Search becomes linear in the worst case
2. Tree conversion (Java 8+)
When collisions in one bucket reach the treeify threshold (8):
👉 The linked list becomes a balanced Red-Black Tree
      k2
     /  \
   k1    k3
✔️ Improves lookup from O(n) → O(log n)
✔️ Prevents performance degradation

🔹 Lookup process on collision
1. Find the bucket via the hash
2. Traverse the list/tree
3. Match the key using equals()
👉 Both hashCode() and equals() are critical (see the demo below)

🔄 5. Rehashing (Deep Dive)
HashMap maintains a load factor (default 0.75).
👉 When it is exceeded:
• The array size is doubled
• All entries are redistributed

Have you ever faced performance issues due to collisions?

#SystemDesign #DataStructures #HashMap #BackendEngineering #Performance #Scalability
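A small demo of why hashCode() matters, using a deliberately bad key type (everything here is illustrative):

import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    // Every instance hashes to the same value, so all entries land in one bucket.
    // equals() still distinguishes keys, so lookups stay correct, but they degrade
    // to a linear (or, after treeification, logarithmic) scan of that bucket.
    static final class BadKey {
        private final int id;
        BadKey(int id) { this.id = id; }

        @Override public int hashCode() { return 42; } // constant hash -> one bucket
        @Override public boolean equals(Object o) {
            return o instanceof BadKey other && other.id == this.id;
        }
    }

    public static void main(String[] args) {
        Map<BadKey, String> map = new HashMap<>();
        for (int i = 0; i < 20; i++) {
            map.put(new BadKey(i), "value-" + i);
        }
        // Correct result, but every get() walks the same overloaded bucket.
        System.out.println(map.get(new BadKey(7))); // value-7
    }
}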
One fine morning, a customer reported: "File upload sometimes fails…"
Not always. Not consistently. Just sometimes. 😄 And of course, those are the best bugs.

👉 The system handles 1000+ uploads daily
👉 The issue happens randomly (10–20 times)
👉 Chunk upload + merge logic (unchanged for years)
👉 Stateless architecture (or so I thought…)

I jumped into debugging mode. After hours of checking:
NFS configs ✅
Multi-server behavior ✅
Retry logic ✅
Logs (100 times) ✅

Observation: chunks uploaded through Server A were not visible on Server B immediately (a 10–15 second delay). Confusion level: 🔥🔥🔥

Then I did something simple (and often ignored)…
👉 Compared the old vs new code.

Guess what changed? Just one line removed (thanks to a Sonar cleanup 😅):
HttpSession session = request.getSession();

That innocent line had been silently adding a JSESSIONID, making requests sticky and hiding the real problem all along.

💡 So for years, reality was something like this: a stateless system... except when the upload API enters the chat 😄 Or simply: stateless most of the time, secretly stateful during uploads 🎭

The moment that "unused variable" was removed…
💥 Load balancing started behaving correctly
💥 NFS delays became visible
💥 The hidden dependency got exposed
💥 The bug said: Hello 👋 I was always here

And the best realization:
👉 My application is perfectly stateless…
👉 Until the user hits the upload API and boom, it becomes emotional (stateful) 🤣🤣🤣

Lesson learned: sometimes the bug is not in new code… it's in removing the wrong old code 😄 And sometimes… your system isn't broken, your assumptions are.

Still one mystery remains:
👉 Why exactly NFS behaved that way (I never got a perfect answer 😅)

#BackendStories #ProductionIssues #Java #NFS
Thread pool types:

1. Fixed Thread Pool (a fixed number of threads)
Method: Executors.newFixedThreadPool(int n)
Internally uses a LinkedBlockingQueue.
Use case: steady load where you want to strictly limit resource usage.

2. Cached Thread Pool (creates new threads as needed but reuses existing threads if available; a thread idle for 60 seconds is terminated)
Method: Executors.newCachedThreadPool()
Internally uses a SynchronousQueue.
Use case: applications with many short-lived asynchronous tasks (push notifications & SMS alerts).

3. Scheduled Thread Pool (can schedule tasks to run after a given delay or to execute periodically)
Method: Executors.newScheduledThreadPool(int corePoolSize)
Internally uses a DelayedWorkQueue.
Use case: background cleanup tasks, heartbeat signals, or polling.

4. Single Thread Executor (a single worker thread executes all tasks; guarantees tasks run sequentially)
Method: Executors.newSingleThreadExecutor()
Internally uses a LinkedBlockingQueue.
Use case: tasks that must be processed one at a time in a specific order (e.g., event sequencing, ledger accounting).

Quick demo below.

#Java #BackendDevelopment #SoftwareEngineering #MultiThreading #Concurrency #JavaPerformance #CodingTips #Programming #SystemDesign
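A quick illustration of the four factory methods described above (pool sizes and tasks are illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ThreadPoolTypesDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService fixed = Executors.newFixedThreadPool(4);                  // steady, bounded load
        ExecutorService cached = Executors.newCachedThreadPool();                 // many short-lived tasks
        ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2); // delayed/periodic work
        ExecutorService single = Executors.newSingleThreadExecutor();             // strict ordering

        fixed.submit(() -> System.out.println("fixed pool task"));
        cached.submit(() -> System.out.println("cached pool task"));
        scheduled.schedule(() -> System.out.println("runs after 1 second"), 1, TimeUnit.SECONDS);
        single.submit(() -> System.out.println("always runs alone, in order"));

        TimeUnit.SECONDS.sleep(2); // let the scheduled task fire before shutting down
        fixed.shutdown();
        cached.shutdown();
        scheduled.shutdown();
        single.shutdown();
    }
}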
Great comparison! HashMap works well in single-threaded scenarios, but ConcurrentHashMap is a better choice when thread safety is required. Its ability to handle concurrent reads and writes efficiently makes it very useful in real-world applications.