🚀 Day 17 – equals() and hashCode(): A Crucial Contract

Today I explored why "equals()" and "hashCode()" are so important—especially when using collections like "HashMap" or "HashSet".

---

👉 By default:
- "equals()" → compares object references
- "hashCode()" → generates a hash based on the object's identity (typically derived from its memory address)

But in real applications, we override them.

---

💡 The contract I learned:
✔ If two objects are equal according to "equals()", they must have the same "hashCode()".

---

⚠️ What happens if we break this?
- "HashMap" may fail to retrieve values
- Duplicate entries may appear in a "HashSet"
- It leads to very tricky bugs

---

👉 Example scenario: two objects hold the same data, "equals()" returns true, but their "hashCode()" values differ.
👉 Result: hash-based collections treat them as different objects 😬

---

💡 Real takeaway: whenever you override "equals()", always override "hashCode()" as well. This is not just theory—it directly affects how collections behave internally.

#Java #BackendDevelopment #HashMap #JavaInternals #LearningInPublic
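A minimal sketch of the contract in practice, using a hypothetical `Point` class (the name and fields are illustrative):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Objects;

// Hypothetical Point class that keeps the equals()/hashCode() contract:
// both methods are derived from the same fields.
final class Point {
    private final int x;
    private final int y;

    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point other = (Point) o;
        return x == other.x && y == other.y;
    }

    @Override
    public int hashCode() {
        // Uses the same fields as equals(), so equal points hash identically
        return Objects.hash(x, y);
    }
}

class EqualsContractDemo {
    public static void main(String[] args) {
        Point a = new Point(1, 2);
        Point b = new Point(1, 2);
        System.out.println(a.equals(b));                  // true
        System.out.println(a.hashCode() == b.hashCode()); // true
        // Because the contract holds, the set sees one element, not two
        System.out.println(new HashSet<>(List.of(a, b)).size()); // 1
    }
}
```

Deleting the `hashCode()` override here would make the `HashSet` report 2 elements even though `equals()` returns true: exactly the "duplicate entries" bug described above.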
Why equals() and hashCode() Matter in Java Collections
More Relevant Posts
I've used HashMap in every Spring Boot service I've written for a year. I was reading an article on DEV Community about HashMap vs ConcurrentHashMap, and it made me realize three situations where HashMap was quietly the wrong choice in my own code.

Nobody talks about this. Every post explains how HashMap works. Nobody explains when to stop using it. Here's what I learned:

Situation 1 — Multiple threads accessing the same map
HashMap is not thread safe. Period. In a Spring Boot service, your beans are singletons by default. If two threads hit the same endpoint simultaneously and both modify a shared HashMap, you get data corruption, infinite loops during rehashing, or silent data loss. No exception. No warning. Just wrong data in production.

What to use instead → ConcurrentHashMap. It locks at the bucket level, not the entire map. Reads are completely lock free; writes only lock the specific bucket being modified.

Situation 2 — You need atomic check-then-act operations
This is the one most developers miss completely.

```java
// This looks safe. It is not.
if (!map.containsKey(key)) {
    map.put(key, value);
}
```

Two threads can both pass the containsKey check simultaneously and both execute put, overwriting each other's data.

What to use instead → ConcurrentHashMap.computeIfAbsent():

```java
map.computeIfAbsent(key, k -> value);
```

One atomic operation. Zero race conditions.

Situation 3 — High frequency reads with occasional writes
For scenarios with 90% reads and 10% writes — like a configuration cache or reference data store — even ConcurrentHashMap's bucket locking adds unnecessary overhead.

What to use instead → a ReadWriteLock guarding a HashMap, or a purpose-built cache like Caffeine. Readers acquire the shared lock simultaneously; writers acquire the exclusive lock only when updating.

Most production bugs don't come from using the wrong algorithm. They come from using the right data structure in the wrong environment.

Have you ever hit a production issue caused by HashMap in a multithreaded context?
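Situation 3 can be sketched roughly like this. `ConfigCache` is a hypothetical class name, and a production cache like Caffeine would add eviction and expiry on top; this only illustrates the read/write lock split:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical read-mostly cache: many concurrent readers share the read
// lock; the occasional writer takes the exclusive write lock.
class ConfigCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    String get(String key) {
        lock.readLock().lock();   // shared: readers don't block each other
        try {
            return map.get(key);
        } finally {
            lock.readLock().unlock();
        }
    }

    void put(String key, String value) {
        lock.writeLock().lock();  // exclusive: blocks readers and writers
        try {
            map.put(key, value);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```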
#Java #SpringBoot #HashMap #ConcurrentHashMap #BackendDevelopment #SoftwareEngineering #DSA #FAANG #Multithreading #JavaCollections
🚀 Day 59: Diving into Arrays — The Foundation of Data Structures 📊

Today, I shifted my focus from OOP design back to the core of data handling in Java: Arrays. Understanding how to store and manage collections of data efficiently is where the real logic begins!

1. What is an Array?
An array is a fixed-size, contiguous block of memory that stores multiple elements of the same data type. It's the simplest way to group related data (like a list of 100 integers) under a single variable name.

2. Ways to Declare an Array 📝
Java gives us flexibility in how we set them up:
▫️ Declaration & instantiation: `int[] numbers = new int[5];` (creating an empty "shelf" with 5 slots).
▫️ Inline initialization: `int[] numbers = {10, 20, 30, 40};` (creating the shelf and filling it at the same time).

3. Accessing & Assigning Values 🔑
The index rule: Java arrays are zero-indexed, meaning the first element is at index 0.
▫️ Assigning: use the index to target a specific slot: `numbers[0] = 99;`
▫️ Accessing: retrieve the data just as easily: `System.out.println(numbers[0]);`

💡 My Key Takeaway: the biggest "catch" with arrays is that they are fixed in size. Once you define an array of 5 elements, you can't suddenly make it 6. This makes them incredibly fast for memory access but requires careful planning during the design phase!

Question for the Developers: We all start with arrays, but at what point in your projects do you usually decide to switch to an ArrayList? Is it always about the dynamic size, or are there other factors? 👇

#Java #DataStructures #Arrays #100DaysOfCode #BackendDevelopment #CodingFundamentals #CleanCode #LearningInPublic #JavaDeveloper
10000 Coders Meghana M
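The declaration and access rules above, as one tiny runnable sketch:

```java
class ArrayBasics {
    public static void main(String[] args) {
        int[] numbers = new int[5];      // empty "shelf" with 5 slots, all 0
        numbers[0] = 99;                 // assign via zero-based index
        int[] inline = {10, 20, 30, 40}; // declare and fill at the same time

        System.out.println(numbers[0]);    // 99
        System.out.println(inline.length); // 4, fixed once created
    }
}
```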
Problem :- Two Sum (LeetCode 1)

Problem Statement :- Given an array of integers `nums` and an integer `target`, return the indices of the two numbers that add up to the target. Assume exactly one solution exists, and you may not use the same element twice.

Approach 1 :- Brute Force (nested loops)
i - Check every pair of elements
ii - If nums[i] + nums[j] == target, return the indices
iii - Time Complexity: O(n²)

```java
class Solution {
    public int[] twoSum(int[] nums, int target) {
        for (int i = 0; i < nums.length; i++) {
            for (int j = i + 1; j < nums.length; j++) {
                if (nums[i] + nums[j] == target) {
                    return new int[]{i, j};
                }
            }
        }
        return new int[]{};
    }
}
```

Approach 2 :- Optimal (HashMap)
i - Store each number and its index in a HashMap
ii - For each element, check whether (target - current) already exists
iii - Time Complexity: O(n)

```java
import java.util.HashMap;

class Solution {
    public int[] twoSum(int[] nums, int target) {
        HashMap<Integer, Integer> map = new HashMap<>();
        for (int i = 0; i < nums.length; i++) {
            int complement = target - nums[i];
            if (map.containsKey(complement)) {
                return new int[]{map.get(complement), i};
            }
            map.put(nums[i], i);
        }
        return new int[]{};
    }
}
```

Key Takeaway :- Instead of checking every pair, we store previously seen elements and look up the required complement directly.

#Java #DSA #LeetCode #CodingJourney #LearnInPublic #SoftwareEngineering #HashMap
Day 95 of #365DaysOfLeetCode Challenge

Today’s problem: **Path Sum III (LeetCode 437)**

This one looks like a typical tree problem… but the optimal solution introduces a powerful concept: **Prefix Sum + HashMap in Trees**

💡 **Core Idea:** Instead of checking every possible path (which would be slow), we track **running sums** as we traverse the tree.
👉 If at any point `currentSum - targetSum` exists in our map → we’ve found a valid path!

📌 **Approach:**
* Use DFS to traverse the tree
* Maintain a running `currSum`
* Store prefix sums in a HashMap
* Check how many times `(currSum - targetSum)` has appeared
* Backtrack to maintain correct state

⚡ **Time Complexity:** O(n)
⚡ **Space Complexity:** O(n)

**What I learned today:** Prefix Sum isn’t just for arrays — it can be **beautifully extended to trees**. This problem completely changed how I look at tree path problems:
👉 From brute-force traversal → to optimized prefix tracking

💭 **Key takeaway:** When a problem involves “subarrays/paths with a given sum,” think: ➡️ Prefix Sum + HashMap

#LeetCode #DSA #BinaryTree #PrefixSum #CodingChallenge #ProblemSolving #Java #TechJourney #Consistency
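A sketch of the prefix-sum DFS described above, assuming the standard LeetCode `TreeNode` shape (`long` sums guard against overflow on deep paths):

```java
import java.util.HashMap;
import java.util.Map;

// Standard LeetCode binary-tree node
class TreeNode {
    int val;
    TreeNode left, right;
    TreeNode(int val) { this.val = val; }
}

class PathSumIII {
    public int pathSum(TreeNode root, int targetSum) {
        Map<Long, Integer> prefix = new HashMap<>();
        prefix.put(0L, 1); // empty prefix: allows paths starting at the root
        return dfs(root, 0L, targetSum, prefix);
    }

    private int dfs(TreeNode node, long currSum, int target, Map<Long, Integer> prefix) {
        if (node == null) return 0;
        currSum += node.val;
        // number of paths ending at this node with the target sum
        int count = prefix.getOrDefault(currSum - target, 0);
        prefix.merge(currSum, 1, Integer::sum);
        count += dfs(node.left, currSum, target, prefix);
        count += dfs(node.right, currSum, target, prefix);
        prefix.merge(currSum, -1, Integer::sum); // backtrack before returning
        return count;
    }
}
```

The final `merge(..., -1, ...)` is the backtracking step from the bullet list: it removes this node's prefix sum so sibling subtrees don't see it.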
Why using a built-in HashMap is often more efficient than creating an array of max size 👇

Many beginners solve frequency/counting problems by creating an array with the maximum possible size. It works sometimes—but it’s not always the smartest choice.

✅ Why HashMap is better:

1️⃣ Memory Efficient: if values are sparse (like keys = 2, 1000, 50000), an array wastes huge amounts of memory. A HashMap stores only the keys that actually exist.
2️⃣ Dynamic Size: no need to guess the max range in advance; a HashMap grows as the data grows.
3️⃣ Faster for Real-World Data: with average O(1) insert/search/delete, HashMap is highly optimized internally.
4️⃣ Handles Any Key Type: arrays need integer indexes; a HashMap can use strings, objects, IDs, etc.
5️⃣ Cleaner Logic: instead of managing ranges and unused slots, you focus directly on the key-value mapping.

📌 Example: need the frequency of the numbers [100, 5000, 100000]?
Array → needs a huge size
HashMap → stores only 3 keys

💡 Arrays are still great when the range is small and continuous. But when the data is sparse or its range is unknown, HashMap wins.

What would you choose in coding interviews: Array or HashMap?

#DataStructures #HashMap #CodingInterview #Programming #JavaScript #Cpp #Java #DSA #SoftwareEngineering #Developers #Tech #LearningToCode #CompetitiveProgramming #CodingTips
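A small sketch of the sparse case from the example: counting [100, 5000, 100000] with a HashMap stores only the keys that occur, instead of allocating an array sized to the maximum value:

```java
import java.util.HashMap;
import java.util.Map;

class FrequencyCount {
    public static void main(String[] args) {
        int[] data = {100, 5000, 100000, 100}; // sparse values
        Map<Integer, Integer> freq = new HashMap<>();
        for (int n : data) {
            freq.merge(n, 1, Integer::sum); // O(1) average per insert
        }
        // Only 3 keys stored, versus an int[100001] for the array approach
        System.out.println(freq.size());   // 3
        System.out.println(freq.get(100)); // 2
    }
}
```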
#100DaysOfCode | Day 2 of my LeetCode challenge.

Today’s problem: 1365. How Many Numbers Are Smaller Than the Current Number.

While the problem seems simple, it’s a perfect example of how choosing the right data structure can drastically change performance. Here is how I broke it down:

1. The Brute Force Approach
The simplest way is to use nested loops to compare every number with every other number.
Logic: for each element, loop through the entire array and count smaller values.
Time Complexity: O(N²)
Space Complexity: O(N) (to store the result)

2. The Sorting + HashMap Approach
A more scalable way is to sort the numbers. In a sorted array, a number's first index equals the count of numbers smaller than it.
Logic: clone the array, sort it, and store the first occurrence of each number in a HashMap.
Time Complexity: O(N log N) (due to sorting)
Space Complexity: O(N) (to store the map)
Use: works for any range of numbers (including very large or negative ones).

3. The Frequency Array (Counting Sort Logic)
Since the problem constraints are small (0 to 100), this is the most optimized solution.
Logic: count the frequency of each number in an array of size 101, then compute a running prefix sum.
Time Complexity: O(N) (linear time)
Space Complexity: O(1) (the frequency array size is constant)

#LeetCode #100DaysOfCode #Java #SoftwareEngineering #DataStructures #Algorithms
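Approach 3 might look like this in Java (a sketch assuming the usual LeetCode method signature):

```java
class SmallerNumbers {
    // Frequency-array approach, valid because 0 <= nums[i] <= 100
    public int[] smallerNumbersThanCurrent(int[] nums) {
        int[] count = new int[101];
        for (int n : nums) count[n]++;   // frequency of each value
        for (int i = 1; i <= 100; i++) {
            count[i] += count[i - 1];    // running prefix sum
        }
        int[] result = new int[nums.length];
        for (int i = 0; i < nums.length; i++) {
            // count[n - 1] = how many values are strictly smaller than n
            result[i] = nums[i] == 0 ? 0 : count[nums[i] - 1];
        }
        return result;
    }
}
```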
💡 **Java Streams: more than just fancy loops**

If you're still using for loops everywhere, you're probably leaving readability (and sometimes performance) on the table. Java Streams bring a declarative approach to data processing — you describe what you want, not how to iterate.

🔹 **How it works**
Streams process data in a pipeline:
Source → collection, array, etc.
Intermediate ops → map, filter, sorted
Terminal ops → collect, forEach, reduce

🔹 **Example**

```java
List<String> names = List.of("Ana", "Bruno", "Carlos", "Amanda");

List<String> result = names.stream()
        .filter(name -> name.startsWith("A"))
        .map(String::toUpperCase)
        .sorted()
        .toList();
```

🔹 **Key methods**
filter() → select data
map() → transform data
flatMap() → flatten nested structures
reduce() → aggregate values
collect() → build results

🔹 **Why it matters**
✔ Cleaner and more expressive code
✔ Easy parallelization with .parallelStream()
✔ Encourages immutability and functional style

⚠️ **But beware:** Streams are powerful — not always faster. Overusing them in hot paths can hurt performance.

👉 **Rule of thumb: use Streams for clarity first, optimization later.**

#Java #SoftwareEngineering #CleanCode #TechTips #Backend
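A small companion sketch for `flatMap()` and `reduce()`, the two key methods not shown in the example above:

```java
import java.util.List;

class StreamOps {
    public static void main(String[] args) {
        List<List<Integer>> nested = List.of(List.of(1, 2), List.of(3, 4), List.of(5));

        // flatMap flattens the nested lists into one stream of elements;
        // reduce then aggregates them into a single value
        int sum = nested.stream()
                .flatMap(List::stream)
                .reduce(0, Integer::sum);

        System.out.println(sum); // 15
    }
}
```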
🚀 Day 5 – What Really Happens Inside a HashMap?

We use "HashMap" almost everywhere, but its internal working is quite interesting.

```java
Map<String, Integer> map = new HashMap<>();
map.put("key", 1);
```

What happens internally?
👉 Step 1: "hashCode()" is calculated for the key
👉 Step 2: The hash is converted into an index (bucket location)
👉 Step 3: The key-value entry is stored in that bucket

💡 But what if two keys land in the same bucket?
✔ This is called a collision
✔ Java handles it with a linked list (converted to a balanced tree if the bucket grows large)

⚠️ Important:
- Retrieval ("get") again uses "hashCode()" + "equals()"
- So both must be properly implemented for custom objects

💡 Real takeaway:
Good hashing = better performance
Poor hashing = more collisions = slower operations

This is why "HashMap" is fast on average (O(1)) but can degrade if hashing is poor.

#Java #BackendDevelopment #HashMap #JavaInternals #LearningInPublic
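As a rough sketch of steps 1 and 2: OpenJDK's HashMap spreads the hash (`h ^ (h >>> 16)`) and masks it with `capacity - 1`, which works because the capacity is always a power of two. The helper below imitates that logic for illustration only:

```java
class BucketIndexDemo {
    // Mix the high bits of the hash into the low bits, as HashMap does,
    // so that small tables still use the full hash
    static int spread(int h) {
        return h ^ (h >>> 16);
    }

    // capacity is a power of two, so (capacity - 1) is an all-ones bitmask
    static int bucketIndex(Object key, int capacity) {
        return (capacity - 1) & spread(key.hashCode());
    }

    public static void main(String[] args) {
        // Bucket index for "key" in a default-sized (16-bucket) table
        System.out.println(bucketIndex("key", 16));
    }
}
```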
Day 25 of my #30DayCodeChallenge: The Art of Categorization!

The Problem: Group Anagrams. Given an array of strings, group the words that are rearrangements of each other.

The Logic: This problem is a perfect example of using hashing and canonical forms to organize unstructured data efficiently.

1. Identifying the "Signature": The core insight is that all anagrams, when sorted alphabetically, become the exact same string. I used this sorted version as the unique key (the signature) for each group.

2. The HashMap Strategy: I used a HashMap<String, List<String>>.
Key: the sorted version of the word.
Value: a list of all original words that match that sorted key.

3. Efficient Lookups: Using computeIfAbsent, I streamlined initializing lists and adding words in a single pass. This keeps the code clean and the logic tight.

Complexity Analysis:
Time Complexity: O(N · K log K), where N is the number of strings and K is the maximum length of a string (due to sorting each word).
Space Complexity: O(N · K) to store the grouped strings in the map.

One step closer to mastery. Onward to Day 26!

#Java #Algorithms #DataStructures #Hashing #ProblemSolving #150DaysOfCode #SoftwareEngineering
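The strategy described above might look like this (a sketch using the common LeetCode signature):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class GroupAnagrams {
    public List<List<String>> groupAnagrams(String[] strs) {
        Map<String, List<String>> groups = new HashMap<>();
        for (String word : strs) {
            char[] chars = word.toCharArray();
            Arrays.sort(chars);              // canonical "signature"
            String key = new String(chars);
            // computeIfAbsent creates the list only when the key is new
            groups.computeIfAbsent(key, k -> new ArrayList<>()).add(word);
        }
        return new ArrayList<>(groups.values());
    }
}
```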
=== When to use @BatchSize vs JOIN FETCH in Spring Data JPA ===

- You don't always need the collection → @BatchSize (stays lazy)
- You're paginating the parent list → @BatchSize (JOIN FETCH breaks pagination)
- You always need the collection → JOIN FETCH (one query, most efficient)
- Multiple collections on one entity → @BatchSize (JOIN FETCH causes a cartesian explosion)

#SpringBoot #SpringDataJPA #Java #AI #SpringAI
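For illustration, a hypothetical mapping sketch showing where `@BatchSize` sits. `Author` and `Book` are made-up entity names, and `@BatchSize` is Hibernate-specific (from `org.hibernate.annotations`), not standard JPA:

```java
import jakarta.persistence.*;
import org.hibernate.annotations.BatchSize;
import java.util.List;

@Entity
class Author {
    @Id @GeneratedValue
    Long id;

    // Stays lazy; when the collection is first touched, Hibernate loads
    // books for up to 20 pending Authors in one IN-clause query,
    // instead of firing one query per Author (the N+1 problem)
    @OneToMany(mappedBy = "author", fetch = FetchType.LAZY)
    @BatchSize(size = 20)
    List<Book> books;
}

@Entity
class Book {
    @Id @GeneratedValue
    Long id;

    @ManyToOne
    Author author;
}

// By contrast, JOIN FETCH is a query-level choice:
//   select a from Author a join fetch a.books
// One query, but it materializes the whole collection up front,
// which is what breaks pagination on the Author side.
```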