🚀 Day 47 of #100DaysOfCode – LeetCode Problem #1013: Partition Array Into Three Parts With Equal Sum

💬 Problem Summary:
Given an array of integers, determine whether it can be split into three non-empty parts such that:
Each part has the same sum, and
The partitions appear in order (left → mid → right).
Formally, we need to find indices i + 1 < j such that:
sum(arr[0..i]) == sum(arr[i+1..j-1]) == sum(arr[j..end])

🧩 Examples:
Input: [0,2,1,-6,6,-7,9,1,2,0,1] → Output: true ✔️ (three parts summing to 3 each)
Input: [0,2,1,-6,6,7,9,-1,2,0,1] → Output: false

🧠 Logic: This challenge focuses on finding equal prefix sums:
1️⃣ Compute the total sum of the array.
2️⃣ If the total sum is not divisible by 3 → ❌ impossible.
3️⃣ Walk through the array, accumulating values.
4️⃣ Each time a segment equals target = totalSum / 3, increase the partition count and reset the accumulator.
5️⃣ If we find 2 such partitions before the last index → the array can be divided (the third part is whatever remains).

💻 Java Solution:

class Solution {
    public boolean canThreePartsEqualSum(int[] arr) {
        int total = 0;
        for (int num : arr) total += num;
        if (total % 3 != 0) return false;

        int target = total / 3;
        int sum = 0, count = 0;
        for (int i = 0; i < arr.length; i++) {
            sum += arr[i];
            if (sum == target) {
                count++;
                sum = 0;
                if (count == 2 && i < arr.length - 1) return true;
            }
        }
        return false;
    }
}

⚙️ Complexity: Time: O(n) | Space: O(1)
✅ Result: Accepted (Runtime: 0 ms)
💬 Takeaway: A great pattern-recognition problem: once you realize that only two valid partitions need to be found (the third is implied), the solution becomes clean and efficient.
🧠 Array Memory Model & Random Access Mechanism

1️⃣ Where arrays live
✅ An array is an object and is always stored in heap memory.

2️⃣ Contiguous Memory Allocation
🔑 The JVM looks for a continuous block of memory.
Size required = (size of data type × number of elements)
Example: int[] arr = new int[5];
int = 4 bytes → total = 5 × 4 = 20 bytes
The JVM allocates 20 contiguous bytes in the heap.

3️⃣ Address vs Index vs Value (VERY IMPORTANT)
They are three different things 👇
Address ----- the actual memory location (hidden from the Java developer)
Index ----- the position number (0, 1, 2, …)
Value ----- the data stored at that position
📌 You never see addresses in Java
📌 You only work with indexes

4️⃣ How does the JVM access arr[i] so fast?
🔑 Direct Address Calculation (Random Access)
Internally the JVM does something like:
address = baseAddress + (index × sizeOfDataType)
Example: arr[3]
The JVM knows the base address of arr, knows int = 4 bytes, and calculates base + (3 × 4).
👉 No loop, no search, no traversal
👉 Direct jump to the memory location
📌 This is why arrays are very fast for reading

5️⃣ Why are arrays fast to access but slow to insert into?
✅ Fast access: thanks to random access, the JVM jumps straight to the address; reading arr[i] and overwriting arr[i] are both O(1).
❌ Slow insert / delete: the size is fixed, so inserting in the middle means shifting elements, and the memory block itself can never be resized.
📌 This is why arrays are rigid.

6️⃣ Can we insert into an array?
❌ No. Arrays are fixed size. Once created with new int[5], the size is 5 forever; the JVM cannot resize that memory block.
👉 Any “insertion” really means: create a new array, copy the old values, add the new value.
📌 This is expensive.

🧠 Java Memory: Objects & References (Quick Clarity)
👉 Objects (including arrays) are stored in the Heap 🧱
👉 Reference variables are stored in the Stack 📍
👉 The reference variable stores the base address of the object:
✔️ not the value
✔️ not index 0
✔️ but the starting address of the array object itself
👉 Inside the array object, every element’s value sits at a computable offset from that base address 📦

🔖 Frontlines EduTech (FLM)
#Java #JVM #HeapMemory #StackMemory #ProgrammingConcepts #JavaInternals
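The “create a new array, copy, add” workflow above can be sketched in a few lines. This is a minimal illustration; insertAt is a hypothetical helper, not a JDK method:

```java
import java.util.Arrays;

public class ArrayInsertDemo {
    // Hypothetical helper: "inserting" into a fixed-size array really means
    // allocating a new, larger array and copying every element (O(n)).
    static int[] insertAt(int[] src, int index, int value) {
        int[] dst = new int[src.length + 1];                 // new contiguous block
        System.arraycopy(src, 0, dst, 0, index);             // copy prefix
        dst[index] = value;                                  // place new element
        System.arraycopy(src, index, dst, index + 1,         // shift suffix right
                         src.length - index);
        return dst;                                          // old array becomes garbage
    }

    public static void main(String[] args) {
        int[] arr = {10, 20, 40, 50};
        int[] bigger = insertAt(arr, 2, 30);
        System.out.println(Arrays.toString(bigger)); // [10, 20, 30, 40, 50]
    }
}
```

This copy-everything cost is exactly what ArrayList pays under the hood whenever it grows.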
How HashMap resizing silently kills performance in Java

That innocent HashMap that slowed down production! Imagine this code:

Map<String, User> users = new HashMap<>();
for (User user : incomingUsers) {
    users.put(user.getId(), user);
}

Looks harmless, right? Now what actually happens behind the scenes.

Step 1: HashMap starts small
Default capacity = 16

Step 2: You keep adding entries
At 12 entries (16 × 0.75), the map says: "Time to grow"

Step 3: Resize kicks in
A new array is created, all existing entries are rehashed, and buckets are redistributed. You didn't write this code, but the JVM just did a lot of work for you.

Step 4: Repeat under load
Add thousands of entries → multiple resizes → CPU spikes → latency jumps

The silent killer: this happens without warnings. No exceptions. No logs.

The fix (simple but powerful): pre-size the map. Note that the constructor argument is the capacity, not an entry limit; resizing still kicks in once size exceeds capacity × loadFactor, so derive the capacity from the expected size:

Map<String, User> users = new HashMap<>((int) Math.ceil(expectedSize / 0.75));
OR
new HashMap<>(capacity, loadFactor);

Takeaway: HashMap is fast until resizing becomes frequent. Performance bugs often come from defaults we never questioned.

#Java #HashMap #PerformanceEngineering #BackendDevelopment
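A minimal sketch of the fix, where presizedFor is a hypothetical helper (Guava ships the same arithmetic as Maps.newHashMapWithExpectedSize): the capacity is sized as expected / 0.75 so the resize threshold is never crossed.

```java
import java.util.HashMap;
import java.util.Map;

public class PresizedMapDemo {
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    // Hypothetical helper: pick an initial capacity large enough that
    // `expected` entries stay below the resize threshold (capacity * 0.75).
    static <K, V> Map<K, V> presizedFor(int expected) {
        int capacity = (int) Math.ceil(expected / (double) DEFAULT_LOAD_FACTOR);
        return new HashMap<>(capacity, DEFAULT_LOAD_FACTOR);
    }

    public static void main(String[] args) {
        // Threshold is at least 1000 from the start, so no rehashing mid-loop.
        Map<String, Integer> users = presizedFor(1000);
        for (int i = 0; i < 1000; i++) {
            users.put("user-" + i, i);
        }
        System.out.println(users.size()); // 1000
    }
}
```

(HashMap rounds the requested capacity up to the next power of two internally, which only makes the threshold more generous.)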
You can work with Java data structures for a long time and still mix these two up. Arrays and ArrayLists may look similar, but they are designed for different use cases. Arrays have a fixed size. Once created, their length cannot change. ArrayLists are dynamic. They grow and shrink as your data changes. If this difference isn’t clear, your code either becomes rigid or unnecessarily complex. Once you understand when to use Arrays and when to switch to ArrayList, your Java code becomes much cleaner. Save this post. It will help you choose the right data structure every time. 💾 #Java #CoreJava #JavaBasics #Arrays #ArrayList #JavaCollections #DataStructures #LearnJava #StudentDeveloper
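The fixed-vs-dynamic difference above in one runnable snippet:

```java
import java.util.ArrayList;
import java.util.List;

public class ArrayVsArrayListDemo {
    public static void main(String[] args) {
        // Array: length is fixed at creation and can never change.
        int[] scores = new int[3];
        scores[0] = 90;
        scores[1] = 85;
        scores[2] = 70;
        // scores[3] = 75;         // would throw ArrayIndexOutOfBoundsException

        // ArrayList: grows as elements are added.
        List<Integer> dynamicScores = new ArrayList<>();
        dynamicScores.add(90);
        dynamicScores.add(85);
        dynamicScores.add(70);
        dynamicScores.add(75);     // no problem -- the list resizes itself

        System.out.println(scores.length);        // 3, forever
        System.out.println(dynamicScores.size()); // 4
    }
}
```

Rule of thumb from the post: reach for an array when the size is known up front, and for ArrayList when the data set changes.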
Ever spent hours debugging why your HashMap.get(key) returns null, even though you know you added that key? The answer almost always lies in one place: violating the equals() and hashCode() contract.

In Java, these two methods are fundamentally linked. If you override one, you must override the other.

The Golden Rule: if two objects are equal (as determined by your custom equals() method comparing their content/fields), then calling hashCode() on both objects must return the exact same integer value.

Why it matters for Collections: hash-based collections (HashMap, HashSet) use hashCode() to find the correct internal "bucket" (index) where the object is stored.
1. Putting data: when you put(key, value), Java gets the hashCode() of the key to know where to put it.
2. Getting data: when you get(key), Java gets the hashCode() of the lookup key and goes straight to that bucket.

If your equals() is correct but your hashCode() is wrong, the object goes into one bucket during the put operation, but when you try to get it, the key generates a different hash code, sending the lookup to the wrong bucket. The object is lost.

Best practices for a correct implementation:
1. Use the same fields: ensure both equals() and hashCode() use the exact same set of fields, typically the ones that define the object's logical identity (e.g., id, name).
2. Null-safe comparison: use the utility method java.util.Objects.hash(field1, field2, ...) to implement hashCode(). It automatically handles null checks and uses a prime multiplier for better distribution, minimizing collisions.
3. Check for type and null: inside your equals() method, always check reference equality (this == o), null (o == null), and type equivalence (getClass() != o.getClass()).

#Java #SoftwareDevelopment #BackendEngineering #HashMap #DataStructures
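A minimal sketch of a class that honors all three best practices (UserKey is a hypothetical example class):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical key class: its logical identity is (id, name).
class UserKey {
    private final int id;
    private final String name;

    UserKey(int id, String name) {
        this.id = id;
        this.name = name;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;                               // reference equality
        if (o == null || getClass() != o.getClass()) return false; // null + type check
        UserKey other = (UserKey) o;
        return id == other.id && Objects.equals(name, other.name);
    }

    @Override
    public int hashCode() {
        // Same fields as equals(), null-safe, good distribution.
        return Objects.hash(id, name);
    }
}

public class ContractDemo {
    public static void main(String[] args) {
        Map<UserKey, String> roles = new HashMap<>();
        roles.put(new UserKey(1, "Ada"), "admin");
        // A *different* but equal instance lands in the same bucket:
        System.out.println(roles.get(new UserKey(1, "Ada"))); // admin
    }
}
```

Delete the hashCode() override and the lookup in main would come back null, exactly the bug described above.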
Problem: Given an integer array nums sorted in non-decreasing order, remove the duplicates in-place such that each unique element appears only once. The relative order of the elements should be kept the same. Consider the number of unique elements in nums to be k. After removing duplicates, return the number of unique elements k. The first k elements of nums should contain the unique numbers in sorted order. The remaining elements beyond index k - 1 can be ignored.

Idea: Since the array is sorted and the relative order must be preserved, two pointers are enough. Keep a slow pointer first at the last unique element written so far, and scan ahead with a fast pointer second. Whenever nums[second] differs from nums[first], advance first and copy nums[second] into that slot. When the scan finishes, first + 1 is the count of unique elements.

class Main {
    public static void main(String[] args) {
        int[] nums = {0, 0, 1, 2, 3, 3, 3, 3, 5, 6};
        int uniqueCount = removeDuplicates(nums);
        System.out.println("Total unique elements after removing duplicates: " + uniqueCount);
    }

    static int removeDuplicates(int[] nums) {
        int first = 0; // index of the last unique element written so far
        for (int second = 1; second < nums.length; second++) {
            if (nums[first] != nums[second]) {
                first++;
                nums[first] = nums[second];
            }
        }
        return first + 1;
    }
}

#java8 #corejava #programming #practice #java #code #leetcodeproblem #problemsolving
Have you ever found yourself writing long loops to filter, map, or sort data in Java? Meet the 𝗝𝗔𝗩𝗔 𝗦𝗧𝗥𝗘𝗔𝗠 𝗔𝗣𝗜, a powerful abstraction introduced in Java 8 that lets you process collections declaratively and efficiently.

🧩 What is the Stream API?
The Stream API provides a clean, functional approach to working with collections like List, Set, etc., allowing operations like:
≫ Filtering
≫ Mapping (transforming)
≫ Sorting
≫ Aggregating (like sum, count, average)
All without mutating the original data structure.

🔄 How It Works: 3 Building Blocks of a Stream
1. Source: where the stream comes from, like a List, Set, or even an array.
e.g. List<Integer> numbers = Arrays.asList(1, 2, 3, 4);
2. Intermediate Operations: lazy operations that transform the data. Examples: filter(), map(), sorted()
3. Terminal Operation: triggers the execution and returns a result or a side effect. Examples: collect(), forEach(), count()

▶ Example:
List<String> activeUserEmails = users.stream()
    .filter(User::isActive)
    .map(User::getEmail)
    .sorted()
    .collect(Collectors.toList());

#Java #StreamAPI #FunctionalProgramming #BackendDevelopment #JavaTips #CleanCode #SoftwareEngineering #CodingBestPractices #SoftwareEngineer
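Since the pipeline above assumes a User class, here is the same three-building-block shape with plain strings, runnable as-is:

```java
import java.util.List;
import java.util.stream.Collectors;

public class StreamDemo {
    // Source -> intermediate operations -> terminal operation.
    static List<String> cleanedSorted(List<String> emails) {
        return emails.stream()                    // 1. source
                .filter(e -> !e.isEmpty())        // 2. intermediate: drop empties (lazy)
                .map(String::toUpperCase)         // 2. intermediate: transform (lazy)
                .sorted()                         // 2. intermediate: order (lazy)
                .collect(Collectors.toList());    // 3. terminal: triggers execution
    }

    public static void main(String[] args) {
        List<String> emails = List.of("zoe@x.com", "", "ada@x.com");
        System.out.println(cleanedSorted(emails)); // [ADA@X.COM, ZOE@X.COM]
    }
}
```

Note that the original list is untouched; the pipeline produces a brand-new list.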
𝗪𝗵𝘆 𝗦𝗽𝗿𝗶𝗻𝗴’𝘀 𝗗𝗲𝗽𝗲𝗻𝗱𝗲𝗻𝗰𝘆 𝗜𝗻𝗷𝗲𝗰𝘁𝗶𝗼𝗻 𝗜𝘀 𝗦𝗺𝗮𝗿𝘁𝗲𝗿 𝗧𝗵𝗮𝗻 𝗬𝗼𝘂 𝗧𝗵𝗶𝗻𝗸: 𝗧𝗵𝗲 𝗣𝗼𝘄𝗲𝗿 𝗼𝗳 𝗚𝗲𝗻𝗲𝗿𝗶𝗰𝘀 🧬

In complex Spring applications, managing multiple beans of the same generic interface can get tricky. One often overlooked feature of the Spring Core container is its ability to resolve dependencies based on 𝗴𝗲𝗻𝗲𝗿𝗶𝗰 𝘁𝘆𝗽𝗲 𝗶𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻.

𝗧𝗵𝗲 𝗣𝗿𝗼𝗯𝗹𝗲𝗺: When you have a generic interface (e.g., Processor<T>) and multiple implementations (Processor<Order>, Processor<Invoice>), developers often wrongly assume they need string-based @Qualifier annotations to distinguish them.

𝗧𝗵𝗲 𝗦𝗽𝗿𝗶𝗻𝗴 𝗪𝗮𝘆: Spring can use generic type metadata to disambiguate beans natively. It creates a specific match based on the type argument.

𝗪𝗵𝘆 𝗶𝘁 𝘄𝗼𝗿𝗸𝘀: Even though Java has type erasure, Spring preserves the generic type signature in the bean definition, allowing precise injection without ambiguity.

𝗕𝗲𝗻𝗲𝗳𝗶𝘁𝘀:
1. 𝗧𝘆𝗽𝗲 𝗦𝗮𝗳𝗲𝘁𝘆 – No fragile string-based qualifiers.
2. 𝗖𝗹𝗲𝗮𝗻𝗲𝗿 𝗖𝗼𝗱𝗲 – Leverages the type system rather than magical strings.
3. 𝗕𝗲𝘁𝘁𝗲𝗿 𝗗𝗲𝘀𝗶𝗴𝗻 – This is the magic behind how Spring Data repositories work internally!

It’s not Spring Boot magic — it’s 𝗦𝗽𝗿𝗶𝗻𝗴 𝗖𝗼𝗿𝗲 doing smart type resolution.

#SpringBoot #DependencyInjection #Java #CodingBestPractices #BackendDevelopment
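The "why it works" claim can be checked with plain reflection: erasure removes generics from instances, but the implements clause survives in class metadata. Processor and Order below are hypothetical illustration types, and Spring's ResolvableType wraps this same machinery:

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

// Hypothetical generic interface and payload types.
interface Processor<T> { void process(T item); }
class Order { }
class OrderProcessor implements Processor<Order> {
    public void process(Order item) { }
}

public class GenericMetadataDemo {
    // Reads the type argument out of the "implements Processor<...>" clause,
    // which the class file keeps even after erasure.
    static Class<?> processedType(Class<?> beanClass) {
        for (Type t : beanClass.getGenericInterfaces()) {
            if (t instanceof ParameterizedType) {
                ParameterizedType pt = (ParameterizedType) t;
                if (pt.getRawType() == Processor.class) {
                    return (Class<?>) pt.getActualTypeArguments()[0];
                }
            }
        }
        throw new IllegalArgumentException(beanClass + " does not implement Processor<T>");
    }

    public static void main(String[] args) {
        // Instances lose their generics, but the class metadata does not:
        System.out.println(processedType(OrderProcessor.class).getSimpleName()); // Order
    }
}
```

This is exactly the information the container uses to match a Processor<Order> injection point to the right bean without any @Qualifier.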
☕ 𝗛𝗼𝘄 𝗝𝗮𝘃𝗮 𝗖𝗼𝗺𝗽𝗶𝗹𝗲𝘀 𝗦𝗼𝘂𝗿𝗰𝗲 𝗖𝗼𝗱𝗲 𝘁𝗼 𝗕𝘆𝘁𝗲𝗰𝗼𝗱𝗲

Ever wondered what happens after you write Java code? Let’s break it down step by step 👇

🧑💻 𝗔𝘁 𝗖𝗼𝗺𝗽𝗶𝗹𝗲 𝗧𝗶𝗺𝗲
1️⃣ 𝗪𝗿𝗶𝘁𝗲 𝗖𝗼𝗱𝗲 — You write Java code in a .java file using classes, methods, and objects.
2️⃣ 𝗟𝗲𝘅𝗶𝗰𝗮𝗹 𝗔𝗻𝗮𝗹𝘆𝘀𝗶𝘀 — The compiler (javac) scans the source and converts it into tokens ➡️ keywords, identifiers, literals, symbols.
3️⃣ 𝗦𝘆𝗻𝘁𝗮𝘅 𝗔𝗻𝗮𝗹𝘆𝘀𝗶𝘀 — Checks if the code follows Java grammar rules and builds a Parse Tree 🌳
4️⃣ 𝗦𝗲𝗺𝗮𝗻𝘁𝗶𝗰 𝗔𝗻𝗮𝗹𝘆𝘀𝗶𝘀 — Validates data types, variable declarations, and rule correctness ➡️ catches type mismatches and invalid references.
5️⃣ 𝗕𝘆𝘁𝗲𝗰𝗼𝗱𝗲 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝗼𝗻 — Generates platform-independent bytecode stored in a .class file.
6️⃣ 𝗢𝗽𝘁𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻 — Applies basic optimizations to improve execution efficiency ⚡

⚙️ 𝗔𝘁 𝗥𝘂𝗻𝘁𝗶𝗺𝗲 (𝗜𝗻𝘀𝗶𝗱𝗲 𝘁𝗵𝗲 𝗝𝗩𝗠)
7️⃣ 𝗖𝗹𝗮𝘀𝘀 𝗟𝗼𝗮𝗱𝗲𝗿 — Loads .class files into memory.
8️⃣ 𝗕𝘆𝘁𝗲𝗰𝗼𝗱𝗲 𝗩𝗲𝗿𝗶𝗳𝗶𝗲𝗿 — Ensures safety and prevents illegal or malicious operations 🔒
9️⃣ 𝗜𝗻𝘁𝗲𝗿𝗽𝗿𝗲𝘁𝗲𝗿 / 𝗝𝗜𝗧 𝗖𝗼𝗺𝗽𝗶𝗹𝗲𝗿 — Converts bytecode into native machine code ➡️ JIT boosts performance by compiling hot code paths 🚀

✅ 𝗧𝗵𝗲 𝗥𝗲𝘀𝘂𝗹𝘁
✔️ Platform independence
✔️ Secure execution
✔️ Automatic memory management
✔️ Runtime performance optimization

𝗪𝗿𝗶𝘁𝗲 𝗼𝗻𝗰𝗲, 𝗿𝘂𝗻 𝗮𝗻𝘆𝘄𝗵𝗲𝗿𝗲 isn’t magic — it’s the JVM at work ☕💡

Which part of the Java compilation process did you first learn about? 👇

#Java #JVM #Bytecode #JavaInternals #SoftwareEngineering #BackendDevelopment
Problem 2: Binary Tree Paths
Level: Easy–Medium | Platform: LeetCode 257

Return all root-to-leaf paths as strings like:
"1->2->5"
"1->3"

🔹 Java Code

import java.util.*;

class Solution {
    // TreeNode is LeetCode's provided class: int val; TreeNode left, right;
    public List<String> binaryTreePaths(TreeNode root) {
        List<String> result = new ArrayList<>();
        if (root == null) return result;
        dfs(root, "", result);
        return result;
    }

    private void dfs(TreeNode node, String path, List<String> result) {
        if (node == null) return;

        // add current node to path
        path = path + node.val;

        // leaf -> store path
        if (node.left == null && node.right == null) {
            result.add(path);
            return;
        }

        // continue path with "->"
        path = path + "->";
        dfs(node.left, path, result);
        dfs(node.right, path, result);
    }
}

🔹 Explanation
Build the path as a string while traversing. When a leaf is reached, store the full path. No explicit backtracking is needed, because each recursive call receives its own immutable String copy.
Time: O(n × path length) | Space: O(h) recursion depth

#DSA #DataStructuresAndAlgorithms #Coding #Programmer #LeetCode #CodeEveryday #JavaDSA #CodingPractice #ProblemSolving #CP #CompetitiveProgramming #DailyCoding #TechJourney #CodingCommunity #DeveloperLife #100DaysOfCode #CodeWithMe #LearnToCode #GeekForGeeks #CodingMotivation
Serialization in Java: a concept that’s more practical than it seems 🧩

This week I worked with data persistence using object serialization, and wanted to share some knowledge about it.

Serialization = converting an object into a stream of bytes so it can be stored or transferred.
Deserialization = reconstructing that object back in memory later.

Why does this matter? 🤔 Because objects only exist while the application is running. If the program closes, everything in RAM is gone. 💀

By serializing, we can:
• store entire objects (not just text)
• persist collections like ArrayList<Persona>
• reload application state later via deserialization
• avoid manual parsing or intermediate formats
• create lightweight data storage when full DBs are unnecessary

🎯 Example workflow: Object → serialize → file.dat → deserialize → object restored

It’s a straightforward mechanism that solves a very real problem: keeping state between executions without a database. Practical, simple, and surprisingly powerful.

I’d love to hear your thoughts or experiences using serialization! 👇

#Java #SoftwareEngineering #Serialization #OOP #DataPersistence #BackendDevelopment
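A minimal round-trip sketch of the Object → file.dat → Object workflow, where Persona is a hypothetical class matching the ArrayList<Persona> example:

```java
import java.io.*;
import java.util.ArrayList;
import java.util.List;

// Hypothetical class; implementing Serializable opts it into the mechanism.
class Persona implements Serializable {
    private static final long serialVersionUID = 1L;
    final String name;
    Persona(String name) { this.name = name; }
}

public class SerializationDemo {
    // Object -> bytes on disk -> object restored.
    static List<Persona> roundTrip(List<Persona> people, File file)
            throws IOException, ClassNotFoundException {
        // Serialize: write the whole collection as a byte stream.
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(new ArrayList<>(people));
        }
        // Deserialize: reconstruct the collection in memory.
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
            @SuppressWarnings("unchecked")
            List<Persona> restored = (List<Persona>) in.readObject();
            return restored;
        }
    }

    public static void main(String[] args) throws Exception {
        File file = File.createTempFile("people", ".dat");
        List<Persona> restored = roundTrip(List.of(new Persona("Ada")), file);
        System.out.println(restored.get(0).name); // Ada
        file.delete();
    }
}
```

No manual parsing, no intermediate format: the entire object graph (list plus its elements) survives the restart.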