𝗗𝗮𝘆 𝟰𝟳/𝟭𝟬𝟬 | 𝗗𝗲𝗹𝗲𝘁𝗲 𝗡𝗼𝗱𝗲𝘀 𝗙𝗿𝗼𝗺 𝗟𝗶𝗻𝗸𝗲𝗱 𝗟𝗶𝘀𝘁 𝗣𝗿𝗲𝘀𝗲𝗻𝘁 𝗶𝗻 𝗔𝗿𝗿𝗮𝘆

Day 47 ✅ — Hash set meets linked list.

𝗧𝗼𝗱𝗮𝘆'𝘀 𝗣𝗿𝗼𝗯𝗹𝗲𝗺:
✅ 𝗣𝗿𝗼𝗯𝗹𝗲𝗺 #𝟯𝟮𝟭𝟳: Delete Nodes From Linked List Present in Array (Medium)

𝗪𝗵𝗮𝘁 𝗖𝗹𝗶𝗰𝗸𝗲𝗱:
Remove all nodes whose values appear in a given array. Simple concept, but an efficient implementation requires combining two data structures. The key? Convert the array to a 𝗛𝗮𝘀𝗵 𝗦𝗲𝘁 for O(1) lookups, then traverse the linked list with the dummy node pattern, checking each node against the set.

Eighteen days of linked list practice means the traversal logic is automatic. My focus was purely on optimization—hash set instead of repeated array searching.

𝗠𝘆 𝗔𝗽𝗽𝗿𝗼𝗮𝗰𝗵:
👉 Convert the nums array to a HashSet
👉 Use a dummy node for clean edge-case handling
👉 Traverse the list with prev and curr pointers
👉 If curr.val is in the set, skip the node
👉 Otherwise, move forward

Time: O(n + m), Space: O(m), where m = array size

𝗠𝘆 𝗥𝗲𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻:
Eighteen linked list problems in, the dummy node pattern is muscle memory. When fundamentals are solid, you can focus on what actually matters—choosing the right data structure. Hash sets turn O(n×m) solutions into O(n+m). That's the difference between passing and timing out.

𝗖𝗼𝗱𝗲: 🔗 https://lnkd.in/gFMUU6bn

𝗗𝗮𝘆 𝟰𝟳/𝟭𝟬𝟬 ✅ | 𝟱𝟯 𝗺𝗼𝗿𝗲 𝘁𝗼 𝗴𝗼!

#100DaysOfCode #LeetCode #LinkedList #HashSet #DataStructures #CodingInterview #SoftwareEngineer #Java #Algorithms #TimeComplexity #Programming #Optimization
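The steps above can be sketched in Java. This is an illustrative sketch, not the linked solution; the `ListNode` shape and `modifiedList` signature follow the usual LeetCode convention:

```java
import java.util.HashSet;
import java.util.Set;

class ListNode {
    int val;
    ListNode next;
    ListNode(int val) { this.val = val; }
}

class Solution {
    // Delete every node whose value appears in nums.
    public ListNode modifiedList(int[] nums, ListNode head) {
        Set<Integer> toDelete = new HashSet<>();
        for (int n : nums) toDelete.add(n);   // O(m) build, O(1) lookups after

        ListNode dummy = new ListNode(0);     // dummy node handles deletions at the head
        dummy.next = head;
        ListNode prev = dummy, curr = head;
        while (curr != null) {
            if (toDelete.contains(curr.val)) {
                prev.next = curr.next;        // unlink curr; prev stays put
            } else {
                prev = curr;
            }
            curr = curr.next;
        }
        return dummy.next;
    }
}
```

Without the set, each node would trigger an O(m) array scan, which is exactly the O(n×m) trap the post describes.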
𝐃𝐚𝐲 𝟏𝟓 – 𝐃𝐒𝐀 𝐉𝐨𝐮𝐫𝐧𝐞𝐲 | 𝐀𝐫𝐫𝐚𝐲𝐬 🚀

Today’s problem focused on grouping logic using arrays + hashing concepts 🗂️ and the different ways to build unique keys.

𝐏𝐫𝐨𝐛𝐥𝐞𝐦 𝐒𝐨𝐥𝐯𝐞𝐝
• 🔤 Group Anagrams

🔹 𝐆𝐫𝐨𝐮𝐩 𝐀𝐧𝐚𝐠𝐫𝐚𝐦𝐬

𝐀𝐩𝐩𝐫𝐨𝐚𝐜𝐡 𝟏 – 𝐒𝐨𝐫𝐭𝐢𝐧𝐠 𝐁𝐚𝐬𝐞𝐝
• 🔄 Converted each word into a char array
• 📊 Sorted the characters
• 🔑 Used the sorted string as a key in a HashMap
• 📦 Grouped words sharing the same sorted key

𝐀𝐩𝐩𝐫𝐨𝐚𝐜𝐡 𝟐 – 𝐅𝐫𝐞𝐪𝐮𝐞𝐧𝐜𝐲 𝐀𝐫𝐫𝐚𝐲 𝐁𝐚𝐬𝐞𝐝
• 🔢 Created a frequency array of size 26
• 🧮 Built a unique key from the character counts
• ⚡ Used computeIfAbsent for cleaner insertion
• 📌 Avoided sorting for better efficiency

𝐊𝐞𝐲 𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠𝐬
• 🧠 Anagrams share identical character distributions
• 🔑 The way you build the key defines performance
• ⚡ The frequency-array approach avoids O(k log k) sorting per word
• 📊 Combining arrays with hashing improves efficiency

🧠 𝐓𝐚𝐤𝐞𝐚𝐰𝐚𝐲
Even in array problems, the real power sometimes lies in how you represent the data 🔑

15 days consistent 🔥 On to Day 16 🚀

#DSA #Arrays #Strings #LeetCode #Java #ProblemSolving #DailyCoding #LearningInPublic #SoftwareDeveloper
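Approach 2 can be sketched as follows. The class name is mine, and I use `Arrays.toString` on the count array as the unique key, which is one common way to realize the "key from character counts" idea:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class GroupAnagrams {
    // Group anagrams using a character-count key instead of sorting each word.
    static List<List<String>> groupAnagrams(String[] strs) {
        Map<String, List<String>> groups = new HashMap<>();
        for (String word : strs) {
            int[] freq = new int[26];               // counts for 'a'..'z'
            for (char c : word.toCharArray()) freq[c - 'a']++;
            String key = Arrays.toString(freq);     // identical for all anagrams of word
            groups.computeIfAbsent(key, k -> new ArrayList<>()).add(word);
        }
        return new ArrayList<>(groups.values());
    }
}
```

Building the key is O(k) per word, versus O(k log k) for the sorting-based key.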
Day 3 — DSA Practice

Missed posting yesterday, but the consistency is still going strong. Solved 2 problems:

• Two Sum II (Sorted Array) — LeetCode 167

Solved it in two ways:
- Using a HashMap
- Using the two-pointer approach

It was interesting to compare both approaches. While the HashMap works well, the array is already sorted, so the two-pointer method is cleaner and more space efficient.

Complexity:
- HashMap approach → O(n) time, O(n) space
- Two-pointer approach → O(n) time, O(1) space

Two pointers clearly felt more optimal here.

• Number of Steps to Reduce a Number in Binary Representation to One — LeetCode 1404

Initially, I tried converting the binary string using parseInt() and parseLong(), but it kept failing with overflow for large inputs. That made me realize that converting large binary strings directly isn’t always safe. Instead, I processed the string character by character using:

s.charAt(i) - '0'

This avoids overflow and lets us simulate the operations efficiently.

Complexity:
- O(n) time, where n is the length of the binary string
- O(1) extra space

Key Takeaways:
- Always consider constraints before choosing data types.
- Sorted-array problems often hint toward two-pointer solutions.
- Small details (like converting a char to an int properly) can make a big difference.

Learning > Just solving. Day 3 done. ✅

#DSA #Java #ProblemSolving #LearningInPublic #PlacementPreparation #Consistency
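The two-pointer approach for the first problem can be sketched like this (class name is mine; the 1-indexed return matches the LeetCode 167 statement):

```java
class TwoSumSorted {
    // Two-pointer scan over a sorted array (LeetCode 167).
    static int[] twoSum(int[] numbers, int target) {
        int lo = 0, hi = numbers.length - 1;
        while (lo < hi) {
            int sum = numbers[lo] + numbers[hi];
            if (sum == target) return new int[]{lo + 1, hi + 1}; // answer is 1-indexed
            if (sum < target) lo++;   // need a larger value: move the left pointer
            else hi--;                // need a smaller value: move the right pointer
        }
        return new int[]{-1, -1};     // unreachable per the problem's guarantee
    }
}
```

Each step discards one element for good, which is why the scan is O(n) with O(1) extra space.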
While practicing Quick Sort further, I looked into the issue of pivot selection.

In the basic implementation, choosing a fixed pivot (like the first element) can lead to very unbalanced partitions, especially if the array is already sorted or nearly sorted. In such cases, the time complexity can degrade to O(n²).

Approach used:
Instead of always choosing the same pivot position, the pivot can be selected from a different position (like the middle element or a random index) before performing the partition. The rest of the algorithm remains the same:
- choose a pivot
- move it to its correct position using partition logic
- recursively sort the left and right parts

Core Quick Sort logic:

static void quickSort(int[] arr, int lo, int hi) {
    if (lo >= hi) return;
    int idx = partition(arr, lo, hi);
    quickSort(arr, lo, idx - 1);
    quickSort(arr, idx + 1, hi);
}

Things that became clear:
- the pivot choice strongly affects performance
- randomized or varied pivot selection helps avoid worst-case patterns
- average time complexity remains O(n log n)
- recursion stack space is typically O(log n)

This made it clear that Quick Sort’s efficiency depends not only on the algorithm itself but also on how intelligently the pivot is chosen.

#dsa #algorithms #quicksort #sorting #java #learninginpublic
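One possible `partition` to pair with the `quickSort` above is a Lomuto-style partition with a random pivot. This is a sketch of the randomized-pivot idea, not necessarily the partition the author used:

```java
import java.util.Random;

class RandomizedQuickSort {
    static final Random RNG = new Random();

    static void quickSort(int[] arr, int lo, int hi) {
        if (lo >= hi) return;
        int idx = partition(arr, lo, hi);
        quickSort(arr, lo, idx - 1);
        quickSort(arr, idx + 1, hi);
    }

    // Lomuto partition with a random pivot to dodge sorted-input worst cases.
    static int partition(int[] arr, int lo, int hi) {
        int r = lo + RNG.nextInt(hi - lo + 1);
        swap(arr, r, hi);                     // move the random pivot to the end
        int pivot = arr[hi], i = lo;
        for (int j = lo; j < hi; j++) {
            if (arr[j] <= pivot) swap(arr, i++, j);
        }
        swap(arr, i, hi);                     // pivot lands at its final sorted index
        return i;
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
}
```

With a random pivot, no fixed input pattern (sorted, reverse-sorted) can reliably force the O(n²) case.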
🔥 Day 349 – Daily DSA Challenge! 🔥

Problem: 🧱 Pyramid Transition Matrix

You are stacking blocks to form a pyramid. Each block is represented by a letter. Given a bottom row and a list of allowed triples "ABC", meaning blocks A and B can support C on top, return true if you can build the pyramid to the top.

💡 Key Insight — Bitmask + DFS
Instead of storing allowed transitions as lists, we encode them using bitmasks. For each pair (A, B) we store the possible top blocks in a single bitmask.

Example: for triples ABC and ABD, mask[A][B] = {C, D} stored as bits.

🧠 Recursive Construction
We build the pyramid row by row:
1️⃣ Take the current row → cur
2️⃣ Generate the next row → next
3️⃣ For each adjacent pair (cur[i], cur[i+1]), try every allowed block above
4️⃣ Recursively continue until length = 1 → pyramid complete

⚡ Optimization Trick
To extract candidate blocks from a bitmask:

bit = m & -m

This isolates the lowest set bit, letting us iterate through candidates efficiently.

⚙️ Complexity
Let n be the bottom length.
✅ Time Complexity: ~O(7ⁿ) worst case, but pruning via the allowed transitions keeps it practical
✅ Space Complexity: O(n) recursion stack

💬 Challenge for you
1️⃣ Why does using bitmasks make transitions faster than lists?
2️⃣ How would you add memoization to avoid recomputing rows?
3️⃣ Can you solve this using DP with states instead of DFS?

#DSA #Day349 #LeetCode #DFS #Bitmask #Backtracking #Java #ProblemSolving #KeepCoding
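The bitmask + DFS idea can be sketched as below. Class and method names are mine, the mask is sized 26×26 for generality, and the triples in the usage test are my own example rather than one from the problem statement:

```java
import java.util.List;

class PyramidTransition {
    // mask[a][b] is a bitmask of letters that may sit on top of the pair (a, b).
    static boolean pyramidTransition(String bottom, List<String> allowed) {
        int[][] mask = new int[26][26];
        for (String t : allowed) {
            mask[t.charAt(0) - 'A'][t.charAt(1) - 'A'] |= 1 << (t.charAt(2) - 'A');
        }
        return dfs(bottom, "", 0, mask);
    }

    // Build `next` (the row above `cur`) one position at a time.
    static boolean dfs(String cur, String next, int i, int[][] mask) {
        if (cur.length() == 1) return true;            // reached the apex
        if (i == cur.length() - 1) return dfs(next, "", 0, mask); // row done, go up
        int m = mask[cur.charAt(i) - 'A'][cur.charAt(i + 1) - 'A'];
        while (m != 0) {
            int bit = m & -m;                          // isolate lowest set bit
            int c = Integer.numberOfTrailingZeros(bit);
            if (dfs(cur, next + (char) ('A' + c), i + 1, mask)) return true;
            m -= bit;                                  // clear that candidate, try next
        }
        return false;                                  // no block fits this pair
    }
}
```

The `m & -m` trick walks only the set bits, so a pair with two legal top blocks costs two iterations, not a scan over the whole alphabet.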
𝐃𝐚𝐲 𝟑𝟓 – 𝐃𝐒𝐀 𝐉𝐨𝐮𝐫𝐧𝐞𝐲 | 𝐁𝐢𝐧𝐚𝐫𝐲 𝐓𝐫𝐞𝐞𝐬 🚀

Today’s problem focused on reconstructing a binary tree from its inorder and postorder traversals.

𝐏𝐫𝐨𝐛𝐥𝐞𝐦 𝐒𝐨𝐥𝐯𝐞𝐝
• Construct a Binary Tree from Inorder and Postorder Traversal

𝐀𝐩𝐩𝐫𝐨𝐚𝐜𝐡
• In postorder traversal, the last element is the root
• Used a HashMap to quickly locate the root in the inorder array
• Elements to the left of the root in inorder form the left subtree
• Elements to the right form the right subtree
• Recursively built the tree by splitting the inorder range

Since postorder processes nodes as Left → Right → Root, we consume it from the back and build the right subtree first, then the left subtree.

𝐊𝐞𝐲 𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠𝐬
• Traversal patterns can reconstruct tree structures
• Inorder helps divide the tree into subtrees
• A HashMap makes root lookups O(1)
• Recursive thinking simplifies complex structures

𝐂𝐨𝐦𝐩𝐥𝐞𝐱𝐢𝐭𝐲
• Time: O(n)
• Space: O(n)

𝐓𝐚𝐤𝐞𝐚𝐰𝐚𝐲
Understanding traversal patterns helps not only in visiting nodes but also in rebuilding the entire tree.

35 days consistent 🚀 On to Day 36.

#DSA #Arrays #BinaryTree #LeetCode #Java #ProblemSolving #DailyCoding #LearningInPublic #SoftwareDeveloper
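The approach above can be sketched in Java. Names are mine; the static `postIdx` cursor is one common way to consume the postorder array from the back:

```java
import java.util.HashMap;
import java.util.Map;

class BuildTree {
    static class TreeNode {
        int val;
        TreeNode left, right;
        TreeNode(int val) { this.val = val; }
    }

    static int postIdx;
    static final Map<Integer, Integer> inPos = new HashMap<>();

    static TreeNode buildTree(int[] inorder, int[] postorder) {
        inPos.clear();
        for (int i = 0; i < inorder.length; i++) inPos.put(inorder[i], i);
        postIdx = postorder.length - 1;          // postorder ends at the root
        return build(postorder, 0, inorder.length - 1);
    }

    static TreeNode build(int[] post, int inLo, int inHi) {
        if (inLo > inHi) return null;
        TreeNode root = new TreeNode(post[postIdx--]);
        int mid = inPos.get(root.val);           // O(1) root lookup via the HashMap
        root.right = build(post, mid + 1, inHi); // right first: postorder is L R Root
        root.left = build(post, inLo, mid - 1);
        return root;
    }
}
```

Building right before left is the crucial detail: walking postorder backwards yields Root, then the right subtree's nodes, then the left subtree's.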
Day 14 – DSA Journey | Arrays 🚀

Today’s problems focused on handling duplicates carefully 🔁 and matrix transformation logic 🔄.

𝐏𝐫𝐨𝐛𝐥𝐞𝐦𝐬 𝐒𝐨𝐥𝐯𝐞𝐝
• 🔢 Permutations II
• 🔄 Rotate Image

🔹 Permutations II

𝐀𝐩𝐩𝐫𝐨𝐚𝐜𝐡
• 📊 Sorted the array to group duplicates together
• ✅ Used a boolean used[] array to track chosen elements
• ⛔ Skipped duplicates using the condition i > 0 && nums[i] == nums[i-1] && !used[i-1]
• 🔁 Applied backtracking to build unique permutations

𝐊𝐞𝐲 𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠𝐬
• 🧠 Sorting simplifies duplicate handling
• 🎯 The duplicate-skipping condition is critical
• 🔄 The mark–explore–unmark pattern ensures correctness
• 📚 Understanding the recursion tree prevents repeated work

𝐂𝐨𝐦𝐩𝐥𝐞𝐱𝐢𝐭𝐲
• ⏱ Time: O(n × n!)
• 📦 Space: O(n)

🔹 Rotate Image

𝐀𝐩𝐩𝐫𝐨𝐚𝐜𝐡
• 🔁 Transposed the matrix (swap across the diagonal)
• ↔️ Reversed each row to achieve a 90° clockwise rotation
• 📌 Performed everything in-place

𝐊𝐞𝐲 𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠𝐬
• 🧠 Matrix rotation can be broken into simple steps
• ⚡ In-place operations reduce extra space
• 🔄 Transformations often combine smaller logical operations

𝐂𝐨𝐦𝐩𝐥𝐞𝐱𝐢𝐭𝐲
• ⏱ Time: O(n²)
• 📦 Space: O(1)

🧠 𝐓𝐚𝐤𝐞𝐚𝐰𝐚𝐲
Some problems require careful duplicate control 🔁, others require visualizing transformations 🔄. Both sharpen problem-solving in different ways.

On to Day 15 🚀

#DSA #Backtracking #Matrix #LeetCode #Java #ProblemSolving #DailyCoding #LearningInPublic #SoftwareDeveloper
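The Rotate Image approach (transpose, then reverse each row) is compact enough to sketch in full; the class name is mine:

```java
class RotateImage {
    // Rotate an n×n matrix 90° clockwise in place: transpose, then reverse rows.
    static void rotate(int[][] matrix) {
        int n = matrix.length;
        for (int i = 0; i < n; i++) {            // transpose across the main diagonal
            for (int j = i + 1; j < n; j++) {
                int t = matrix[i][j];
                matrix[i][j] = matrix[j][i];
                matrix[j][i] = t;
            }
        }
        for (int[] row : matrix) {               // reversing each row completes the rotation
            for (int l = 0, r = n - 1; l < r; l++, r--) {
                int t = row[l];
                row[l] = row[r];
                row[r] = t;
            }
        }
    }
}
```

Starting the inner transpose loop at j = i + 1 is what keeps the operation in place: each pair above the diagonal is swapped exactly once.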
Shrinking Data at the Bit Level: Building a Custom Huffman Compression Engine!

I recently wanted to look under the hood of how files are actually stored on our hard drives, so I built a lossless file compressor in Java from scratch. Instead of just simulating compression with strings of text, I engineered it to write raw, physically packed bytes to disk. By analyzing character frequencies and assigning variable-length binary codes, this engine successfully reduces standard text file sizes by nearly 50%!

The Technical Engine (Data Structures & Algorithms):
• Huffman Coding Algorithm: the core greedy algorithm that dynamically generates an optimal, prefix-free binary dictionary based on exact data frequencies.
• Priority Queue & Binary Trees: I utilized a PriorityQueue to efficiently extract minimum frequencies and build the Huffman Tree from the bottom up, maintaining O(N log N) time complexity.
• Bitwise Manipulation: the most challenging and rewarding part! I used bit-shifting operations (<<, |) to pack eight '1's and '0's into a single physical Java byte. This ensures the .bin output file legitimately consumes less physical disk space.
• Lossless Decompression: built the exact reverse tree-traversal logic to perfectly reconstruct the original file without losing a single character.

It is one thing to learn about data structures in theory, but seeing an actual .txt file physically shrink on your local drive is incredibly satisfying.

Check out the full source code and my bitwise I/O utility on GitHub: https://lnkd.in/d3GiJUSG

#Java #Algorithms #DataCompression #SoftwareEngineering #DataStructures #ComputerScience
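The tree-building core of such an engine can be sketched as follows. This is my own minimal sketch of the PriorityQueue + Huffman-tree step, not code from the linked repository, and it stops at producing the code table (the bit-packed file I/O is omitted):

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

class HuffmanCodes {
    static class Node {
        char ch;
        int freq;
        Node left, right;
        Node(char ch, int freq) { this.ch = ch; this.freq = freq; }
    }

    // Build the Huffman tree from character frequencies, then collect the codes.
    static Map<Character, String> buildCodes(Map<Character, Integer> freqs) {
        PriorityQueue<Node> pq =
                new PriorityQueue<>(Comparator.comparingInt(n -> n.freq));
        for (Map.Entry<Character, Integer> e : freqs.entrySet()) {
            pq.add(new Node(e.getKey(), e.getValue()));
        }
        while (pq.size() > 1) {                   // greedily merge the two rarest subtrees
            Node a = pq.poll(), b = pq.poll();
            Node parent = new Node('\0', a.freq + b.freq);
            parent.left = a;
            parent.right = b;
            pq.add(parent);
        }
        Map<Character, String> codes = new HashMap<>();
        collect(pq.poll(), "", codes);
        return codes;
    }

    static void collect(Node n, String code, Map<Character, String> codes) {
        if (n == null) return;
        if (n.left == null && n.right == null) {  // leaf: its path is its code
            codes.put(n.ch, code.isEmpty() ? "0" : code);
            return;
        }
        collect(n.left, code + "0", codes);
        collect(n.right, code + "1", codes);
    }
}
```

Because every character sits at a leaf, no code is a prefix of another, which is what makes the bitstream decodable without separators.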
After working through merge-based problems, I spent some time understanding Quick Sort.

Unlike Merge Sort, which splits first and merges later, Quick Sort works by placing one element (the pivot) in its correct position first, and then recursively sorting the parts around it. The key part of the algorithm is the partition step, where the pivot element is moved to the position where it would appear in the final sorted array.

Approach used:
- choose a pivot element
- count how many elements are smaller than the pivot
- place the pivot at its correct index
- rearrange elements so that the left side contains values ≤ pivot and the right side contains values > pivot
- recursively apply the same process to both sides

Core Quick Sort logic:

static void quickSort(int[] arr, int lo, int hi) {
    if (lo >= hi) return;
    int idx = partition(arr, lo, hi);
    quickSort(arr, lo, idx - 1);
    quickSort(arr, idx + 1, hi);
}

Things that became clear:
- the partition step controls the entire algorithm
- on average Quick Sort runs in O(n log n)
- the worst case can reach O(n²) when the pivot selection is poor
- the recursion depth depends on how balanced the partitions are

Understanding how the pivot finds its correct position made the whole algorithm much easier to follow.

#dsa #algorithms #quicksort #sorting #java #learninginpublic
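The counting-based partition described above can be sketched like this. The `partition` body is my reconstruction of that description (count smaller elements, drop the pivot at its final index, then fix the two sides), so treat it as illustrative:

```java
class QuickSortPartition {
    static void quickSort(int[] arr, int lo, int hi) {
        if (lo >= hi) return;
        int idx = partition(arr, lo, hi);
        quickSort(arr, lo, idx - 1);
        quickSort(arr, idx + 1, hi);
    }

    // Count elements <= pivot, place the pivot at that index, then sweep from
    // both ends swapping misplaced pairs across the pivot.
    static int partition(int[] arr, int lo, int hi) {
        int pivot = arr[lo], count = 0;
        for (int k = lo + 1; k <= hi; k++) {
            if (arr[k] <= pivot) count++;
        }
        int idx = lo + count;
        swap(arr, lo, idx);                    // pivot is now at its final position
        int i = lo, j = hi;
        while (i < idx && j > idx) {
            while (arr[i] <= pivot) i++;       // already on the correct side
            while (arr[j] > pivot) j--;
            if (i < idx && j > idx) swap(arr, i++, j--);
        }
        return idx;
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
}
```

For example, on {4, 1, 5, 2, 3} with pivot 4, three elements are smaller, so the pivot moves to index 3 and the sweep fixes the remaining sides.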
𝐒𝐨𝐥𝐯𝐞𝐝: 𝐄𝐯𝐚𝐥𝐮𝐚𝐭𝐞 𝐑𝐞𝐯𝐞𝐫𝐬𝐞 𝐏𝐨𝐥𝐢𝐬𝐡 𝐍𝐨𝐭𝐚𝐭𝐢𝐨𝐧 (𝐋𝐞𝐞𝐭𝐂𝐨𝐝𝐞 #𝟏𝟓𝟎)

Today I solved the 𝐄𝐯𝐚𝐥𝐮𝐚𝐭𝐞 𝐑𝐞𝐯𝐞𝐫𝐬𝐞 𝐏𝐨𝐥𝐢𝐬𝐡 𝐍𝐨𝐭𝐚𝐭𝐢𝐨𝐧 problem, a classic stack-based question that strengthens understanding of data structures and expression evaluation.

𝐏𝐫𝐨𝐛𝐥𝐞𝐦 𝐒𝐮𝐦𝐦𝐚𝐫𝐲:
Given an array of strings representing an arithmetic expression in Reverse Polish Notation (RPN), evaluate it and return the result.
✔ Operators: +, -, *, /
✔ Division truncates toward zero
✔ The expression is valid RPN (no division by zero)

𝐊𝐞𝐲 𝐂𝐨𝐧𝐜𝐞𝐩𝐭: Stack
Why a stack? Because RPN works on a simple principle: push operands onto the stack, and when an operator appears, pop the last two operands, apply the operation, and push the result back. This follows LIFO (Last In, First Out) behavior perfectly.

𝐀𝐩𝐩𝐫𝐨𝐚𝐜𝐡:
Traverse each token.
- If it’s a number → push it onto the stack.
- If it’s an operator → pop the top two elements, perform the operation, and push the result back.
The final value remaining on the stack is the answer.

Time Complexity: O(n)
Space Complexity: O(n)

Example:
Input → ["2","1","+","3","*"]
Output → 9
Explanation → ((2 + 1) * 3)

#LeetCode #DataStructures #Java #CodingInterview #ProblemSolving
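The approach above can be sketched in Java (class name is mine; `ArrayDeque` is used as the stack):

```java
import java.util.ArrayDeque;
import java.util.Deque;

class EvalRPN {
    // Evaluate a Reverse Polish Notation expression using an operand stack.
    static int evalRPN(String[] tokens) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (String t : tokens) {
            switch (t) {
                case "+": case "-": case "*": case "/": {
                    int b = stack.pop(), a = stack.pop(); // pop order matters for - and /
                    switch (t) {
                        case "+": stack.push(a + b); break;
                        case "-": stack.push(a - b); break;
                        case "*": stack.push(a * b); break;
                        default:  stack.push(a / b);      // Java int division truncates toward zero
                    }
                    break;
                }
                default:
                    stack.push(Integer.parseInt(t));      // operand
            }
        }
        return stack.pop();                               // single remaining value
    }
}
```

Note the pop order: the second pop is the left operand, so "6 2 /" evaluates 6 / 2, not 2 / 6.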
🚀 Day 47 Out of #365DaysOfCode - LeetCode

GitHub link: https://lnkd.in/gGUy_MKZ

Today I worked on a classic problem: converting an integer into its corresponding Excel column title.

At first glance, it looks like a simple base-26 conversion. But the interesting twist is that Excel columns are 1-indexed and have no zero character, so we need to decrement the number before applying the modulus operation to correctly map values to letters.

💡 Key Learnings:
- Handling custom base conversions
- Managing edge cases with non-zero indexing
- Efficient string building using StringBuilder
- Strengthening problem-solving fundamentals

⏱️ Time Complexity: O(log₂₆ n)

#Java #DataStructures #Algorithms #ProblemSolving #CodingPractice
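The decrement-before-mod trick can be sketched in a few lines (class name is mine; the method signature follows the usual LeetCode 168 convention):

```java
class ExcelColumn {
    // Convert a 1-based column number to its Excel title: 1→A, 26→Z, 27→AA, 28→AB.
    static String convertToTitle(int columnNumber) {
        StringBuilder sb = new StringBuilder();
        while (columnNumber > 0) {
            columnNumber--;                       // shift to 0-based: no zero digit in Excel
            sb.append((char) ('A' + columnNumber % 26));
            columnNumber /= 26;
        }
        return sb.reverse().toString();           // digits were produced least-significant first
    }
}
```

Without the decrement, 26 would map to "A0"-style nonsense instead of "Z", because a plain base-26 system has a zero digit and Excel titles do not.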