Day 22 of my #30DayCodeChallenge: The Art of Backtracking!

The Problem: Permutations. Given an array of distinct integers, return all possible arrangements. This is a foundational problem for understanding how to explore a "state space."

The Logic: I approached this with a recursive backtracking strategy, using a boolean visited array to keep track of used elements.
1. State Exploration: A DFS function builds a permutation one element at a time. For every position, we "decide" which available number to place there.
2. The Visited Array: To ensure we don't reuse the same number within a single permutation, I maintained a boolean vis array. This acts as our "memory" of what's already in the current path.
3. Backtracking (The "Undo" Step): This is the magic. After exploring a path (e.g., [1, 2, 3]), we pop the last element and mark it as "unvisited" so it can be used in a different position on the next branch (e.g., [1, 3, 2]).

The Goal: When the recursion depth equals the array length, we've found a valid permutation and add it to the final list.

One step closer to mastery. Onward to Day 23! #Java #Algorithms #DataStructures #Backtracking #Recursion #150DaysOfCode
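The logic above, sketched as a minimal Java implementation (class and helper names are mine, not from the original post):

```java
import java.util.ArrayList;
import java.util.List;

public class Permutations {

    public static List<List<Integer>> permute(int[] nums) {
        List<List<Integer>> result = new ArrayList<>();
        dfs(nums, new boolean[nums.length], new ArrayList<>(), result);
        return result;
    }

    private static void dfs(int[] nums, boolean[] vis,
                            List<Integer> path, List<List<Integer>> result) {
        // Recursion depth equals the array length: a full permutation is built.
        if (path.size() == nums.length) {
            result.add(new ArrayList<>(path));
            return;
        }
        for (int i = 0; i < nums.length; i++) {
            if (vis[i]) continue;         // already used in the current path
            vis[i] = true;                // decide: place nums[i] at this position
            path.add(nums[i]);
            dfs(nums, vis, path, result);
            path.remove(path.size() - 1); // undo: pop the last element
            vis[i] = false;               // mark "unvisited" for the next branch
        }
    }
}
```

For three distinct integers this produces all 3! = 6 orderings.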
Java Permutations with Backtracking Algorithm
More Relevant Posts
-
🚀 LeetCode Challenge 29/50

💡 Approach: Long Division + HashMap (Cycle Detection)
Simulating long division by hand! The key challenge is detecting when the decimal starts repeating — and a HashMap tracking remainder positions is exactly what catches it!

🔍 Key Insight:
→ Handle sign separately using XOR on numerator & denominator signs
→ Cast to long to avoid integer overflow edge cases
→ Compute integer part first, then handle remainder
→ Store each remainder → its position in the result
→ If a remainder is seen again → repeating cycle found — wrap it in parentheses!

📈 Complexity:
✅ Time: O(denominator) — remainders cycle within the denominator range
✅ Space: O(denominator) — the HashMap stores at most denominator - 1 remainders

Math and algorithms go hand in hand — the long division you learned in school just became a HashMap problem! 🧑‍💻

#LeetCode #DSA #HashMap #Java #ADA #PBL2 #LeetCodeChallenge #Day29of50 #CodingJourney #ComputerEngineering #AlgorithmDesign #FractionToDecimal
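The steps above can be sketched like this in Java (the method signature follows LeetCode 166; the class name is my own):

```java
import java.util.HashMap;
import java.util.Map;

public class FractionToDecimal {

    public static String fractionToDecimal(int numerator, int denominator) {
        if (numerator == 0) return "0";
        StringBuilder sb = new StringBuilder();
        // Handle the sign separately: XOR of the two signs.
        if ((numerator < 0) ^ (denominator < 0)) sb.append('-');
        // Cast to long to avoid overflow (e.g. Integer.MIN_VALUE).
        long num = Math.abs((long) numerator);
        long den = Math.abs((long) denominator);
        sb.append(num / den);            // integer part first
        long rem = num % den;
        if (rem == 0) return sb.toString();
        sb.append('.');
        // Map each remainder to the index where its digit starts.
        Map<Long, Integer> seen = new HashMap<>();
        while (rem != 0) {
            if (seen.containsKey(rem)) { // repeating cycle found
                sb.insert(seen.get(rem), "(");
                sb.append(')');
                break;
            }
            seen.put(rem, sb.length());
            rem *= 10;
            sb.append(rem / den);
            rem %= den;
        }
        return sb.toString();
    }
}
```

For example, 4/333 yields "0.(012)".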
-
Day 20 of my #30DayCodeChallenge: Efficiency through Pruning!

Today's challenge was Word Search II, a complex puzzle that tests how you manage massive search spaces.

The Logic:
1. Trie Integration: I stored the dictionary in a Trie so prefixes can be checked in O(1) per character.
2. DFS & Backtracking: Explored the grid cell by cell, but with a twist...
3. Intelligent Pruning: If a path doesn't match any Trie prefix, the search stops immediately. This turns an exponential problem into something much more manageable.

Coding isn't just about finding the answer; it's about finding it before your timer runs out!

#Java #DataStructures #Backtracking #Trie #Algorithms #CleanCode #150DaysOfCode
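A condensed sketch of the Trie + pruned DFS described above (all names are illustrative; it assumes lowercase a-z words, per the LeetCode constraints):

```java
import java.util.ArrayList;
import java.util.List;

public class WordSearchII {

    static class TrieNode {
        TrieNode[] next = new TrieNode[26];
        String word; // non-null only at the end of a dictionary word
    }

    public static List<String> findWords(char[][] board, String[] words) {
        // 1. Store the dictionary in a Trie.
        TrieNode root = new TrieNode();
        for (String w : words) {
            TrieNode node = root;
            for (char c : w.toCharArray()) {
                int i = c - 'a';
                if (node.next[i] == null) node.next[i] = new TrieNode();
                node = node.next[i];
            }
            node.word = w;
        }
        // 2. DFS from every cell, guided by the Trie.
        List<String> found = new ArrayList<>();
        for (int r = 0; r < board.length; r++)
            for (int c = 0; c < board[0].length; c++)
                dfs(board, r, c, root, found);
        return found;
    }

    private static void dfs(char[][] board, int r, int c,
                            TrieNode node, List<String> found) {
        if (r < 0 || r >= board.length || c < 0 || c >= board[0].length) return;
        char ch = board[r][c];
        // 3. Pruning: stop immediately if no dictionary word has this prefix.
        if (ch == '#' || node.next[ch - 'a'] == null) return;
        node = node.next[ch - 'a'];
        if (node.word != null) {
            found.add(node.word);
            node.word = null; // avoid reporting the same word twice
        }
        board[r][c] = '#';    // mark this cell visited on the current path
        dfs(board, r + 1, c, node, found);
        dfs(board, r - 1, c, node, found);
        dfs(board, r, c + 1, node, found);
        dfs(board, r, c - 1, node, found);
        board[r][c] = ch;     // backtrack
    }
}
```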
-
Unlocking the hidden superpower of Binary Search Trees! 🔓🌲

Kth Smallest Element in a BST - LeetCode 230 - Medium (Blind 75)

If you want to find the "kth smallest" number in a normal array or a standard tree, you usually have to gather all the numbers and sort them first. But a Binary Search Tree (BST) has a built-in superpower: it already stores data in a naturally sorted way!

The Inorder Secret: The secret to unlocking a BST is a specific way of walking through it called "Inorder Traversal" (left child -> current node -> right child). If you read a BST this way, you visit the nodes in perfectly ascending order. So, instead of collecting all the nodes, I just started counting them as I walked:
1. Go as far left as possible (to the absolute smallest number).
2. Start walking back up, visiting nodes and increasing a `count`.
3. The exact moment `count == k`, we have found our target! Save it to `result` and stop.

Key Learnings:
1) Inorder = Sorted: An inorder traversal of a valid BST always processes the nodes in strictly increasing order. This is a massive cheat code for BST problems.
2) Early Exit: We don't need to traverse the entire tree. By tracking the count, we can stop the moment we hit our target, saving valuable processing time.
3) State Management: Using class variables (`self.count` and `self.result`) is a clean way to maintain state across multiple recursive calls without passing them down as arguments every single time.

Time and Space Complexity:
Time Complexity: O(H + k) — where H is the height of the tree. We take O(H) time to reach the leftmost leaf, then O(k) time to find the kth element.
Space Complexity: O(H) — the call stack holds at most H nodes (the height of the tree).

Shifting from standard Binary Trees to Binary Search Trees makes you appreciate how data structure design naturally solves problems for you. What is your favorite BST property?
👇 #LeetCode #BinarySearchTree #Blind75 #DataStructures #Python #Recursion #Algorithms #TechInterviews #SoftwareEngineering #MCAFreshers #CodingJourney
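The post describes a Python solution; the same counting inorder traversal, sketched in Java (class and field names are illustrative):

```java
public class KthSmallestBST {

    static class TreeNode {
        int val;
        TreeNode left, right;
        TreeNode(int val) { this.val = val; }
    }

    private int count;   // state shared across recursive calls
    private int result;

    public int kthSmallest(TreeNode root, int k) {
        count = 0;
        inorder(root, k);
        return result;
    }

    private void inorder(TreeNode node, int k) {
        if (node == null || count >= k) return; // early exit once found
        inorder(node.left, k);   // go as far left as possible first
        if (++count == k) {      // the moment count == k, we have the answer
            result = node.val;
            return;
        }
        inorder(node.right, k);
    }
}
```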
-
🚀 Day 55 of #100DaysOfCode – Rotate Image

Today I worked on an interesting matrix problem: Rotate Image (90° clockwise) 🔄

💡 Key Learning: Instead of using extra space, the challenge is to rotate the matrix in-place.

🧠 Approach I used:
1️⃣ Transpose the matrix (convert rows → columns)
2️⃣ Reverse each row

This combination effectively rotates the matrix by 90° clockwise without using extra memory.

📌 Example:
Input: [[1,2,3], [4,5,6], [7,8,9]]
Output: [[7,4,1], [8,5,2], [9,6,3]]

⚡ Complexity:
Time: O(n²)
Space: O(1) (in-place)

💻 Implemented in Java and successfully passed all test cases ✅

This problem really helped me strengthen my understanding of matrix manipulation and in-place algorithms.

#LeetCode #DataStructures #Java #CodingPractice #ProblemSolving #Algorithms #100DaysOfCode
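The two steps above, as an in-place Java sketch (the class name is mine):

```java
public class RotateImage {

    public static void rotate(int[][] matrix) {
        int n = matrix.length;
        // 1. Transpose: swap matrix[i][j] with matrix[j][i] above the diagonal.
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                int tmp = matrix[i][j];
                matrix[i][j] = matrix[j][i];
                matrix[j][i] = tmp;
            }
        }
        // 2. Reverse each row to complete the 90° clockwise rotation.
        for (int[] row : matrix) {
            for (int l = 0, r = n - 1; l < r; l++, r--) {
                int tmp = row[l];
                row[l] = row[r];
                row[r] = tmp;
            }
        }
    }
}
```

Running it on the example input [[1,2,3],[4,5,6],[7,8,9]] yields [[7,4,1],[8,5,2],[9,6,3]].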
-
🚀 Solved LeetCode 215 – Kth Largest Element in an Array

Today I worked on a classic and highly important problem that tests understanding of Heap, Sorting, and Quick Select algorithms.

🔍 Problem Insight: Given an unsorted array, the task is to find the kth largest element without fully sorting the array.

💡 Key Learnings:
Sorting works but is not the most optimal approach
A Min Heap (Priority Queue) helps maintain the k largest elements efficiently
Quick Select provides an optimal average-case solution
Understanding the trade-offs between approaches is crucial

⚙️ Approaches Explored:
Sorting → Simple but takes O(n log n)
Min Heap → Efficient with O(n log k)
Quick Select → Best average time complexity, O(n)

📈 Complexity:
Heap approach: O(n log k)
Quick Select: O(n) average, O(n²) worst case

🔥 Takeaway: This problem is a must-know for interviews, as it builds a strong foundation in selection algorithms and data structures like heaps. Choosing the right approach based on constraints is the real skill here.

#LeetCode #Algorithms #DataStructures #Heap #QuickSelect #CodingJourney #ProblemSolving #Java #CompetitiveProgramming
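A sketch of the min-heap approach mentioned above: keep only the k largest elements seen so far in a size-k priority queue (class name is mine):

```java
import java.util.PriorityQueue;

public class KthLargest {

    public static int findKthLargest(int[] nums, int k) {
        // Java's PriorityQueue is a min-heap by default.
        PriorityQueue<Integer> minHeap = new PriorityQueue<>();
        for (int num : nums) {
            minHeap.offer(num);
            if (minHeap.size() > k) minHeap.poll(); // evict the smallest, O(log k)
        }
        // The heap root is the smallest of the k largest = the kth largest.
        return minHeap.peek();
    }
}
```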
-
Day 26 of my #30DayCodeChallenge: The Power of Local vs. Global Optima!

The Problem: Maximum Subarray. Given an integer array, find the contiguous subarray with the largest sum. It sounds simple, but the challenge is handling the trade-off between including negative numbers and starting fresh.

The Logic: This solution utilizes Kadane's Algorithm, a masterclass in dynamic programming where we only care about two values at any given moment:
1. Local Maximum (f): At each index, we decide: "Is it better to add the current number to our existing streak, or to start a brand-new streak from this number?" f = max(current_element, f + current_element)
2. Global Maximum (ans): We keep a running record of the best "Local Maximum" we've seen so far.

The "Greedy" Choice: The core intuition is that if a previous subarray sum becomes negative, it can only decrease the sum of any future subarray. In that case, we "greedily" reset our starting point.

Efficiency:
Time Complexity: O(n) - we pass through the array only once.
Space Complexity: O(1) - no extra arrays needed, just two variables.

One step closer to mastery. Onward to Day 27! #Java #Algorithms #DataStructures #LeetCode #ProblemSolving #150DaysOfCode
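Kadane's algorithm as described above, in a short Java sketch using the post's variable names f and ans:

```java
public class MaximumSubarray {

    public static int maxSubArray(int[] nums) {
        int f = nums[0];    // best sum of a subarray ending at the current index
        int ans = nums[0];  // best sum seen so far
        for (int i = 1; i < nums.length; i++) {
            // Extend the existing streak, or start fresh at nums[i].
            f = Math.max(nums[i], f + nums[i]);
            ans = Math.max(ans, f);
        }
        return ans;
    }
}
```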
-
Day 19 of my #30DayCodeChallenge: Mastering Backtracking!

The Problem: Combination Sum. Find all unique combinations of candidates that sum up to a specific target, where each number can be used an unlimited number of times.

The Logic: This problem is a perfect example of how Recursion and Backtracking allow us to explore multiple paths while pruning the ones that don't lead to a solution.
1. State-Space Exploration: I used a recursive approach to explore every possible combination. By passing the current index forward, we ensure we don't pick the same set of numbers in a different order, keeping the results unique.
2. The "Unlimited" Choice: Unlike some problems where you move to the next element immediately, here we have the option to stay on the same element to satisfy the "unlimited times" constraint, as long as the remaining target allows it.
3. Backtracking & Pruning: If the current sum exceeds the target, we stop exploring that branch (backtrack). This keeps the algorithm efficient even as the depth of the recursion grows.

One more step toward algorithmic mastery. Onward to Day 20! #Java #Algorithms #DataStructures #Backtracking #ProblemSolving #150DaysOfCode #SoftwareEngineering #LeetCode
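The three ideas above in a compact Java sketch (class and helper names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class CombinationSum {

    public static List<List<Integer>> combinationSum(int[] candidates, int target) {
        List<List<Integer>> result = new ArrayList<>();
        dfs(candidates, 0, target, new ArrayList<>(), result);
        return result;
    }

    private static void dfs(int[] candidates, int start, int remaining,
                            List<Integer> path, List<List<Integer>> result) {
        if (remaining == 0) {             // exact target reached
            result.add(new ArrayList<>(path));
            return;
        }
        if (remaining < 0) return;        // prune: the sum exceeded the target
        for (int i = start; i < candidates.length; i++) {
            path.add(candidates[i]);
            // Pass i (not i + 1): the same element may be reused.
            dfs(candidates, i, remaining - candidates[i], path, result);
            path.remove(path.size() - 1); // backtrack
        }
    }
}
```

For candidates [2,3,6,7] and target 7, this finds exactly two combinations: [2,2,3] and [7].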
-
Day 71/100 | #100DaysOfDSA 🧩🚀

Today's problem: Maximum Subarray Min-Product
A powerful problem combining prefix sums and a monotonic stack.

Problem idea: For every subarray, calculate (minimum element × sum of subarray) and find the maximum value.

Key idea: Monotonic stack + prefix-sum optimization.

Why?
• Brute force would be too slow (O(n²))
• We need to efficiently find the range where each element is the minimum
• A prefix sum computes any subarray sum in O(1)

How it works:
• Build the prefix-sum array
• Use a monotonic increasing stack
• For each element, find its left & right boundary
• Treat each element as the minimum of a subarray
• Compute the subarray sum using the prefix array
• Multiply by the element → track the maximum

Time Complexity: O(n)
Space Complexity: O(n)

Big takeaway: A monotonic stack solves "nearest smaller element" problems efficiently, which unlocks many advanced patterns. 🔥

Day 71 done. 🚀

#100DaysOfCode #LeetCode #DSA #Algorithms #MonotonicStack #PrefixSum #Java #CodingJourney #ProblemSolving #InterviewPrep #TechCommunity
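A Java sketch of the prefix-sum + monotonic-stack recipe above (this matches LeetCode 1856, where the final answer is taken modulo 1e9 + 7; class and variable names are mine):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class MaxMinProduct {

    public static int maxSumMinProduct(int[] nums) {
        int n = nums.length;
        // Prefix sums: sum of nums[l..r] = prefix[r + 1] - prefix[l], in O(1).
        long[] prefix = new long[n + 1];
        for (int i = 0; i < n; i++) prefix[i + 1] = prefix[i] + nums[i];

        // Monotonic increasing stack of indices. When nums[j] is popped,
        // the current index i is its first smaller element to the right,
        // and the element below it on the stack bounds it on the left.
        Deque<Integer> stack = new ArrayDeque<>();
        long best = 0;
        for (int i = 0; i <= n; i++) {
            int cur = (i == n) ? 0 : nums[i]; // sentinel flushes the stack
            while (!stack.isEmpty() && nums[stack.peek()] > cur) {
                int j = stack.pop(); // nums[j] is the minimum of this range
                int left = stack.isEmpty() ? 0 : stack.peek() + 1;
                long sum = prefix[i] - prefix[left];
                best = Math.max(best, (long) nums[j] * sum);
            }
            stack.push(i);
        }
        // Modulo is applied only to the final answer, not during comparison.
        return (int) (best % 1_000_000_007L);
    }
}
```

For nums = [1,2,3,2], the best subarray is [2,3,2]: minimum 2 × sum 7 = 14.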
-
I added more threads to fix my pipeline. Throughput dropped.

That was the moment I realized I had been debugging the wrong thing for days.

We were processing ~8 million events in a strict time window. The system was falling behind. My first instinct: more threads, more parallelism. Classic move.

But the system wasn't compute-bound. Every event was triggering datastore lookups, config reads, network calls. The threads weren't competing for CPU. They were all just waiting, and I gave them more company.

Python 3.13 introduced a free-threaded build where the GIL can be disabled. True parallel execution across cores. No more serialization. A lot of engineers read that and thought: "Finally, Python is fixed." It's not that simple.

Your system's throughput is capped by three things, and free-threading only addresses one:
→ I/O ceiling - how fast your external dependencies respond
→ Thread-overhead ceiling - context-switching cost beyond the optimal thread count
→ Execution ceiling - where the GIL used to apply

Removing the GIL lifts the execution ceiling. If your system is sitting behind the I/O ceiling, nothing moves.

What actually fixed my pipeline wasn't threads or Python versions. It was pulling config data out of the critical path and caching state locally. One design decision. More impact than any concurrency change.

Removing the GIL doesn't remove bad architecture.

Full breakdown with the three-ceiling model and where free-threading genuinely helps: link in comments.

#Python #SystemDesign #BackendEngineering #SoftwareEngineering #DistributedSystems #Concurrency
-
Day 24 of my #30DayCodeChallenge: The Art of In-Place Transformation!

The Problem: Rotate Image (90 Degrees Clockwise). How do you rotate an n × n matrix without using extra space? The challenge isn't just the rotation; it's doing it in-place, maintaining O(1) extra space complexity.

The Logic: Instead of trying to move every element to its final destination in one leap, this problem is best solved by breaking it down into two simple geometric transformations:
1. The Transpose: First, I flipped the matrix over its main diagonal. This turns all rows into columns (and vice versa). Mathematically, we swap matrix[i][j] with matrix[j][i].
2. The Reflection: Once transposed, the image appears "sideways." To fix the orientation for a clockwise rotation, I reversed each row. This horizontal flip moves the columns into their correct 90-degree positions.

One step closer to mastery. Onward to Day 25! #Java #Algorithms #DataStructures #Matrix #ProblemSolving #150DaysOfCode #SoftwareEngineering #LeetCode