Day 30/100 — Solved "Maximum Depth of N-ary Tree" today. A simple problem on the surface, but a great exercise to reinforce recursive thinking and traversal strategies.

Approach 1: Depth-First Search (DFS)
Used recursion to explore each branch of the tree as deep as possible. For every node, I computed the depth of all its children and took the maximum among them. The final answer for each node becomes 1 + max depth of its children.
Base cases handled:
- If the node is null → depth = 0
- If the node has no children → depth = 1
This approach is intuitive and closely follows the definition of depth itself.

Approach 2: Breadth-First Search (BFS)
Traversed the tree level by level using a queue. Each iteration processes one full level of nodes, and the depth counter increments after finishing each level. This approach is useful when thinking in terms of layers rather than paths.

Complexity: Both approaches run in O(n) time since every node is visited once. Space complexity differs: the recursion stack for DFS grows with the tree's height, while the queue for BFS grows with its widest level.

Key takeaway: Understanding both DFS and BFS gives flexibility in tackling tree problems from different perspectives—path-based vs level-based thinking.

#Day30 #100DaysOfCode #DSA #LeetCode #Python #CodingJourney #Algorithms #DataStructures #ProblemSolving
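The post describes both traversals but doesn't include code, so here is a minimal sketch of each in Python, assuming LeetCode's N-ary `Node` shape (`val` plus a `children` list):

```python
from collections import deque

class Node:
    def __init__(self, val=None, children=None):
        self.val = val
        self.children = children or []

def max_depth_dfs(root):
    # Depth of a node = 1 + the maximum depth among its children.
    if root is None:
        return 0            # null node
    if not root.children:
        return 1            # leaf node
    return 1 + max(max_depth_dfs(child) for child in root.children)

def max_depth_bfs(root):
    # Process one full level per iteration; count the levels.
    if root is None:
        return 0
    depth = 0
    queue = deque([root])
    while queue:
        for _ in range(len(queue)):   # drain exactly one level
            node = queue.popleft()
            queue.extend(node.children)
        depth += 1
    return depth
```

Both visit every node once, matching the O(n) time noted above.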
Unlocking the hidden superpower of Binary Search Trees! 🔓🌲
Kth Smallest Element in a BST - LeetCode 230 - Medium (Blind 75)

If you want to find the "kth smallest" number in a normal array or a standard tree, you usually have to gather all the numbers and sort them first. But a Binary Search Tree (BST) has a built-in superpower: it already stores data in a naturally sorted way!

The Inorder Secret: The secret to unlocking a BST is a specific way of walking through it called "Inorder Traversal" (Left child -> Current Node -> Right child). If you read a BST this way, you visit the nodes in perfectly ascending order. So, instead of collecting all the nodes, I just started counting them as I walked:
1. Go as far left as possible (to the absolute smallest number).
2. Start walking back up, visiting nodes and increasing a `count`.
3. The exact moment `count == k`, we have found our target! Save it to `result` and stop.

Key Learnings:
1) Inorder = Sorted: Always remember that an Inorder Traversal of a valid BST will always process the nodes in strictly increasing order. This is a massive cheat code for BST problems.
2) Early Exit: We don't need to traverse the entire tree. By tracking the count, we can stop the moment we hit our target, saving valuable processing time.
3) State Management: Using class variables (`self.count` and `self.result`) is a clean way to maintain state across multiple recursive calls without having to pass them down as arguments every single time.

Time and Space Complexity:
Time Complexity: O(H + k) — Where H is the height of the tree. We take O(H) time to reach the leftmost leaf, and then O(k) time to find the kth element.
Space Complexity: O(H) — The call stack memory will at most hold H nodes (the height of the tree).

Shifting from standard Binary Trees to Binary Search Trees makes you appreciate how data structure design naturally solves problems for you. What is your favorite BST property?
👇 #LeetCode #BinarySearchTree #Blind75 #DataStructures #Python #Recursion #Algorithms #TechInterviews #SoftwareEngineering #MCAFreshers #CodingJourney
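A minimal sketch of the counting inorder walk described above, using the `self.count` / `self.result` state the post mentions (`TreeNode` follows the standard LeetCode definition):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

class Solution:
    def kthSmallest(self, root, k):
        self.count = 0      # nodes visited so far, in sorted order
        self.result = None  # set the moment count == k
        self._inorder(root, k)
        return self.result

    def _inorder(self, node, k):
        # Early exit: stop descending once the answer is found.
        if node is None or self.result is not None:
            return
        self._inorder(node.left, k)        # left subtree first
        if self.result is not None:
            return
        self.count += 1                    # "visit" the current node
        if self.count == k:
            self.result = node.val
            return
        self._inorder(node.right, k)       # then the right subtree
```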
Day 22/100 – DSA Journey
Problem: Find Mode in Binary Search Tree (BST)

Today’s problem focused on understanding how Binary Search Trees (BST) behave and how we can efficiently extract useful insights from them.

Understanding the BST
A Binary Search Tree follows a structured property:
- Left subtree → values ≤ root
- Right subtree → values ≥ root
Because of this, when we perform an Inorder Traversal (Left → Root → Right), the values are visited in sorted order.

Why Inorder Traversal?
Since duplicates appear consecutively in a sorted sequence, inorder traversal allows us to:
- Track frequency without using extra space
- Compare the current value with the previous value
- Efficiently determine the most frequent element (mode)

Approach Used
Traverse the BST using inorder traversal, maintaining:
- Previous value
- Current count
- Maximum frequency
Update the result list whenever a new maximum frequency is found.

Key Learning
This problem highlights how leveraging tree properties can help optimize solutions. Instead of using extra space (like hashmaps), we used traversal behavior to achieve an efficient solution.

Conclusion
Understanding the underlying structure of data (like BST properties) is often more powerful than brute-force approaches. Smart traversal choices can significantly reduce space complexity and improve performance.

#Day22 #100DaysOfCode #DSA #BinarySearchTree #Python #CodingJourney #LeetCode #ProblemSolving
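A sketch of the inorder mode-counting approach described above, assuming a standard `TreeNode` class (no hashmap; only the previous value, current count, and max frequency are tracked, though the recursion stack itself is still O(H)):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def find_mode(root):
    modes = []
    state = {"prev": None, "count": 0, "max_count": 0}

    def inorder(node):
        if node is None:
            return
        inorder(node.left)
        # Duplicates are adjacent in sorted order, so a run of equal
        # values can be counted by comparing against the previous value.
        state["count"] = state["count"] + 1 if node.val == state["prev"] else 1
        state["prev"] = node.val
        if state["count"] > state["max_count"]:
            state["max_count"] = state["count"]
            modes.clear()            # new maximum: discard old candidates
            modes.append(node.val)
        elif state["count"] == state["max_count"]:
            modes.append(node.val)   # ties with the current maximum
        inorder(node.right)

    inorder(root)
    return modes
```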
🚀 Day 22 of Consistency | #75DaysLeetCodeChallenge

🧠 Today’s Problem: Daily Temperatures (#739)

💡 Key Learning: This problem introduces the concept of a monotonic stack, useful for solving “next greater element” type problems efficiently.

⚡ Approach:
- Use a stack to store indices of temperatures
- Traverse the array → while current temp > stack top temp → pop & calculate days difference
- Store result for popped indices
- Push current index to stack

🧠 Why this works:
- Monotonic stack keeps decreasing order
- Helps find next greater element in O(n)
- Avoids brute force O(n²)

🔥 Result: ✔️ Runtime: 116 ms (Beats 28.07%)

📈 A very important pattern for stack-based optimization problems. A big thanks to Shivam Singh, Nikhil Yadav & Akshat Tiwari for this amazing challenge 🙌

Consistency is compounding. Keep going. 💪

#Day22 #LeetCode #DSA #CodingJourney #100DaysOfCode #Python #Stack #Consistency
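The steps above can be sketched in a few lines; each index is pushed and popped at most once, which is where the O(n) bound comes from:

```python
def daily_temperatures(temperatures):
    answer = [0] * len(temperatures)
    stack = []  # indices of days still waiting for a warmer day (temps decreasing)
    for i, temp in enumerate(temperatures):
        # Current temp resolves every colder day sitting on the stack.
        while stack and temperatures[stack[-1]] < temp:
            j = stack.pop()
            answer[j] = i - j   # days waited
        stack.append(i)
    return answer               # unresolved indices keep their default 0
```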
Your dataset has 500 features. Your model only needs 20. The other 480 are noise, redundancy, or both — slowing down training and hurting accuracy. We broke down the 3 algorithms you actually need:

Slide 1: PCA — linear, interpretable, fast. Your default.
Slide 2: t-SNE — nonlinear, beautiful for visualization, slow on large data
Slide 3: UMAP — modern, 10x faster than t-SNE, preserves local + global structure
Slide 4: When to use which (decision tree with 4 questions)
Slide 5: The common trap: t-SNE axes are NOT features. You can't use them as inputs to a model.
Slide 6: Free notebook with all 3 on the same dataset — see the differences yourself

Free notebook with side-by-side code for all three: https://lnkd.in/gcbS7m-m

If you've been using PCA as a black box, this upgrades you.

#MachineLearning #DataScience #PCA #UMAP #DimensionalityReduction #UnsupervisedLearning #Python #Sklearn
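As a rough illustration of the PCA slide (not the linked notebook), here is PCA from scratch with NumPy — centering plus an SVD — on synthetic data where only 3 of 50 features carry signal; the first 3 components recover nearly all the variance:

```python
import numpy as np

def pca(X, n_components):
    # PCA = center the data, take the SVD, project onto the
    # top right-singular vectors (the principal axes).
    X_centered = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    explained_variance = (S ** 2) / (len(X) - 1)
    ratio = explained_variance / explained_variance.sum()
    return X_centered @ Vt[:n_components].T, ratio[:n_components]

# Scaled-down version of the "500 features, 20 matter" scenario:
# 50 features, but only 3 underlying signal directions plus tiny noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 50)) \
    + 0.01 * rng.normal(size=(200, 50))
X_reduced, ratio = pca(X, 3)
```

Unlike t-SNE output, these components are valid model inputs, which is the trap Slide 5 warns about.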
A 10 million document RAG dataset occupies 31 GB of RAM at float32. turbovec fits it in just 4 GB - and now it searches it faster than FAISS.

I just shipped a new release of turbovec: a Rust vector index with Python bindings, built on Google Research's TurboQuant algorithm. Data-oblivious 2-4 bit quantization that matches the Shannon lower bound on distortion - zero training and no rebuilds when the corpus grows.

What's in the box:
→ Hand-written SIMD kernels - 12–20% faster than FAISS FastScan on ARM; match-or-beat on x86.
→ O(1) stable-id delete and save/load. The corpus is live and mutable, not a static snapshot.
→ Drop-in integrations for LangChain, LlamaIndex, and Haystack.
→ Published benchmarks (recall, speed, compression) at d=200/1536/3072 — every number reproducible from the repo.

If you're building RAG where memory, latency, or privacy matters, give it a spin.

GitHub: https://lnkd.in/e5M4dVRk
Paper: https://lnkd.in/eHRmpYms

#RAG #VectorSearch #OpenSource #Rust #Python #LLM #Gemma4
🚀 Day 38 of My Problem Solving Journey

Today I solved the problem: Valid Parentheses.

🔹 Implemented an approach using string replacement to repeatedly remove valid pairs like "()", "{}", and "[]".
🔹 Learned how to simplify the problem by eliminating balanced brackets step by step.
🔹 Practiced working with string manipulation and loop conditions.
🔹 Understood how this problem can also be solved efficiently using a stack-based approach.

💡 Key takeaway: By continuously removing valid pairs, we can check if the string becomes empty. However, using a stack is a more optimal and scalable solution for this problem.

📌 Example:
Input: "()" → Output: true
Input: "(]" → Output: false

This problem improved my understanding of stacks, string operations, and pattern matching techniques. On to the next challenge! 💪🔥

#Day38 #CodingJourney #Python #ProblemSolving #DataStructures #Algorithms #LeetCode
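Both approaches mentioned in the post can be sketched side by side; the replacement version is quadratic in the worst case because each pass rebuilds the string, while the stack version is linear:

```python
def is_valid_replace(s):
    # Repeatedly delete adjacent valid pairs until none remain.
    # O(n^2) worst case: each pass scans and rebuilds the string.
    while "()" in s or "{}" in s or "[]" in s:
        s = s.replace("()", "").replace("{}", "").replace("[]", "")
    return s == ""

def is_valid_stack(s):
    # O(n): push openers, pop and match on each closer.
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in s:
        if ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
        else:
            stack.append(ch)
    return not stack  # valid only if every opener was matched
```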
🚀 Stop iterating through rows like it’s 2010.

In a recent pipeline, we were processing 5 million records to calculate a rolling score. Using a standard loop took forever and pegged the CPU at 100%.

Before optimisation:
for i in range(len(df)):
    df.at[i, 'score'] = df.at[i, 'val'] * 1.05 if df.at[i, 'flag'] else df.at[i, 'val']

After optimisation:
import numpy as np
df['score'] = np.where(df['flag'], df['val'] * 1.05, df['val'])

Performance gain: 85x faster execution.

Vectorisation isn’t just a "nice to have"—it’s the difference between a pipeline that crashes at 2 AM and one that finishes in seconds. By letting NumPy handle the heavy lifting in C, we eliminated the Python overhead entirely.

If you're still using `.iterrows()` or manual loops for column transformations, it’s time to refactor. The performance delta on large datasets is simply too massive to ignore.

What is the biggest "bottleneck" function you’ve refactored recently that gave you a massive speedup?

#DataEngineering #Python #PerformanceTuning #Vectorization #DataScience
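A runnable, scaled-down version of the same before/after comparison — the three-row DataFrame here is a hypothetical stand-in for the 5-million-record pipeline, and both paths are checked against each other:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "val": [10.0, 20.0, 30.0],
    "flag": [True, False, True],
})

# Before: one Python-level iteration per record.
slow = df["val"].copy()
for i in range(len(df)):
    slow.iat[i] = df["val"].iat[i] * 1.05 if df["flag"].iat[i] else df["val"].iat[i]

# After: a single np.where call, evaluated column-wise in C.
df["score"] = np.where(df["flag"], df["val"] * 1.05, df["val"])

# Both paths produce identical results; only the speed differs.
assert (df["score"] == slow).all()
```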
Vector databases are great, but they aren't always the right tool for complex document intelligence. 🧠📉 If you are tired of context fragmentation and untraceable LLM hallucinations, it is time to look at Vectorless RAG with Page Index. By swapping out mathematical embeddings for a reasoning-based, hierarchical document tree, you can achieve upwards of 98% accuracy on complex Q&A tasks with perfect citation traceability. I wrote a complete guide on how this architecture works, including a full Python code implementation. Read it here: https://lnkd.in/gRuXiSxK #ArtificialIntelligence #RAG #PythonDeveloper #MachineLearning #AIEngineering
Day 56 of #GeekStreak60: The Phantom Pointer Trick! 🕵️♂️📈

Tackled the "Sorted subsequence of size 3" problem on @GeeksforGeeks today.

Key Learning: Finding an increasing triplet is easy if you use extra arrays to track minimums and maximums, but that violates the O(1) space constraint. The optimal solution is to use "Greedy State Tracking."

By iterating through the array in a single pass, I maintained three variables: the absolute smallest number seen (num1), a valid middle number (num2), and a snapshot of num1 locked in at the exact moment num2 was discovered. If the loop encounters any number strictly greater than num2, the valid triplet is instantly formed!

This perfectly eliminates the need for O(n) memory arrays while keeping the time complexity to a strict O(n). Just 4 days left! The logic is feeling sharper than ever. 🚀

#geekstreak60 #npci #coding #Algorithms #Python #DataStructures #Optimization #SoftwareEngineering
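A sketch of the single-pass greedy described above — `num1_snapshot` is the "phantom pointer" locked in at the moment num2 is set (variable names are mine, following the post's description, not GeeksforGeeks' reference solution):

```python
def find_triplet(arr):
    """Return one increasing triplet (a, b, c) appearing in order, else None."""
    num1 = None           # smallest value seen so far
    num2 = None           # a valid middle value (some smaller value precedes it)
    num1_snapshot = None  # value num1 held when num2 was discovered
    for x in arr:
        if num2 is not None and x > num2:
            # Anything strictly greater than num2 completes the triplet.
            return (num1_snapshot, num2, x)
        if num1 is None or x <= num1:
            num1 = x                  # new overall minimum
        elif num2 is None or x <= num2:
            num2 = x                  # tighter middle candidate
            num1_snapshot = num1      # lock in its predecessor
    return None
```

Note the snapshot matters: num1 may later shrink below a value that comes *after* num2, so the pair actually reported must be the one that was valid when num2 was chosen.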
Day 14/100 – Data Structures & Algorithms

Today, I worked on the problem “First Unique Character in a String.”

Overview
The task is to identify the first non-repeating character in a string and return its index. If no such character exists, the result is -1.

Approach
I used a two-pass strategy:
• First pass to store character frequencies using a hashmap
• Second pass to identify the first character with a frequency of one

Complexity
• Time Complexity: O(n)
• Space Complexity: O(1), since the hashmap holds at most the 26 lowercase English letters the input is restricted to

Key Takeaway
This problem reinforces how effective hashmaps are for frequency-based problems and how a simple two-pass approach can lead to optimal solutions. Staying consistent and building problem-solving intuition step by step.

#Day14 #100DaysOfCode #DSA #Python #LeetCode #ProblemSolving #SoftwareEngineering
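The two-pass strategy fits in a few lines, using `collections.Counter` as the hashmap:

```python
from collections import Counter

def first_uniq_char(s):
    # Pass 1: count every character.
    freq = Counter(s)
    # Pass 2: the first character whose count is 1 wins.
    for i, ch in enumerate(s):
        if freq[ch] == 1:
            return i
    return -1  # no non-repeating character
```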