🚀 DSA Challenge – Day 95
Problem: Design a Stack Supporting Constant-Time Minimum Retrieval 🧩📉

This is one of those elegant data structure design problems that sharpen your understanding of stack operations and auxiliary data structures!

🧠 Problem Summary:
Implement a stack that, in addition to normal stack operations, can return the minimum element in O(1) time. The stack should support:
✅ push(val) → Insert an element.
✅ pop() → Remove the top element.
✅ top() → Return the top element.
✅ getMin() → Retrieve the minimum element — in constant time!

⚙️ My Approach:
I used two stacks to achieve O(1) time complexity for all operations:
1️⃣ Main Stack (stack) → Stores all elements.
2️⃣ Min Stack (minStack) → Tracks the current minimum after each insertion.
🔹 When pushing a value, compare it with the current minimum and push it onto minStack if it's smaller or equal.
🔹 When popping, if the popped element equals the current minimum, remove it from minStack as well.
This ensures minStack always holds the current minimum at its top.

📈 Complexity:
Time: O(1) for all operations
Space: O(n) for the two stacks

✨ Key Takeaway: This problem beautifully demonstrates how an auxiliary data structure can optimize operations without affecting time complexity. Sometimes, solving problems efficiently is all about keeping track of the right information. ⚙️📚

🔖 #DSA #100DaysOfCode #LeetCode #ProblemSolving #MinStack #DataStructures #Stack #Algorithms #Python #CodingChallenge #EfficientCode #InterviewPrep #TechCommunity #LearningEveryday
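The two-stack design described above can be sketched in Python like this (a minimal illustration, not the post's original code):

```python
class MinStack:
    """Stack with O(1) getMin, backed by an auxiliary min stack."""

    def __init__(self):
        self.stack = []      # all elements
        self.min_stack = []  # running minimums; current min always on top

    def push(self, val: int) -> None:
        self.stack.append(val)
        # Mirror onto min_stack when val is a new minimum (or ties it).
        if not self.min_stack or val <= self.min_stack[-1]:
            self.min_stack.append(val)

    def pop(self) -> None:
        val = self.stack.pop()
        # If the popped value was the current minimum, retire it too.
        if val == self.min_stack[-1]:
            self.min_stack.pop()

    def top(self) -> int:
        return self.stack[-1]

    def getMin(self) -> int:
        return self.min_stack[-1]
```

Pushing equal values onto minStack (the `<=` comparison) is what keeps popping duplicates of the minimum safe.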
Designing a Stack with Constant-Time Min Retrieval
🚀 DSA Progress – Day 114
✅ Problem #228: Summary Ranges
🧠 Difficulty: Easy | Topics: Array, Two-Pointers, Iteration

🔍 Approach:
Today I worked on summarizing consecutive sequences in a sorted array.
Step 1 (Scan): Iterate through the list from left to right.
Step 2 (Mark Start): Treat the current number as the start of a potential range.
Step 3 (Expand Range): Continue while the next number is exactly +1 from the current number.
Step 4 (Record Result):
If the start and end differ → push "start->end"
If only one number → push "start"
This cleanly compresses consecutive sequences into readable ranges.

🕒 Time & Space Complexity:
Time: O(n) — single pass through the array
Space: O(1) extra — only the output list uses additional space

📁 File: https://lnkd.in/gaHdwyHc
📚 Repo: https://lnkd.in/g8Cn-EwH

💡 Learned: This problem reinforced how to identify and compress consecutive patterns efficiently. The idea of scanning once while controlling a moving pointer is a core technique used in data grouping, run-length encoding, and sequence compression problems. A simple yet elegant problem that improves intuition for range detection in numeric data! 🔢➡️📦

✅ Day 114 complete — compressed life's sequences into neat little summaries! 😄📦✨

#LeetCode #DSA #Python #TwoPointers #Arrays #SummaryRanges #DailyCoding #InterviewPrep #GitHubJourney
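The four steps above can be sketched as a single scan in Python (a hedged sketch with an illustrative function name, not the linked file's code):

```python
def summary_ranges(nums):
    """Compress a sorted, duplicate-free int list into 'start->end' strings."""
    ranges = []
    i = 0
    while i < len(nums):
        start = i  # mark the start of a potential range
        # Expand while the next number is exactly +1 from the current one.
        while i + 1 < len(nums) and nums[i + 1] == nums[i] + 1:
            i += 1
        if start == i:
            ranges.append(str(nums[start]))            # single number
        else:
            ranges.append(f"{nums[start]}->{nums[i]}")  # true range
        i += 1
    return ranges
```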
🚀 Days 41–56 of #100DaysOfCoding: Strengthening Problem-Solving with Arrays & NumPy

Over the past 16 days, I've focused on building a solid foundation in data structures and numerical computing. This phase has been about improving efficiency, mastering algorithmic thinking, and understanding data manipulation at a deeper level.

🔹 Core Array Problems Solved:
1️⃣ Find Min/Max Elements – Implemented an O(n) linear-time approach without sorting, optimizing both time and space
2️⃣ In-Place Array Reversal – Applied the two-pointer technique to reverse arrays efficiently (O(1) space)
3️⃣ Element Frequency Counter – Designed a function to compute occurrences of target elements in linear time
4️⃣ Second Largest Element – Solved using two tracking variables in a single traversal
5️⃣ Move Zeros to End – Implemented a stable version maintaining element order; currently refining with a two-pointer optimization

🔹 NumPy Fundamentals:
Explored essential NumPy operations for data analysis, including:
Mean, Median, Standard Deviation, Variance
Array slicing, broadcasting, and vectorized computations
These are fundamental tools for upcoming machine learning and data science projects.

🔹 Key Learnings:
✅ Optimization in both time and space complexity is critical
✅ In-place algorithms significantly enhance memory efficiency
✅ Clean, simple solutions often outperform over-engineered ones

Next steps: optimizing current implementations and diving deeper into advanced data structures and algorithms.

GitHub Repository: https://lnkd.in/gsucUW-F

What's your favorite array-related problem or concept? I'd love to hear your thoughts in the comments 👇

#Python #NumPy #DataStructures #Algorithms #ProblemSolving #CodingChallenge #LearningInPublic #TechJourney #100DaysOfCode #yohancodes #selflearning
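As one illustration of the two-pointer ideas listed above, here is a minimal stable "move zeros to end" sketch (a hypothetical helper, assuming a read/write pointer formulation; not the repo's code):

```python
def move_zeros(nums):
    """Stable, in-place move of zeros to the end: O(n) time, O(1) space."""
    write = 0  # next slot for a non-zero element
    for read in range(len(nums)):
        if nums[read] != 0:
            # Swap keeps the traversal in-place; non-zero order is preserved
            # because write never passes read.
            nums[write], nums[read] = nums[read], nums[write]
            write += 1
    return nums
```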
🚀 Day 68 of #120_Days_LeetCode Challenge !!
🌳 Problem-Solving with Binary Trees: Constructing from Inorder & Postorder Traversals 🌳

Today, I worked on an interesting tree reconstruction problem where the goal was to build a binary tree from its inorder and postorder traversals.

🧩 Problem Statement:
Given two integer arrays —
inorder: the inorder traversal of a binary tree
postorder: the postorder traversal of the same tree
Reconstruct and return the binary tree.

💡 Key Insight:
In postorder traversal, the last element is always the root of the tree. Using a hash map for quick index lookups in the inorder sequence helps efficiently locate the root position and divide the tree into left and right subtrees.

⚙️ Approach:
Build a hash map to store inorder indices for O(1) lookups.
Start from the end of the postorder array (root node).
Recursively construct the right subtree first, then the left subtree, since postorder processes left → right → root.

🧠 Time Complexity: O(N)
📦 Space Complexity: O(N) (for recursion + hash map)

🔗 This problem sharpened my understanding of tree traversal relationships and recursion-based construction logic — a vital concept for many tree-based algorithms in interviews and real-world parsing systems.

#Coding #DSA #BinaryTree #ProblemSolving #LeetCode #TechLearning #Algorithms #ProgrammingJourney
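The approach above can be sketched in Python as follows (a minimal sketch with an assumed TreeNode class, not the author's submission):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right


def build_tree(inorder, postorder):
    """Rebuild a binary tree from its inorder and postorder traversals."""
    index = {val: i for i, val in enumerate(inorder)}  # O(1) root lookups

    def build(lo, hi):
        # lo..hi are inclusive bounds into the inorder array.
        if lo > hi:
            return None
        root = TreeNode(postorder.pop())  # last postorder entry is the root
        mid = index[root.val]
        # Build the right subtree first: postorder is left → right → root,
        # so consuming from the end yields right-subtree roots before left.
        root.right = build(mid + 1, hi)
        root.left = build(lo, mid - 1)
        return root

    return build(0, len(inorder) - 1)
```

Note that the right-before-left recursion order is essential; swapping it would consume the wrong postorder entries.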
🔹 Day 65 of #100DaysOfLeetCodeChallenge 🔹
Problem: Generate Parentheses
Focus: Recursion + Backtracking

💡 The Challenge:
Generate all valid combinations of n pairs of parentheses. Sounds simple? The trick is ensuring every string remains valid throughout construction!

🧠 My Approach:
Used backtracking to build strings intelligently:
Add '(' when we haven't used all n opening brackets
Add ')' only when it won't break validity (close < open)
Base case: Both counts reach n → valid combination found! ✅

📊 Complexity Analysis:
⏳ Time: O(4ⁿ/√n) — the number of valid strings grows as the n-th Catalan number
💾 Space: O(n) — recursion stack depth

📌 Example:
Input: n = 3
Output: ["((()))","(()())","(())()","()(())","()()()"]

🎯 Key Takeaway: Backtracking shines when you need to explore all possibilities while intelligently pruning invalid paths. This problem perfectly illustrates the power of constraint-based recursion!

What's your favorite backtracking problem? Drop it in the comments! 👇
Day 65/100 complete. Onwards to mastering DSA, one problem at a time! 💪

#LeetCode #Algorithms #DataStructures #BacktrackingAlgorithms #TechCareers #SoftwareEngineering #CodingJourney #LearnInPublic
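The two pruning rules above translate directly into a short backtracking sketch (an illustrative Python version, assuming string concatenation rather than an explicit undo step):

```python
def generate_parenthesis(n):
    """All valid combinations of n pairs of parentheses, via backtracking."""
    result = []

    def backtrack(current, open_count, close_count):
        if len(current) == 2 * n:       # base case: string is complete
            result.append(current)
            return
        if open_count < n:              # room for another '('
            backtrack(current + "(", open_count + 1, close_count)
        if close_count < open_count:    # ')' keeps the prefix valid
            backtrack(current + ")", open_count, close_count + 1)

    backtrack("", 0, 0)
    return result
```

Because invalid prefixes are never extended, every string that reaches the base case is already valid; no final validity check is needed.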
Day 50: Binary Search Tree Iterator (BST) 🌲

I'VE HIT THE HALFWAY MARK! Day 50 of #100DaysOfCode is dedicated to mastering the "Binary Search Tree Iterator." This challenge requires designing a class that enables in-order traversal of a BST using constant time complexity for the key operations.

The key insight is using an Iterative Inorder Traversal with a Stack:

__init__ (Initialization): The constructor doesn't traverse the whole tree. It uses a helper function (_push_left) to initially push the root and all its left descendants onto the stack. This positions the stack to start at the smallest element.

next(): This method retrieves the next smallest element. It simply pops the top element from the stack, and then immediately checks if that popped node has a right child. If it does, it calls _push_left on the right child to load the next set of smallest elements onto the stack.

Efficiency: The design ensures that each element is pushed and popped exactly once overall (O(n) total time), so the amortized time complexity for both next() and hasNext() is O(1).

This was a perfect problem to mark the halfway point—combining Data Structures and Algorithmic Design!

#Python #DSA #Algorithms #BST #Iterator #Stack #100DaysOfCode
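The `_push_left` design described above looks roughly like this in Python (a minimal sketch with an assumed TreeNode class):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right


class BSTIterator:
    """In-order BST iterator: O(h) memory, amortized O(1) next()/hasNext()."""

    def __init__(self, root):
        self.stack = []
        self._push_left(root)  # stage the path to the smallest element

    def _push_left(self, node):
        # Push node and all of its left descendants onto the stack.
        while node:
            self.stack.append(node)
            node = node.left

    def next(self) -> int:
        node = self.stack.pop()         # next smallest element
        if node.right:
            self._push_left(node.right)  # stage its in-order successors
        return node.val

    def hasNext(self) -> bool:
        return bool(self.stack)
```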
If you live in Jupyter Notebook or Google Colab but your workflow still feels slower than it should, you are not alone. Many professionals overlook the notebook's built-in magic commands. These are not toys; they are native tools that speed up profiling, execution, and inspection so your code runs faster and reads cleaner.

We compiled a concise guide to 8 magic commands that remove friction and make your notebook work look professional. Built for Python users in data science, analytics, and machine learning who want faster runs, clearer diagnostics, and fewer clicks.

- Time and profile: %time, %%time, and %prun to measure and compare performance precisely.
- Run and discover: %run to execute .py or .ipynb files, and %lsmagic to see what is available.
- Inspect and manage: %history, %whos for variables in memory, %%bash for shell tasks, plus the A and B keyboard shortcuts to add cells above or below without touching the mouse.

Think you already know the basics? This goes beyond shortcuts and into evidence-based performance tuning. Worried about complexity? Start with %lsmagic and %history; they are safe, notebook-native, and easy to remember. We see many teams ignore these until they compare two snippets with %time and immediately find the faster approach. Our walkthrough focuses on practical, ready-to-use tips so you can apply each command in your next session, not just read about it.

🚀 Read the full article and upskill today: https://lnkd.in/g-pZ9GDa

#borntoDev #Python #Jupyter #DataScience #DeveloperTools #Productivity
🚀 DSA Challenge – Day 93
Problem: Find Median from Data Stream 📊⚙️

This was an exciting deep dive into data structures and real-time computation — maintaining the median efficiently while continuously adding numbers from a stream.

🧠 Problem Summary:
You need to design a class MedianFinder that can:
✅ Add numbers dynamically from a data stream.
✅ Return the median at any point in time.
If the total number of elements is even → median = mean of the two middle values.
If odd → median = middle value.

⚙️ My Approach:
1️⃣ Use two heaps to maintain balance:
A max heap (maxHeap) for the smaller half of numbers.
A min heap (minHeap) for the larger half.
2️⃣ Whenever a new number arrives:
Push it into the maxHeap (inverted to simulate max behavior).
Balance both heaps so that their size difference is never more than 1.
3️⃣ Ensure heap order: the maximum in maxHeap ≤ minimum in minHeap.
4️⃣ The median is:
The top of the larger heap (if odd count).
The average of both tops (if even count).

📈 Complexity:
Time: O(log n) → For insertion and heap balancing.
Space: O(n) → To store all elements in heaps.

✨ Key Takeaway: This problem highlights how heaps can turn complex real-time median calculations into a smooth, efficient process — a great example of data structure synergy in action. ⚡

🔖 #DSA #100DaysOfCode #LeetCode #ProblemSolving #Heaps #PriorityQueue #DataStructures #Algorithms #Median #Python #CodingChallenge #InterviewPrep #EfficientCode #DynamicProgramming #TechCommunity #LearningByBuilding #CodeEveryday
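A minimal Python sketch of the two-heap scheme above, using `heapq` with negated values to simulate the max heap (an illustration, not the post's original code):

```python
import heapq


class MedianFinder:
    """Running median via two heaps: max-heap (lower half), min-heap (upper half)."""

    def __init__(self):
        self.lo = []  # max-heap via negated values: smaller half
        self.hi = []  # min-heap: larger half

    def addNum(self, num: int) -> None:
        # Route through lo, then hand its maximum to hi: this guarantees
        # max(lo) <= min(hi) without an explicit comparison.
        heapq.heappush(self.lo, -num)
        heapq.heappush(self.hi, -heapq.heappop(self.lo))
        # Rebalance so lo holds the same count as hi, or exactly one more.
        if len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def findMedian(self) -> float:
        if len(self.lo) > len(self.hi):
            return -self.lo[0]                     # odd count: top of lo
        return (-self.lo[0] + self.hi[0]) / 2      # even count: mean of tops
```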
💻 Mastering Linked Lists in Just 8 Hours — My DSA Milestone

Last week, I decided to challenge myself — to master the Singly Linked List (SLL) in one go. Many people find it tricky because of pointers, but once you visualize the flow, it actually becomes surprisingly simple.

In just 8 hours, I built, broke, and rebuilt linked lists from scratch —
👉 created nodes,
👉 inserted and deleted dynamically,
👉 reversed the list,
👉 detected loops, and
👉 even played with the slow-fast pointer technique to find the middle node efficiently.

The moment you understand that:
> "Fast moves 2 steps, slow moves 1 step, and when fast stops — slow is your middle."
everything clicks ✨

Complexity simplified:
Traversal: O(n)
Reversal: O(n)
Space: O(1)
All in pure pointer logic — no extra data structures, no shortcuts.

Now, Singly Linked List ✅
Next target: Trees & Recursion 🌳
Because DSA is not about memorizing algorithms — it's about understanding flow.

💬 If you're starting your DSA journey — begin with Linked Lists. It teaches you how data moves, how memory connects, and how logic flows.

#DSA #Python #CodingJourney #LinkedList #SoftwareEngineering #100DaysOfCode
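The slow-fast middle search and the in-place reversal mentioned above can be sketched like this (a hedged Python illustration with an assumed Node class):

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val = val
        self.next = nxt


def find_middle(head):
    """Middle node via slow/fast pointers: fast moves 2 steps, slow moves 1."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
    return slow  # for even-length lists this is the second middle node


def reverse(head):
    """In-place reversal: O(n) time, O(1) space, pure pointer rewiring."""
    prev = None
    while head:
        # Redirect the current node's link, then advance both pointers.
        head.next, prev, head = prev, head, head.next
    return prev
```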