🚀 08/04/26 — Stack Foundations: Balancing Parentheses with Precision

Today was a productive session: I implemented the Valid Parentheses (LeetCode 20) challenge. This problem is a fundamental exercise in using the Stack data structure to manage nested relationships and order-based logic.

🧱 The Valid Parentheses Logic
The goal is to determine whether an input string containing (, ), {, }, [ and ] is valid: every open bracket must be closed by a bracket of the correct type, in the correct order.

The Stack Strategy:
- Pushing: as I iterate through the string, whenever I encounter an opening bracket ( ( , { , or [ ), I push it onto the stack.
- Popping and Matching: when I encounter a closing bracket, I first check whether the stack is empty. If it is, the string is invalid. Otherwise, I pop the top element and compare it to the current closing bracket.
- Validation: if the pair doesn't match (e.g., a ) following a {), the function immediately returns false.
- Final Check: after the loop finishes, the string is valid only if the stack is completely empty, which ensures every open bracket was properly closed.

Complexity Metrics:
- Time Complexity: O(n), where n is the length of the string, since we perform a single linear pass.
- Space Complexity: O(n) in the worst case, where the string contains only opening brackets and all of them must be stored on the stack.

📈 Consistency Report
Coming off my 50-day streak milestone, today's focus on stacks feels like a solid pivot from the sliding window and array patterns I've been working through recently. The logic here is remarkably similar to the structural checks I used in the "String Search" and "Mountain Array" problems earlier this month, where maintaining a specific state across iterations was key.

Huge thanks to Anuj Kumar (a.k.a. CTO Bhaiya on YouTube) for the continuous inspiration. Every new data structure I master adds another layer to my problem-solving toolkit!
My tested O(n) stack-based implementation is attached below! 📄👇 #DSA #Java #Stack #ValidParentheses #DataStructures #Complexity #Consistency #LearningInPublic #CTOBhaiya
Valid Parentheses with Stack Data Structure
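Since the attached file isn't reproduced in this feed, here is a minimal Java sketch of the push/pop strategy described above. The class name is illustrative; the method follows the usual LeetCode `isValid(String)` signature.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;

public class ValidParentheses {
    // Map each closing bracket to the opening bracket it must match.
    private static final Map<Character, Character> PAIRS =
            Map.of(')', '(', ']', '[', '}', '{');

    public static boolean isValid(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : s.toCharArray()) {
            if (c == '(' || c == '[' || c == '{') {
                stack.push(c); // opening bracket: push and move on
            } else {
                // closing bracket: stack must be non-empty and the top must match
                if (stack.isEmpty()) return false;
                char top = stack.pop();
                if (top != PAIRS.get(c)) return false;
            }
        }
        return stack.isEmpty(); // every opening bracket must have been closed
    }
}
```

Note the use of `ArrayDeque` rather than the legacy `java.util.Stack`, which is the idiomatic stack in modern Java.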
More Relevant Posts
#100DaysOfCode | Day 2 of my LeetCode challenge.

Today’s problem: 1365. How Many Numbers Are Smaller Than the Current Number. The problem seems simple, but it’s a perfect example of how choosing the right data structure can drastically change performance. Here is how I broke it down:

1. The Brute Force Approach
The simplest way is to use nested loops to compare every number with every other number.
- Logic: for each element, loop through the entire array and count smaller values.
- Time Complexity: O(N²)
- Space Complexity: O(N) (to store the result)

2. The Sorting + HashMap Approach
A more scalable way is to sort the numbers. In a sorted array, the index of a number's first occurrence equals the count of numbers smaller than it.
- Logic: clone the array, sort it, and store the first occurrence index of each number in a HashMap.
- Time Complexity: O(N log N) (due to sorting)
- Space Complexity: O(N) (to store the map)
- Use: works for any range of numbers (including very large or negative ones).

3. The Frequency Array (Counting Sort Logic)
Since the problem constraints are small (0 to 100), this is the most optimized solution.
- Logic: count the frequency of each number using an array of size 101, then calculate a running prefix sum.
- Time Complexity: O(N) (linear time)
- Space Complexity: O(1) (the frequency array size is constant)

#LeetCode #100DaysOfCode #Java #SoftwareEngineering #DataStructures #Algorithms
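The frequency-array approach from step 3 can be sketched in Java roughly as follows. This is a hedged sketch that assumes the LeetCode constraint 0 ≤ nums[i] ≤ 100; the class name is illustrative.

```java
public class SmallerThanCurrent {
    // Counting-sort idea: values are bounded to [0, 100] by the problem constraints.
    public static int[] smallerNumbersThanCurrent(int[] nums) {
        int[] freq = new int[101];
        for (int n : nums) freq[n]++;        // count occurrences of each value
        for (int v = 1; v < freq.length; v++) {
            freq[v] += freq[v - 1];          // prefix sum: freq[v] = count of values <= v
        }
        int[] result = new int[nums.length];
        for (int i = 0; i < nums.length; i++) {
            // numbers strictly smaller than nums[i] = count of values <= nums[i] - 1
            result[i] = (nums[i] == 0) ? 0 : freq[nums[i] - 1];
        }
        return result;
    }
}
```

The two linear passes over `freq` are what keep the whole thing O(N) time with O(1) extra space.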
🚀 Day 16 of #128DaysOfCode

Solved a classic stack-based problem today!

🔍 Problem: validate whether a string containing brackets "(){}[]" is properly balanced.

💡 Approach: used a Stack (LIFO) to track opening brackets and match them with the corresponding closing brackets.
- Push opening brackets
- On a closing bracket → check the top of the stack
- If there's a mismatch, or the stack is empty → invalid
- At the end, the stack should be empty

This problem highlights how stacks are perfect for handling nested structures and order-based validation 🧠

Key Learnings:
✔ Strengthened understanding of the Stack data structure
✔ Learned how to handle edge cases like mismatched and unordered brackets
✔ Improved problem-solving approach for string-based questions

⏱ Complexity: Time → O(n) | Space → O(n)

Consistency is the key 🔥 On to Day 17 💪

#DSA #Java #LeetCode #Stack #ProblemSolving #CodingJourney #PlacementsPreparation
𝐃𝐚𝐲 𝟓𝟖 – 𝐃𝐒𝐀 𝐉𝐨𝐮𝐫𝐧𝐞𝐲 | 𝐀𝐫𝐫𝐚𝐲𝐬 🚀

Today’s problem focused on finding two numbers in a sorted array that add up to a target.

𝐏𝐫𝐨𝐛𝐥𝐞𝐦 𝐒𝐨𝐥𝐯𝐞𝐝
• Two Sum II – Input Array Is Sorted

𝐀𝐩𝐩𝐫𝐨𝐚𝐜𝐡
• Used two pointers: one at the beginning (left), one at the end (right)
• Calculated the sum of both elements

Logic:
• If sum == target → return indices
• If sum < target → move left pointer forward
• If sum > target → move right pointer backward
This works because the array is already sorted.

𝐊𝐞𝐲 𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠𝐬
• Sorting enables two-pointer optimization
• Two pointers reduce time complexity from O(n²) to O(n)
• The direction of movement depends on the comparison with the target
• Index-based problems often become simpler with sorted data

𝐂𝐨𝐦𝐩𝐥𝐞𝐱𝐢𝐭𝐲
• Time: O(n)
• Space: O(1)

𝐓𝐚𝐤𝐞𝐚𝐰𝐚𝐲
When data is sorted, two pointers can turn a complex problem into a simple one.

58 days consistent 🚀 On to Day 59.

#DSA #Arrays #TwoPointers #LeetCode #Java #ProblemSolving #DailyCoding #LearningInPublic #SoftwareDeveloper
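The two-pointer logic above can be sketched in Java as follows. This is a minimal sketch assuming the LeetCode "Two Sum II" convention of returning 1-based indices; the class name is illustrative.

```java
public class TwoSumSorted {
    // Two-pointer scan over a sorted array.
    public static int[] twoSum(int[] numbers, int target) {
        int left = 0, right = numbers.length - 1;
        while (left < right) {
            int sum = numbers[left] + numbers[right];
            if (sum == target) {
                return new int[]{left + 1, right + 1}; // 1-based indices
            } else if (sum < target) {
                left++;   // sum too small: need a larger value
            } else {
                right--;  // sum too large: need a smaller value
            }
        }
        return new int[]{-1, -1}; // no pair found
    }
}
```

Each iteration discards one element, which is exactly why the scan is O(n) with O(1) extra space.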
🚀 Day 11 — Streams, Partitioned Logs, and Consumer Groups

Queues are useful when work needs to be processed asynchronously. But Day 11 helped me understand a different pattern:
👉 when events need to be stored, replayed, and consumed by multiple independent systems.

That is where streams and partitioned logs become important. In event-driven systems, data is not just passed from one service to another; it becomes a continuous flow that different consumers may process in different ways.

Today’s focus was: Streams, Partitioned Logs, and Consumer Groups

📊 What I covered today 📘
📜 Append-only event logs
🧩 Partitioning for scale
🔑 Ordering guarantees and key-based routing
👥 Consumer groups and offset tracking
🔁 Replay and recovery
📉 Lag monitoring and backpressure

What stood out to me
✅ Append-only logs make replay, auditing, and recovery much easier
✅ Partitioning improves throughput, but ordering is only guaranteed within a partition
✅ Key-based routing helps keep related events together
✅ Consumer groups allow multiple applications to process the same stream independently
✅ Offset tracking is critical because consumers should resume from where they stopped
✅ Consumer lag is one of the most important signals in stream processing systems

I also implemented a small Partitioned Log with Consumer Offsets sample in Python and Java to make the concept more practical. 🛠️
➡️ Git: https://lnkd.in/dPKzP2B5

That helped me understand a simple but important idea:
📌 Streams are not just queues
📌 Replay is a feature, not a bug
📌 Offset tracking is what makes recovery possible

This is one of those topics that becomes much clearer when you build even a small version of it. Producing events is simple, but handling partitions, offsets, replay, and lag is where the real system design thinking starts.

System design is slowly becoming less about moving data from one place to another and more about building data flows that are scalable, replayable, and reliable under load.
On to Day 12 📈 #SystemDesign #DistributedSystems #BackendEngineering #SoftwareEngineering #ScalableSystems #Streams #EventStreaming #PartitionedLogs #ConsumerGroups #OffsetTracking #Kafka #ApacheKafka #EventDrivenArchitecture #MessageQueues #AsyncProcessing #DataStreaming #StreamProcessing #Backpressure #ConsumerLag #Microservices #SystemArchitecture #BackendDevelopment #CloudComputing #Java #Python
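To make the idea concrete without reproducing the linked repo, here is a toy Java sketch of a partitioned append-only log with per-group offsets. It is my own simplification (not the code at the Git link above): single-process, in-memory, no persistence or concurrency, just enough to show key-based routing and independent consumer-group offsets.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy partitioned append-only log with per-consumer-group offset tracking.
public class PartitionedLog {
    private final List<List<String>> partitions = new ArrayList<>();
    // offsets.get(group)[p] = next offset that this group will read from partition p
    private final Map<String, int[]> offsets = new HashMap<>();

    public PartitionedLog(int numPartitions) {
        for (int i = 0; i < numPartitions; i++) partitions.add(new ArrayList<>());
    }

    public int partitionFor(String key) {
        // Key-based routing: the same key always lands in the same partition,
        // so per-key ordering is preserved (ordering holds only within a partition).
        return Math.floorMod(key.hashCode(), partitions.size());
    }

    public void append(String key, String event) {
        partitions.get(partitionFor(key)).add(event); // append-only: never mutate old entries
    }

    // Each consumer group tracks its own offsets, so groups read independently
    // and a new group can replay the log from the beginning.
    public String poll(String group, int partition) {
        int[] offs = offsets.computeIfAbsent(group, g -> new int[partitions.size()]);
        List<String> log = partitions.get(partition);
        if (offs[partition] >= log.size()) return null; // caught up: no lag
        return log.get(offs[partition]++);              // read, then advance the offset
    }
}
```

Even this toy version surfaces the core ideas from the post: replay falls out of the append-only structure, and recovery is just "resume from the stored offset".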
Day 30/100: Making Software Robust with Error Handling & JSON!

Today was all about building resilient applications. I moved beyond simple text files and dived into structured data management and exception handling.

Key Technical Takeaways:
- Exception Handling: mastering try, except, else, and finally blocks to prevent the app from crashing when unexpected errors occur.
- JSON Data Management: transitioning from .txt to .json for a more structured, nested data format. Learned how to write, update, and read JSON using the json library.
- Search Functionality: added a "Search" feature to the Password Manager, allowing the app to find and display stored credentials with a single click.
- User Experience: handling cases where a user searches for a website that doesn't exist in the database yet.

Handling errors and structured data is what separates a "script" from a "professional application." Feeling more confident in building production-ready code!

Check out my upgraded Password Manager here: https://lnkd.in/ghRt6Gtk

#Python #JSON #ErrorHandling #SoftwareEngineering #100DaysOfCode #VSCode #CleanCode
Day 80/100 | #100DaysOfDSA 🧩🚀

Today’s problem: Convert Sorted Array to Binary Search Tree, a classic divide-and-conquer problem that builds intuition for balanced trees.

Problem idea: convert a sorted array into a height-balanced BST.

Key idea: recursion + choosing the middle element as root.

Why?
• The array is already sorted
• Picking the middle ensures balance
• The left half forms the left subtree, the right half forms the right subtree

How it works:
• Find the middle index of the array
• Create a node with that value
• Recursively build the left subtree using the left half
• Recursively build the right subtree using the right half

Time Complexity: O(n)
Space Complexity: O(log n) (recursion stack)

Big takeaway: whenever you need a balanced BST from sorted data, think divide & conquer with the mid as root. 🔥 This pattern is widely used in tree construction problems.

Day 80 done. 🚀

#100DaysOfCode #LeetCode #DSA #Algorithms #BinarySearchTree #DivideAndConquer #Recursion #Java #CodingJourney #ProblemSolving #InterviewPrep #TechCommunity
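The steps above can be sketched in Java as a short recursion. This is a minimal sketch with a bare-bones `TreeNode` defined inline (LeetCode normally supplies that class):

```java
public class SortedArrayToBST {
    static class TreeNode {
        int val;
        TreeNode left, right;
        TreeNode(int val) { this.val = val; }
    }

    public static TreeNode sortedArrayToBST(int[] nums) {
        return build(nums, 0, nums.length - 1);
    }

    private static TreeNode build(int[] nums, int lo, int hi) {
        if (lo > hi) return null;           // empty range: no node here
        int mid = lo + (hi - lo) / 2;       // middle element as root keeps the tree balanced
        TreeNode root = new TreeNode(nums[mid]);
        root.left = build(nums, lo, mid - 1);   // left half -> left subtree
        root.right = build(nums, mid + 1, hi);  // right half -> right subtree
        return root;
    }
}
```

Every array element is visited exactly once, hence O(n) time; the recursion depth of a balanced build is O(log n).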
Solved a LeetCode hard in O(n). Couldn't use it in production.

Last week I optimized an API endpoint that was timing out under load. The problem: filtering 50,000 order records to find duplicates based on multiple fields (customer_id, amount, timestamp within a 5-minute window).

My first instinct? A HashMap for O(n) lookup. Classic LeetCode muscle memory. Wrote it, tested locally; it flew through 100K records in 200ms. Pushed to staging. Staging worked. Production didn't.

Here's what LeetCode doesn't teach you: garbage collection pauses are real. That HashMap was getting rebuilt on every request. At 50 requests/second, the young-generation GC was running constantly. P99 latency spiked to 3 seconds because of GC pauses, even though P50 stayed at 200ms. Sure, I could've increased the heap size, but that just delays full GC, making it worse when it finally hits.

The fix: moved duplicate detection to PostgreSQL using a composite index and a window function. Slightly slower on average (350ms), but consistent. No GC spikes, predictable P99.

LeetCode optimizes for algorithmic efficiency. Production optimizes for predictable latency under sustained load.

What's an optimization that looked perfect on paper but failed under real traffic?

#Java #Database #SystemDesign #LeetCode
🚀 Solved: Vertical Order Traversal of a Binary Tree (Hard)

Just wrapped up one of the trickier tree problems, and it was a great reminder that details matter in Data Structures & Algorithms.

🔍 Key Challenge: not just grouping nodes by vertical columns, but also:
- Maintaining row-wise ordering
- Handling same-row & same-column cases
- Ensuring sorted output using a min-heap (PriorityQueue)

💡 Core Insight: instead of a simple BFS, the correct approach required:
👉 Column → Row → MinHeap mapping
This ensures:
- Columns are processed left → right
- Rows are processed top → bottom
- Values are sorted when positions overlap

⚙️ Tech Used:
- BFS traversal
- TreeMap (for sorted columns & rows)
- PriorityQueue (for value ordering)

📊 Result:
✔️ All test cases passed
⚡ Runtime: 4 ms
📉 Memory optimized

🧠 Big Learning: sometimes a problem looks like a simple traversal… but hidden constraints turn it into a multi-level sorting problem.

#DSA #Java #LeetCode #BinaryTree #CodingInterview #ProblemSolving #SoftwareEngineering #LearningJourney
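The Column → Row → MinHeap mapping described above can be sketched in Java roughly as follows. This is an illustrative sketch (not necessarily the author's exact submission), with a minimal inline `TreeNode`:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.PriorityQueue;
import java.util.TreeMap;

public class VerticalOrderTraversal {
    static class TreeNode {
        int val;
        TreeNode left, right;
        TreeNode(int val) { this.val = val; }
    }

    public static List<List<Integer>> verticalTraversal(TreeNode root) {
        // column -> row -> min-heap of values at that exact (row, column) position
        TreeMap<Integer, TreeMap<Integer, PriorityQueue<Integer>>> map = new TreeMap<>();
        Deque<Object[]> queue = new ArrayDeque<>(); // BFS queue of {node, row, col}
        queue.add(new Object[]{root, 0, 0});
        while (!queue.isEmpty()) {
            Object[] entry = queue.poll();
            TreeNode n = (TreeNode) entry[0];
            int row = (int) entry[1], col = (int) entry[2];
            map.computeIfAbsent(col, k -> new TreeMap<>())          // columns sorted left -> right
               .computeIfAbsent(row, k -> new PriorityQueue<>())    // rows sorted top -> bottom
               .add(n.val);                                         // heap sorts ties by value
            if (n.left != null)  queue.add(new Object[]{n.left,  row + 1, col - 1});
            if (n.right != null) queue.add(new Object[]{n.right, row + 1, col + 1});
        }
        List<List<Integer>> result = new ArrayList<>();
        for (TreeMap<Integer, PriorityQueue<Integer>> rows : map.values()) {
            List<Integer> column = new ArrayList<>();
            for (PriorityQueue<Integer> heap : rows.values()) {
                while (!heap.isEmpty()) column.add(heap.poll()); // drain heap in sorted order
            }
            result.add(column);
        }
        return result;
    }
}
```

The two `TreeMap` levels handle the left→right and top→bottom ordering for free, leaving the `PriorityQueue` to break value ties only when positions overlap.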
Day 5 of my LeetCode Journey 🚀

Today’s challenge: the classic Rank Scores problem. For today's solution, I focused on an intuitive approach using Window Functions in SQL. Sometimes the best way to solve a problem is to identify the right built-in function that handles the logical steps right out of the gate!

🧠 My Approach:
- Select the score column from the original table.
- Apply a window function to calculate the rank of each score.
- Use the ORDER BY score DESC clause inside the window function to ensure scores are ranked from highest to lowest.
- Use DENSE_RANK() to assign the rank, ensuring that ties receive the same ranking number and the following ranks are consecutive integers without any gaps.

⚡ Key Learnings & SQL Gotchas:
- RANK() vs DENSE_RANK(): I learned the crucial difference between these two functions. RANK() leaves gaps in the sequence after a tie (e.g., 1, 1, 3), but DENSE_RANK() keeps the integers consecutive (e.g., 1, 1, 2). A huge "aha!" moment for handling database rankings!
- The OVER() clause: in SQL, window functions aren't standalone methods; I had to use the OVER() clause to define exactly how the data should be partitioned and sorted before applying the rank, all without collapsing the rows like a traditional GROUP BY.

#DSA #DataStructures #Algorithms #ProblemSolving #CodingJourney #Java #TechJourney #DSAJourney #LeetCode #Coding #LearnInPublic #GrowthMindset
Day 67/100 | #100DaysOfDSA 🧩🚀

Today’s problem: Subsets II, another classic backtracking problem with a twist (duplicates).

Problem idea: generate all possible subsets (the power set), but avoid duplicate subsets.

Key idea: backtracking + sorting to handle duplicates.

Why?
• We need to explore all subset combinations
• Duplicates in the input can lead to duplicate subsets
• Sorting helps us skip repeated elements efficiently

How it works:
• Sort the array first
• At each step, add the current subset to the result
• Iterate through the elements
• Skip duplicates using the condition:
👉 if (i > start && nums[i] == nums[i-1]) continue
• Choose → recurse → backtrack

Time Complexity: O(2^n)
Space Complexity: O(n) recursion depth

Big takeaway: handling duplicates in backtracking requires careful skipping logic, not extra data structures. This pattern appears in many problems (subsets, permutations, combinations). 🔥

Day 67 done. 🚀

#100DaysOfCode #LeetCode #DSA #Algorithms #Backtracking #Recursion #Java #CodingJourney #ProblemSolving #InterviewPrep #TechCommunity
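The sort-then-skip pattern above can be sketched in Java as follows (class name illustrative; the skip condition is exactly the one quoted in the post):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SubsetsII {
    public static List<List<Integer>> subsetsWithDup(int[] nums) {
        Arrays.sort(nums); // sort so that duplicates sit next to each other
        List<List<Integer>> result = new ArrayList<>();
        backtrack(nums, 0, new ArrayList<>(), result);
        return result;
    }

    private static void backtrack(int[] nums, int start, List<Integer> current,
                                  List<List<Integer>> result) {
        result.add(new ArrayList<>(current)); // record the subset at every node of the tree
        for (int i = start; i < nums.length; i++) {
            // skip a duplicate value at the same recursion level to avoid duplicate subsets
            if (i > start && nums[i] == nums[i - 1]) continue;
            current.add(nums[i]);                    // choose
            backtrack(nums, i + 1, current, result); // recurse
            current.remove(current.size() - 1);      // backtrack
        }
    }
}
```

The key detail is `i > start`: a repeated value is still allowed when it extends the subset built one level up, only the sibling branch at the same level is pruned.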