Day 8: Cracking the Subarray Sum Logic 🧩

💡 How I solved it: Instead of a slow O(n²) brute-force approach, I used the Prefix Sum + Hash Map technique to achieve a highly efficient O(n) solution.

* Tracked the Running Sum: Maintained a current_sum as I traversed the array.
* The Difference Trick: For every element, I checked whether (current_sum - k) existed in my hash map. If it did, a subarray summing to k had just been completed.
* Frequency Mapping: Used a dictionary to store how many times each prefix sum occurred, so every valid subarray was counted.
* Handled the Edge Case: Initialized the map with {0: 1} to correctly catch subarrays summing to k that start at index 0.

🧠 Key Takeaways:
* Space-Time Trade-off: Using O(n) space for the hash map eliminated the nested loops, drastically speeding up execution.
* Mathematical Insight: Reinforced that Sum(i, j) = PrefixSum(j) - PrefixSum(i-1). This is a powerhouse pattern for any range-sum problem.

One step closer to mastering Data Structures and Algorithms! 💻🔥 The logic is getting sharper every day! 📈🤝

#100DaysOfCode #DSA #Python #ProblemSolving #StriverA2ZSheet #CodingJourney
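The four steps described in the post can be sketched in Python; the function name count_subarrays_with_sum is mine, not from the original:

```python
from collections import defaultdict

def count_subarrays_with_sum(nums, k):
    """Count subarrays summing to k via prefix sums: O(n) time, O(n) space."""
    freq = defaultdict(int)
    freq[0] = 1          # empty prefix: catches subarrays starting at index 0
    current_sum = 0
    count = 0
    for x in nums:
        current_sum += x
        # A previous prefix equal to current_sum - k closes a subarray summing to k.
        count += freq[current_sum - k]
        freq[current_sum] += 1
    return count
```

For nums = [1, 2, 3] and k = 3, both [1, 2] and [3] are counted, so the result is 2.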
Weekly Challenge 9: TSP With Farthest Insertion

How do you find the shortest route to visit multiple locations without wasting fuel or time? This is known as the Traveling Salesperson Problem (TSP), one of the most famous challenges in computer science and Operations Research. Since finding the "perfect" route by checking every combination takes too long, we use heuristics to find highly optimized routes in milliseconds.

For Week 9 of my Python challenge, I built a spatial heuristic from scratch:
1️⃣ Generated random nodes (cities) on a 2D plane.
2️⃣ Calculated each node's Euclidean distance from the origin.
3️⃣ Programmed an Insertion Sort algorithm to order the nodes by that distance.
4️⃣ Compared the random route against the optimized route.

📉 The Result: As the graph below shows, this sorting logic alone drastically reduces the route distance (saving over 30% in travel distance in most random scenarios!). Data visualization makes optimization beautiful.

Full source code on my GitHub: https://lnkd.in/epZBxUnQ

#Python #Optimization #OperationsResearch #DataScience #Matplotlib #Algorithms #CodingChallenge
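A minimal sketch of the steps above (function names and the random seed are mine; the original project also plots both routes with Matplotlib):

```python
import math
import random

def route_length(points):
    """Total Euclidean distance travelled visiting points in the given order."""
    return sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))

def sort_by_origin_distance(points):
    """Insertion sort on each node's Euclidean distance from the origin."""
    nodes = list(points)
    for i in range(1, len(nodes)):
        current, key = nodes[i], math.hypot(*nodes[i])
        j = i - 1
        while j >= 0 and math.hypot(*nodes[j]) > key:
            nodes[j + 1] = nodes[j]  # shift farther nodes one slot right
            j -= 1
        nodes[j + 1] = current
    return nodes

random.seed(9)  # arbitrary seed, for reproducibility only
cities = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]
random_route = route_length(cities)
sorted_route = route_length(sort_by_origin_distance(cities))
```

This sketches only the sorting heuristic itself; on most random instances the distance-sorted visiting order is substantially shorter than the random one, which is the comparison the post's graph illustrates.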
🚀 Solved Another Sliding Window Problem on LeetCode!

Today’s problem: Maximum Number of Vowels in a Substring of Given Length (LeetCode #1456)

💡 Problem Summary: Given a string s and an integer k, find the maximum number of vowels in any substring of length k.

❌ Brute Force Approach:
Generate all substrings of size k and count the vowels in each.
Time Complexity: O(n × k) → not efficient

✅ Optimized Approach: Sliding Window
Instead of recalculating everything:
• Count vowels in the first window
• Slide the window forward: add the next character, remove the previous one
• Track the maximum count

👉 Core Idea: adjust the count only by the character entering and the character leaving the window: count += (s[i] in vowels) - (s[i-k] in vowels)

💻 Clean Code:

def maxVowels(s, k):
    vowels = set("aeiou")
    count = 0
    for i in range(k):
        if s[i] in vowels:
            count += 1
    max_count = count
    for i in range(k, len(s)):
        if s[i] in vowels:
            count += 1
        if s[i - k] in vowels:
            count -= 1
        max_count = max(max_count, count)
    return max_count

⚡ Complexity: Time O(n), Space O(1)

🧠 Key Takeaway: Sliding Window is a mindset, not just a technique: you reuse previous computation instead of recalculating everything.

🔥 This pattern applies to strings and arrays, substring/subarray problems, and real-world streaming data.

#DSA #LeetCode #Coding #SlidingWindow #InterviewPreparation #Python #ProblemSolving
🚀 Cracked a Classic Sliding Window Problem on LeetCode!

Today I solved Maximum Average Subarray I (LeetCode #643), a perfect example of how optimizing brute force can drastically improve performance.

💡 Problem Summary: Given an array nums and an integer k, find the maximum average of any subarray of size k.

❌ Brute Force Approach:
Generate all subarrays of size k and calculate the sum of each.
Time Complexity: O(n × k) → inefficient for large inputs

✅ Optimized Approach: Sliding Window
Instead of recalculating every subarray sum:
• Compute the sum of the first k elements
• Slide the window forward: add the next element, remove the previous one
• Track the maximum sum

👉 Core Idea: window_sum = window_sum - old_element + new_element

💻 Clean Code:

def findMaxAverage(nums, k):
    window_sum = sum(nums[:k])
    max_sum = window_sum
    for i in range(k, len(nums)):
        window_sum += nums[i]
        window_sum -= nums[i - k]
        max_sum = max(max_sum, window_sum)
    return max_sum / k

⚡ Complexity: Time O(n), Space O(1)

🧠 Key Takeaway: Sliding Window transforms problems from O(n²) to O(n) by reusing previous computations.

🔥 This pattern is widely used in subarray/substring problems, longest/shortest window problems, and real-time data processing.

#DataStructures #Algorithms #DSA #LeetCode #CodingInterview #PlacementPreparation #SlidingWindow #Python
Handling complex tables and math in RAG pipelines is tricky. Standard text splitters often break table structure, which leads to LLM hallucinations. I was stuck on this exact issue a few days ago.

A special thanks to vishal sharma and Alampally Sainath: I was blocked on this, and your suggestions on layout-aware parsing really helped me figure it out and learn a better approach.

Based on those discussions, I built a Table-Aware RAG architecture using Docling and Multi-Vector Retrieval to keep the data structure completely intact. I've put together a quick technical breakdown in the carousel below.

I'm always looking to improve, so any feedback or alternative approaches are highly welcome! You can find the GitHub link in the first comment!

#RAG #GenerativeAI #LLM #LlamaIndex #AIEngineering #Python #BuildInPublic #Docling
Day 91 of my #100DaysOfCode journey 🚀

Today I solved Cycle Detection in an Undirected Graph using Breadth First Search (BFS).

Problem intuition: In an undirected graph, a cycle exists if during traversal we reach an already visited node that is not the parent of the current node.

Approach:
• Traverse all graph components
• Use a queue for BFS
• Store both the current node and its parent in the queue
• If a visited neighbor is found that is not the parent → a cycle exists

Key insight: In undirected graphs, revisiting the parent is normal, but revisiting any other already visited node indicates a cycle.

Concepts reinforced:
• BFS on graphs
• Parent tracking
• Connected components
• Cycle detection logic

Time Complexity: O(V + E), where V = vertices and E = edges.

This is one of the most important graph interview patterns because the same idea extends into DFS cycle detection, detecting cycles in connected/disconnected graphs, and advanced graph traversal problems.

Slowly building stronger graph intuition, one pattern at a time. 🌱

#100DaysOfCode #DSA #Graphs #BFS #CycleDetection #Algorithms #Python #CodingJourney #ProblemSolving #SoftwareEngineering
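The approach described above can be sketched like this (function name mine):

```python
from collections import deque

def has_cycle_bfs(n, adj):
    """Detect a cycle in an undirected graph given as adjacency list adj over nodes 0..n-1."""
    visited = [False] * n
    for start in range(n):                # cover every connected component
        if visited[start]:
            continue
        visited[start] = True
        queue = deque([(start, -1)])      # (node, parent) pairs
        while queue:
            node, parent = queue.popleft()
            for nb in adj[node]:
                if not visited[nb]:
                    visited[nb] = True
                    queue.append((nb, node))
                elif nb != parent:
                    return True           # visited non-parent neighbor → cycle
    return False
```

A triangle (three mutually connected nodes) returns True, while a simple path returns False; the outer loop makes sure a cycle hidden in a disconnected component is still found.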
🚀 Stop iterating through rows like it’s 2010.

In a recent pipeline, we were processing 5 million records to calculate a rolling score. Using a standard loop took forever and pegged the CPU at 100%.

Before optimisation:

for i in range(len(df)):
    df.at[i, 'score'] = df.at[i, 'val'] * 1.05 if df.at[i, 'flag'] else df.at[i, 'val']

After optimisation:

import numpy as np
df['score'] = np.where(df['flag'], df['val'] * 1.05, df['val'])

Performance gain: 85x faster execution.

Vectorisation isn’t just a "nice to have"; it’s the difference between a pipeline that crashes at 2 AM and one that finishes in seconds. By letting NumPy handle the heavy lifting in C, we moved the per-row work out of the Python interpreter.

If you're still using `.iterrows()` or manual loops for column transformations, it’s time to refactor. The performance delta on large datasets is simply too massive to ignore.

What is the biggest bottleneck function you’ve refactored recently that gave you a massive speedup?

#DataEngineering #Python #PerformanceTuning #Vectorization #DataScience
Don't flatten what naturally has structure.

It's tempting to model everything in a single class: easy to write, easy to read, at least until your data grows. This is where most codebases start, with just one model.

But with model composition, each model has a single responsibility, and Pydantic handles nested validation automatically.

Structure your models the way your domain is actually structured. The code gets cleaner, the errors get clearer, and reuse becomes obvious.

This and other real-world modelling patterns are covered in Practical Pydantic: 👉 https://lnkd.in/eGiB7ZxU

Model your domain. Not just your data.

#Python #Pydantic #Data #Models #Patterns
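As a small illustration of the composition pattern the post describes (the models and field names are mine, not from the linked material):

```python
from pydantic import BaseModel, ValidationError

class Address(BaseModel):
    street: str
    city: str

class Customer(BaseModel):
    name: str
    billing: Address
    shipping: Address  # the same model reused: composition makes reuse obvious

# Nested dicts are validated into Address instances automatically.
customer = Customer(
    name="Ada",
    billing={"street": "1 Main St", "city": "Springfield"},
    shipping={"street": "2 Side St", "city": "Shelbyville"},
)

# Errors point at the exact nested field instead of a vague top-level failure.
try:
    Customer(name="Ada", billing={"street": "1 Main St"}, shipping={})
    nested_error_locations = []
except ValidationError as exc:
    nested_error_locations = [e["loc"] for e in exc.errors()]
```

The error locations come back as paths like ("billing", "city"), which is exactly the "errors get clearer" benefit of modelling the domain's structure.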
🚀 DSA Challenge – Day 199

Problem: All Paths From Source to Target 🛣️📍

Today’s problem is a classic DFS + Backtracking question on Directed Acyclic Graphs (DAGs).

🧠 Problem Summary
You are given:
👉 A DAG represented as an adjacency list graph
👉 Nodes labeled from 0 → n-1

🎯 Goal: Return all possible paths from source node 0 to target node n-1.

💡 Key Insight
Since the graph is a DAG, there are no cycles, so it is safe to explore all paths using DFS. This becomes a path generation problem.

⚙️ Approach
1️⃣ Use DFS (Depth First Search)
2️⃣ Maintain a current path list
3️⃣ Start from node 0
4️⃣ For each node: add it to the path, then explore all neighbors recursively
5️⃣ If you reach node n-1: ✅ add a copy of the current path to the result
6️⃣ Backtrack: remove the last node and explore other paths

📌 Why Backtracking?
We reuse the same path list and undo choices after exploring each branch.

📈 Complexity
Time Complexity: O(2ⁿ × n) in the worst case
Space Complexity: O(n) for the recursion stack, excluding the output

✨ Why This Problem Is Important
It builds a strong foundation in DFS traversal, backtracking patterns, and path exploration in graphs.

💡 Key Learnings:
➡️ A DAG ensures no infinite loops
➡️ Backtracking explores all possibilities efficiently
➡️ Path problems = build → explore → undo

This pattern appears in all root-to-leaf paths, Word Ladder variations, path sum problems, and graph traversal with constraints.

🔖 #DSA #100DaysOfCode #Day199 #DFS #Backtracking #Graphs #DAG #LeetCode #Algorithms #ProblemSolving #CodingChallenge #InterviewPrep #Python #SoftwareEngineering #DeveloperJourney #TechCommunity #CodingLife
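The build → explore → undo pattern above can be sketched like this (function name mine):

```python
def all_paths_source_target(graph):
    """Return every path from node 0 to node n-1 in a DAG given as an adjacency list."""
    target = len(graph) - 1
    result = []
    path = [0]  # one shared path list, managed by backtracking

    def dfs(node):
        if node == target:
            result.append(path.copy())  # record a completed path (build)
            return
        for nxt in graph[node]:
            path.append(nxt)   # choose
            dfs(nxt)           # explore
            path.pop()         # undo (backtrack)

    dfs(0)
    return result
```

For graph = [[1, 2], [3], [3], []] the result is [[0, 1, 3], [0, 2, 3]]: node 0 reaches node 3 either through 1 or through 2.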
Every data project starts the same way: "What does this data actually look like?"

This free notebook is a complete EDA framework:
→ Loading and initial inspection (shape, dtypes, head)
→ Summary statistics for numerical and categorical variables
→ Missing value analysis (patterns, not just counts)
→ Univariate analysis: distributions, histograms, value counts
→ Categorical variable exploration with visualizations
→ Outlier detection using statistical methods (IQR, z-scores)
→ Bivariate analysis: correlations, scatter plots, cross-tabs

It's not theory: every section has runnable code on a real dataset. Messy data, not textbook clean. This is the workflow I use on every single project.

Free: https://lnkd.in/gecPBR9P

Day 3/7.

#DataCleaning #EDA #DataAnalyst #Python #Pandas #DataScience #ExploratoryDataAnalysis #FreeResources
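A few of the steps above, sketched in pandas on a toy frame (the dataset and thresholds here are mine; the notebook works on a real, messier one):

```python
import numpy as np
import pandas as pd

# Toy dataset with the usual messiness: missing values and an obvious outlier.
df = pd.DataFrame({
    "age": [25, 32, np.nan, 41, 29, 250],   # 250 is the outlier
    "city": ["NY", "LA", "NY", None, "SF", "NY"],
})

# Loading and initial inspection
shape, dtypes = df.shape, df.dtypes

# Missing value analysis: per-column counts (spotting patterns needs more than this)
missing = df.isna().sum()

# Outlier detection with the IQR rule on a numeric column
q1, q3 = df["age"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["age"] < q1 - 1.5 * iqr) | (df["age"] > q3 + 1.5 * iqr)]
```

Here the IQR fence flags only the age-250 row, while the NaN rows are simply excluded from the comparison.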
Day 14/100 – Data Structures & Algorithms

Today, I worked on the problem “First Unique Character in a String.”

Overview
The task is to identify the first non-repeating character in a string and return its index. If no such character exists, the result is -1.

Approach
I used a two-pass strategy:
• First pass: store character frequencies in a hashmap
• Second pass: find the first character with a frequency of one

Complexity
• Time Complexity: O(n)
• Space Complexity: O(1), since the hashmap holds at most one entry per distinct letter of a fixed alphabet

Key Takeaway
This problem reinforces how effective hashmaps are for frequency-based problems and how a simple two-pass approach can lead to an optimal solution.

Staying consistent and building problem-solving intuition step by step.

#Day14 #100DaysOfCode #DSA #Python #LeetCode #ProblemSolving #SoftwareEngineering
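A minimal sketch of the two-pass approach (function name mine):

```python
from collections import Counter

def first_unique_char(s):
    """Index of the first non-repeating character in s, or -1 if none exists."""
    freq = Counter(s)              # pass 1: character frequencies
    for i, ch in enumerate(s):     # pass 2: first character with count 1
        if freq[ch] == 1:
            return i
    return -1
```

For "loveleetcode" the first unique character is 'v' at index 2, since both 'l' and 'o' repeat later in the string.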