🚀 Day 6: Decoding Maximum Consecutive Ones

💡 How I solved it:
* Maintained a running counter that increments every time I encounter a 1.
* Used a global maximum variable to capture the highest streak reached before hitting a 0.
* The Reset: every time a 0 appeared, I reset the current counter to zero to begin tracking the next potential streak.

🧠 Key Takeaways:
* Efficiency: achieved O(n) time complexity and O(1) space — optimal for large datasets.
* State Tracking: learned the importance of maintaining "local" vs. "global" state. It’s a foundational piece of logic used in many sliding window and greedy algorithm problems.

One step closer to mastering Data Structures and Algorithms! 💻🔥 The logic is getting sharper every day! 📈🤝

#100DaysOfCode #DSA #Python #ProblemSolving #StriverA2ZSheet #CodingJourney
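The counter-plus-global-maximum approach described above can be sketched in a few lines of Python (a minimal version; the function name is my own, not from the post):

```python
def max_consecutive_ones(nums):
    """Return the length of the longest run of 1s in a binary list."""
    best = 0      # global maximum: highest streak seen so far
    current = 0   # local state: streak ending at the current index
    for n in nums:
        if n == 1:
            current += 1
            best = max(best, current)
        else:
            current = 0  # the reset: a 0 breaks the streak
    return best

print(max_consecutive_ones([1, 1, 0, 1, 1, 1]))  # → 3
```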
DSA Tip: Graphs

If you think data is always stored in a straight line… think again. Real-world systems are connected, not linear. Use Graphs.

They represent data as nodes (points) and edges (connections). No strict order. No single path. Just relationships between data.

Used in:
- Social networks
- Maps & navigation
- Recommendation systems

Insight: the real power of data isn’t in the elements, it’s in how they are connected.

Quick Challenge: how would you represent your friend network as a graph? Drop your answer, I’ll review the best ones.

FOLLOW FOR MORE DSA TIPS & INSIGHTS

#DSA #Graphs #Python #CodingTips #LearnToCode
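One common way to answer the challenge above in Python is an adjacency list: a dict mapping each person (node) to the set of their friends (edges). The names below are invented purely for illustration:

```python
# Adjacency-list sketch of a small, undirected friend network.
# Each key is a node; its set holds the nodes it shares an edge with.
friends = {
    "Asha":  {"Ben", "Chloe"},
    "Ben":   {"Asha", "Chloe"},
    "Chloe": {"Asha", "Ben"},
}

def are_connected(graph, a, b):
    """An edge exists if b appears in a's neighbour set."""
    return b in graph.get(a, set())

print(are_connected(friends, "Asha", "Ben"))  # → True
```

A dict of sets gives O(1) average-time edge lookups, which is why it is a popular default for sparse graphs like social networks.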
🚀 Stop iterating through rows like it’s 2010.

In a recent pipeline, we were processing 5 million records to calculate a rolling score. Using a standard loop took forever and pegged the CPU at 100%.

Before optimisation:

    for i in range(len(df)):
        df.at[i, 'score'] = df.at[i, 'val'] * 1.05 if df.at[i, 'flag'] else df.at[i, 'val']

After optimisation:

    import numpy as np
    df['score'] = np.where(df['flag'], df['val'] * 1.05, df['val'])

Performance gain: 85x faster execution.

Vectorisation isn’t just a "nice to have"—it’s the difference between a pipeline that crashes at 2 AM and one that finishes in seconds. By letting NumPy handle the heavy lifting in C, we eliminated the Python overhead entirely.

If you're still using `.iterrows()` or manual loops for column transformations, it’s time to refactor. The performance delta on large datasets is simply too massive to ignore.

What is the biggest "bottleneck" function you’ve refactored recently that gave you a massive speedup?

#DataEngineering #Python #PerformanceTuning #Vectorization #DataScience
🚀 Day 39/60 — LeetCode Discipline

Problem Solved: Sqrt(x)
Difficulty: Easy

Today’s challenge was to compute the square root of a number without using built-in functions. Instead of brute force, I used Binary Search — a classic, elegant approach that narrows down the answer efficiently.

💡 Key Learnings:
• Binary Search applications beyond arrays
• Handling edge cases (x < 2)
• Avoiding overflow with careful conditions
• Finding the floor value of a square root
• Optimized thinking over brute force

⚡ Performance: Runtime: 4 ms

Like walking a foggy path, I didn’t see the answer directly… but step by step — cutting the search space in half — the truth revealed itself. That’s the beauty of algorithms.

#LeetCode #60DaysOfCode #DSA #BinarySearch #ProblemSolving #CodingJourney #Python #Consistency #TechGrowth
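A minimal sketch of the binary-search approach described above (the function name and exact bounds are my own; LeetCode's actual method signature differs):

```python
def my_sqrt(x):
    """Floor of the square root of x, via binary search (no built-ins)."""
    if x < 2:               # edge case: sqrt(0) = 0, sqrt(1) = 1
        return x
    lo, hi = 1, x // 2      # for x >= 2, sqrt(x) never exceeds x / 2
    while lo <= hi:
        mid = (lo + hi) // 2
        if mid * mid <= x:  # mid is a valid candidate; search higher
            lo = mid + 1
        else:               # mid overshoots; search lower
            hi = mid - 1
    return hi               # hi is the last candidate that fit

print(my_sqrt(8))   # → 2
print(my_sqrt(16))  # → 4
```

Comparing `mid * mid <= x` (rather than `mid <= x / mid`) keeps everything in exact integer arithmetic, which is the usual way to sidestep the overflow concerns the post mentions in fixed-width languages.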
Topic 4/100 🚀

🧠 Topic 4 — Generators

Processing large data and running out of memory? 😵 This concept solves that problem.

👉 What is it?
Generators are functions that return values one at a time using yield, instead of returning everything at once.

👉 Use Cases:
Real-world applications include:
- Reading large files
- Data streaming
- Handling APIs with pagination

👉 Why it’s helpful:
- Saves memory
- Improves performance
- Enables lazy evaluation (data generated only when needed)

💻 Example:

    def count_up_to(n):
        for i in range(n):
            yield i

    for num in count_up_to(5):
        print(num)

🧠 What’s happening here?
Instead of storing all values in memory, the function generates them one by one when needed.

⚡ Pro Tip: if you're working with large datasets, always think “generator” before reaching for lists.

💬 Follow this series for more Topics

#Python #BackendDevelopment #100TopicOfCode #SoftwareEngineering #LearnInPublic
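To make the memory point concrete, here is a small comparison (my own illustration, not from the post) between a list comprehension that materialises a million values and the equivalent generator expression, which only stores its iteration state:

```python
import sys

# Same million squares: the list allocates storage for every element,
# the generator is a tiny object that produces values on demand.
as_list = [i * i for i in range(1_000_000)]
as_gen = (i * i for i in range(1_000_000))

print(sys.getsizeof(as_list) > 1_000_000)  # True: the list alone is megabytes
print(sys.getsizeof(as_gen) < 1_000)       # True: the generator is ~100 bytes
print(sum(as_gen))                         # values are computed lazily here
```

Note that `sys.getsizeof` reports shallow size only, but even the list's pointer array dwarfs the generator object.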
Handling Time Data: Logic over Strings.

Today I worked on a common challenge: comparing time values like '7:15' and '10:30' stored in a list.

The Problem: standard string comparison is unreliable here — lexicographically, '7:15' sorts after '10:30' because '7' > '1' — and representing times as floats invites rounding inaccuracies.

The Solution: I converted all time entries into a single unit — total minutes from the start of the day (hours * 60 + minutes).

This transformation turns time strings into simple integers, creating robust and scalable logic for sorting and filtering. A solid foundation is everything, whether it's infrastructure or code. 🛡️🦾

#Python #Coding #ProblemSolving #SoftwareEngineering #Backend #Summerson
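A quick sketch of the conversion and how it fixes the sort (the helper name is my own):

```python
def to_minutes(t):
    """Convert an 'H:MM' time string to total minutes since midnight."""
    hours, minutes = t.split(":")
    return int(hours) * 60 + int(minutes)

times = ["10:30", "7:15", "23:05"]

print(sorted(times))                  # string sort: ['10:30', '23:05', '7:15'] — wrong
print(sorted(times, key=to_minutes))  # ['7:15', '10:30', '23:05'] — correct
```

Using the conversion as a `key` function keeps the original strings in the output while sorting on their integer value.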
Don't flatten what naturally has structure.

It's tempting to model everything in a single class. Easy to write, easy to read, at least until your data grows. This is where most codebases start: with just one model.

But with model composition, each model has a single responsibility. And Pydantic handles nested validation automatically.

Structure your models the way your domain is actually structured. The code gets cleaner, the errors get clearer, and reuse becomes obvious.

This and other real-world modelling patterns are covered in Practical Pydantic:
👉 https://lnkd.in/eGiB7ZxU

Model your domain. Not just your data.

#Python #Pydantic #Data #Models #Patterns
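As a minimal illustration of the composition pattern (the model and field names here are hypothetical, not taken from the linked material):

```python
from pydantic import BaseModel

# Each model owns one concern; Pydantic validates the nested
# structure automatically when User is constructed.
class Address(BaseModel):
    city: str
    zip_code: str

class User(BaseModel):
    name: str
    address: Address  # composed, not flattened into User

# A plain dict is coerced and validated into an Address on the way in.
user = User(name="Ada", address={"city": "London", "zip_code": "EC1A"})
print(user.address.city)  # → London
```

If the nested dict were missing `city`, the validation error would point at `address.city` specifically, which is the "errors get clearer" benefit the post describes.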
Day 8: Cracking the Subarray Sum Logic 🧩

💡 How I solved it:
Instead of a slow O(n^2) brute-force approach, I used the Prefix Sum + Hash Map technique to achieve a highly efficient O(n) solution.
* Tracked the Running Sum: maintained a current_sum as I traversed the array.
* The Difference Trick: for every element, I checked if (current_sum - k) existed in my hash map. If it did, it meant a subarray summing to k had just been completed.
* Frequency Mapping: used a dictionary to store how many times each prefix sum occurred, ensuring every valid subarray was counted.
* Handled the Edge Case: initialized the map with {0: 1} to correctly catch subarrays that sum to k starting right from index 0.

🧠 Key Takeaways:
* Space-Time Optimization: by using O(n) space for the hash map, I eliminated the need for nested loops, drastically speeding up execution.
* Mathematical Insight: reinforced the logic that Sum(i, j) = PrefixSum(j) - PrefixSum(i-1). This is a powerhouse pattern for any range-sum problem.

One step closer to mastering Data Structures and Algorithms! 💻🔥 The logic is getting sharper every day! 📈🤝

#100DaysOfCode #DSA #Python #ProblemSolving #StriverA2ZSheet #CodingJourney
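The steps above can be sketched as follows (the function name is my own):

```python
def count_subarrays_with_sum(nums, k):
    """Count subarrays summing to k via prefix sums + a hash map."""
    seen = {0: 1}    # edge case: the empty prefix has sum 0
    current_sum = 0  # running sum of the traversal so far
    count = 0
    for n in nums:
        current_sum += n
        # The difference trick: each earlier prefix equal to
        # current_sum - k ends a subarray summing to k right here.
        count += seen.get(current_sum - k, 0)
        # Frequency mapping: record this prefix sum for later elements.
        seen[current_sum] = seen.get(current_sum, 0) + 1
    return count

print(count_subarrays_with_sum([1, 1, 1], 2))  # → 2
```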
🚀 LeetCode Win: Missing Number (Beats 100%)

Solved this problem using a simple mathematical insight instead of overcomplicating the logic ⚡

🧠 Core Idea: if a list contains the numbers from 0 → n with exactly one missing, you don't have to search for it.

👉 Instead, I compared:
- Expected sum (0 → n)
- Actual sum (given array)

⚙️ Approach:
1. Compute the expected sum of the range 0 → n
2. Compute the actual sum of the array
3. Missing Number = Expected Sum - Actual Sum

🔥 Why this approach stands out:
⏱️ O(n) time complexity
💾 O(1) space complexity
❌ No extra loops
❌ No sorting
✔️ Clean & efficient

📊 Result:
⚡ Runtime: 0 ms (Beats 100%)
📉 Memory: Optimized

💭 Key Learning: the best solutions aren’t always complex. Sometimes, stepping back and thinking mathematically changes everything. 🚀

Consistency check: showing up, solving daily, improving 1% 💪

#LeetCode #Algorithms #DataStructures #ProblemSolving #Python #CodingJourney #100DaysOfCode #DeveloperLife #WomenWhoCode #TechGrowth #CodeDaily #ProgrammingLife #SoftwareEngineer #ConsistencyWins
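A minimal sketch of the sum-difference idea (the function name echoes the problem title, but the code here is my own):

```python
def missing_number(nums):
    """Missing value in 0..n: expected sum minus actual sum."""
    n = len(nums)                # the list holds n of the n+1 values 0..n
    expected = n * (n + 1) // 2  # sum of 0..n by Gauss's formula
    return expected - sum(nums)  # the gap is exactly the missing number

print(missing_number([3, 0, 1]))  # → 2
```

Using the closed-form `n * (n + 1) // 2` avoids even the "generate the range" step, keeping the whole solution at one pass over the input.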
One habit I’ve started building when working with data: before writing any logic, I always run:

    df.head()
    df.info()
    df.describe()

It sounds obvious. But early on, I skipped this step. I would immediately start writing transformations, and later realize things like:
- columns were strings instead of numbers
- values had unexpected formats
- missing data existed where I didn’t expect it

Now I try to slow down and understand the data first. It saves a surprising amount of time later.

💡 Data engineering lesson I’m learning: understanding the data is often more important than writing the code.

#DataEngineering #Python #Pandas
🚀 Solved Another Sliding Window Problem on LeetCode!

Today’s problem: Maximum Number of Vowels in a Substring of Given Length (LeetCode #1456)

💡 Problem Summary: given a string s and an integer k, find the maximum number of vowels in any substring of length k.

❌ Brute Force Approach:
- Generate all substrings of size k
- Count vowels in each substring
- Time complexity: O(n × k) → not efficient

✅ Optimized Approach: Sliding Window
Instead of recalculating everything:
- Count vowels in the first window
- Slide the window forward: add the entering character, remove the leaving character
- Track the maximum count

👉 Core Idea: new count = old count + (1 if entering char is a vowel) - (1 if leaving char is a vowel)

💻 Clean Code:

    def maxVowels(s, k):
        vowels = set("aeiou")
        count = 0
        for i in range(k):
            if s[i] in vowels:
                count += 1
        max_count = count
        for i in range(k, len(s)):
            if s[i] in vowels:
                count += 1
            if s[i - k] in vowels:
                count -= 1
            max_count = max(max_count, count)
        return max_count

⚡ Complexity:
Time: O(n)
Space: O(1)

🧠 Key Takeaway: Sliding Window is not just a technique — it’s a mindset. You reuse previous computation instead of recalculating everything.

🔥 This pattern applies to:
- Strings & arrays
- Substring / subarray problems
- Real-world streaming data

#DSA #LeetCode #Coding #SlidingWindow #InterviewPreparation #Python #ProblemSolving