Efficiency matters 🚀

The Sliding Window Maximum is a classic problem where a brute-force approach often fails. To get it down to linear time, we use a Monotonic Deque.

The Logic:
🔹 Keep it fresh: Remove indices from the front once they fall out of the window range.
🔹 Stay Monotonic: Before adding a new value, pop smaller values from the back. They will never be the maximum if a newer, larger value is present.
🔹 Peak Performance: The maximum for the current window is always sitting at the front of your deque.

The Result: Every element is processed at most twice (one push, one pop), making the algorithm O(n) and incredibly fast for large-scale data.

Implementation: https://htmlify.me/r/5o63

#Algorithms #Python #CodingTips #DataStructures
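A minimal sketch of the three rules above (a standard monotonic-deque implementation, not necessarily the linked one):

```python
from collections import deque

def sliding_window_max(nums, k):
    """Max of each length-k window in O(n) using a monotonic deque of indices."""
    dq = deque()  # holds indices; their values decrease from front to back
    result = []
    for i, x in enumerate(nums):
        # Keep it fresh: drop the front index once it falls out of the window
        if dq and dq[0] <= i - k:
            dq.popleft()
        # Stay monotonic: smaller values at the back can never be a future max
        while dq and nums[dq[-1]] <= x:
            dq.pop()
        dq.append(i)
        # Peak performance: the front index always holds the window max
        if i >= k - 1:
            result.append(nums[dq[0]])
    return result
```

Each index is appended once and popped at most once, which is where the O(n) bound comes from.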
Sliding Window Maximum Algorithm Optimized with Monotonic Deque
I just tackled the Max Sum in Configuration problem! Instead of using a brute-force O(n^2) approach by rotating the array manually, I used a mathematical observation to solve it in O(n) time with O(1) space.

The Approach:
1. Calculate the total sum of all elements and the initial weighted sum (index * value).
2. Observe the pattern: when we rotate the array, the change in the weighted sum follows a specific relation.
3. By deriving the formula Next_Sum = Current_Sum + Total_Sum - (n * last_element), we can calculate the sum of any rotation in constant time.
4. Iterate through all possible rotations and keep track of the maximum value.

This optimization significantly improves performance for larger datasets.

Implementation: https://htmlify.me/r/dj87

#Algorithm #Python #DataStructures #CodingLife #Optimization
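The four steps above can be sketched as follows (a straightforward implementation of the stated formula, assuming right rotation by one element per step):

```python
def max_rotation_sum(arr):
    """Max over all rotations of sum(i * arr[i]) in O(n) time, O(1) space."""
    n = len(arr)
    total = sum(arr)                              # step 1: total sum
    cur = sum(i * v for i, v in enumerate(arr))   # step 1: initial weighted sum
    best = cur
    # Steps 2-3: rotating right by one raises every index by 1 (adds total),
    # except the element wrapping from the end, which loses n * its value.
    for i in range(1, n):                         # step 4: try every rotation
        cur = cur + total - n * arr[n - i]
        best = max(best, cur)
    return best
```

For example, [8, 3, 1, 2] reaches its best weighted sum, 29, at the rotation [3, 1, 2, 8].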
Ever wonder how to minimize costs when equalizing tower heights? Here is a breakdown of the optimal approach for the Equalize the Towers problem.

The Goal: Make all towers the same height by adding or removing blocks. Each tower has a unique cost per unit of change, and we need to find the specific height that results in the lowest total expenditure.

The Approach: The total cost function here is convex, meaning if you were to graph the cost against all possible target heights, it would form a clear U-shape. This mathematical property allows us to find the minimum without checking every single height.

Ternary Search Strategy: Instead of a linear search, we use Ternary Search. We divide our range of possible heights into three segments. By comparing the costs at two internal midpoints, we can determine which third of the range the minimum cost cannot be in and discard it.

Efficiency: While a brute-force search would be slow, Ternary Search cuts the search space down exponentially. This results in a time complexity of O(N * log(MaxHeight)), making it highly efficient for large datasets.

Check out the clean Python implementation here: https://htmlify.me/r/h0ks

#Algorithms #DataStructures #Python #Coding #GeeksforGeeks #Optimization
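A compact sketch of the strategy, assuming the usual cost model for this problem (cost(H) = sum of c * |h - H| over all towers); the linked implementation may differ in details:

```python
def min_equalize_cost(heights, costs):
    """Ternary-search the convex cost curve for the cheapest target height."""
    def cost(H):
        # Total expenditure to bring every tower to height H
        return sum(c * abs(h - H) for h, c in zip(heights, costs))

    lo, hi = min(heights), max(heights)
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if cost(m1) < cost(m2):
            hi = m2 - 1   # the minimum cannot lie in [m2, hi]
        else:
            lo = m1 + 1   # the minimum cannot lie in [lo, m1]
    return min(cost(H) for H in range(lo, hi + 1))
```

Each iteration discards a third of the height range, giving the O(N * log(MaxHeight)) bound from the post.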
Ever wondered how machines actually "learn"? 🤖

I just published a new blog post, "Beyond Model Fit: Demystifying Gradient Descent from Scratch," where I break down the core engine of machine learning into simple, actionable concepts. Whether you're a beginner or looking to sharpen your fundamentals, this guide covers everything from the math to the implementation.

Check it out here: https://lnkd.in/grreedRw

Previous blog: Don't Stop at Model Fitting: The Full Journey of Regression in Data Science
https://lnkd.in/d59y_Hga

#MachineLearning #DataScience #GradientDescent #Python #LearningFromScratch
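For readers who want the one-line version of the core engine before clicking through: gradient descent just steps repeatedly against the gradient. A tiny from-scratch sketch (not code from the blog post):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient to minimize a function."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimizing f(x) = (x - 3)^2, whose gradient is 2 * (x - 3);
# the iterates converge toward the minimum at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```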
I found a 44× speed difference in a simple sum() operation 🚀

I benchmarked three ways of summing 100,000 numbers:
• Manual for loop → ~11.4 ms
• Built-in sum() → ~8.27 ms
• np.sum() → ~0.259 ms

NumPy was ~44× faster than a Python loop ⚡

The real insight isn't that "NumPy is faster." It's about execution layers. A Python loop runs inside the interpreter with dynamic checks every iteration. sum() shifts the work into C. np.sum() operates on contiguous memory using optimized low-level code, avoiding Python-level iteration entirely.

Same computation. Different execution layer. Massive performance gap.

#Python #NumPy #DataScience #LearningInPublic
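The benchmark is easy to reproduce with the standard library's timeit; absolute numbers will differ per machine, and the NumPy row only runs if NumPy is installed (a sketch of the methodology, not the author's exact script):

```python
import timeit

data = list(range(100_000))

def manual_loop():
    total = 0
    for x in data:   # interpreter dispatch + dynamic checks every iteration
        total += x
    return total

loop_t = timeit.timeit(manual_loop, number=20)
builtin_t = timeit.timeit(lambda: sum(data), number=20)   # loop runs in C
print(f"loop:   {loop_t:.4f}s")
print(f"sum():  {builtin_t:.4f}s")

try:
    import numpy as np
    arr = np.array(data)   # contiguous memory, vectorized reduction
    np_t = timeit.timeit(lambda: np.sum(arr), number=20)
    print(f"np.sum(): {np_t:.4f}s")
except ImportError:
    pass  # the first two rows still show the execution-layer gap
```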
"Chatbot: RAG isn't enough. Hybrid retrieval is."

Vector search alone was missing simple queries like "head office". Added a lightweight hybrid layer:
→ query expansion
→ keyword variants
…but applied thresholding only on the primary query to make sure recall doesn't drop.

Semantic search + controlled lexical signals.

Snippet:
queries = [base] + expand(base)
primary = chroma.query(base, n_results=8)
fallback = chroma.query(queries[1], n_results=8)

Fewer "I don't know" responses without hallucinating.

#LLM #Search #Retrieval #AIEngineering #Python
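The snippet is shorthand for the real pipeline. A self-contained sketch of the same idea, with the vector store stubbed out by precomputed semantic scores and a hypothetical `expand` synonym map (none of this is the author's production code):

```python
def expand(query):
    # Hypothetical keyword-variant expansion; real systems might use a synonym map
    synonyms = {"head office": ["headquarters", "main office"]}
    return synonyms.get(query, [])

def keyword_score(query, doc):
    # Crude lexical signal: fraction of query terms appearing in the document
    terms = query.lower().split()
    return sum(t in doc.lower() for t in terms) / len(terms)

def hybrid_retrieve(query, docs, vector_scores, threshold=0.3, k=3):
    """Blend semantic scores with lexical signals from expanded queries.
    The threshold applies only to the primary query's semantic score, so
    recall gained from keyword variants is not cut off."""
    results = []
    for doc, sem in zip(docs, vector_scores):
        lex = max(keyword_score(q, doc) for q in [query] + expand(query))
        if sem >= threshold or lex > 0:
            results.append((doc, 0.7 * sem + 0.3 * lex))
    results.sort(key=lambda t: -t[1])
    return [d for d, _ in results[:k]]
```

Here "head office" still surfaces a document that only says "headquarters", which pure vector scoring below the threshold would have dropped.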
🚀 Day 16/160 – GFG DSA Challenge

Today's focus: Valid Anagram (Optimal Approach)

🔹 Topic: Strings / Hashing
🔹 Approach: Instead of sorting, used a frequency map to achieve linear time complexity.
• increment count for characters in the first string
• decrement count for characters in the second string
• if all frequencies become zero → strings are anagrams
🔹 Complexity:
⏱ Time – O(n)
📦 Space – O(n)

💡 Key Learning: Sorting is intuitive but not always optimal. Using a hash map for frequency counting reduces time complexity from O(n log n) to O(n), which is a big improvement for large inputs.

Focusing more on writing efficient logic rather than just making the code work 💪

#DSA #GFG160 #Day16 #Strings #Hashing #Python #ProblemSolving #Consistency
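The increment/decrement steps above translate directly to code (a standard frequency-map version of the approach described):

```python
from collections import Counter

def is_anagram(s, t):
    """O(n) anagram check via a frequency map instead of O(n log n) sorting."""
    if len(s) != len(t):
        return False
    counts = Counter()
    for ch in s:
        counts[ch] += 1   # increment for characters in the first string
    for ch in t:
        counts[ch] -= 1   # decrement for characters in the second string
    # If every frequency cancelled to zero, the strings are anagrams
    return all(v == 0 for v in counts.values())
```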
A Decision Tree is a type of machine learning algorithm and data structure used for classification and regression tasks. It works like a flowchart, where each internal node represents a decision based on a feature, each branch represents the outcome of that decision, and each leaf node represents the final result (class label or value).

🔎 Key Features of a Decision Tree
• Root Node: The starting point, representing the entire dataset.
• Decision Nodes: Points where the data is split based on a condition (e.g., "Is age > 30?").
• Branches: Possible outcomes of a decision (Yes/No, True/False).
• Leaf Nodes: Final output (e.g., "Approved" or "Not Approved").

#machinelearning #ml #decisiontree #tree #datascience #supervisedlearning #python #dataanalysis #dataanalytics
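The flowchart structure maps directly onto nested conditionals. A toy hand-written tree using the example splits and labels above (the feature names and the 50,000 income threshold are made up for illustration; learned trees derive their splits from data):

```python
def loan_decision(age, income):
    """Two-level decision tree: root node -> decision node -> leaf nodes."""
    if age > 30:                  # root node: split on age
        if income > 50_000:       # decision node: split on income
            return "Approved"     # leaf node
        return "Not Approved"     # leaf node
    return "Not Approved"         # leaf node
```

Libraries like scikit-learn build the same structure automatically, choosing each split to best separate the training labels.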
did you know AMD has a separate code generator just for GEMMs? it's called Tensile, and here's what we've learned digging into it:

Tensile is a Python script that writes AMDGCN assembly, benchmarks millions of kernel variants, and builds a lookup table mapping every M×N×K to its optimal kernel. at runtime, rocBLAS traverses a MessagePack catalog to find the best kernel for your exact problem size.

the code generation is where it gets most interesting. KernelWriterAssembly.py translates 20 parameters into raw assembly: tile sizes, unroll depths, prefetch strategies, and most importantly - how to interleave memory ops with MFMA instructions so the matrix units never sit idle.

MFMA operates per-wavefront, not per-thread. for a 32×32×8 FP16 op, each of the 64 threads holds 4 elements of A, 4 of B, and 16 of C. the instruction takes 16-32 cycles. you hide that latency by interleaving VALU ops in MFMA's execution shadow.

the search space is brutal. v1 benchmarking was brute force across tile sizes, unroll depths, and problem sizes - 23 million kernel launches total. v2 uses fork/join phases to make this tractable.

the output is code objects indexed by a hierarchical schema. lazy loading means only kernels you actually call get mapped into memory. 23 million benchmarks distilled into one ds_read → mfma → buffer_load pipeline!
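to make the lookup-table idea concrete, here's a toy sketch of the concept only - the catalog entries, kernel names, and nearest-size fallback below are invented, and Tensile's real MessagePack schema and matching logic are far more elaborate:

```python
# Invented example catalog: (M, N, K) problem sizes -> kernel name,
# as if chosen by offline benchmarking.
CATALOG = {
    (1024, 1024, 1024): "mfma_32x32x8_unroll16",
    (4096, 4096, 128):  "mfma_16x16x16_splitk",
}

def select_kernel(m, n, k):
    """Exact match first, then fall back to the closest benchmarked size."""
    if (m, n, k) in CATALOG:
        return CATALOG[(m, n, k)]
    # crude fallback: compare total problem volume (illustrative heuristic)
    nearest = min(CATALOG, key=lambda s: abs(s[0] * s[1] * s[2] - m * n * k))
    return CATALOG[nearest]
```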
🚦 Hourly Traffic Volume Prediction using Machine Learning (Code Walkthrough)

I recorded a short walkthrough of my Jupyter Notebook where I:
- Load and preprocess a real-world traffic dataset (48,120 hourly records from Kaggle).
- Perform feature engineering on DateTime (year, month, day, hour, dayofweek) and junction information.
- Build and compare four regression models:
• Linear Regression (baseline)
• Random Forest Regressor
• XGBoost Regressor
• CatBoost Regressor

Key result: Random Forest achieved the best performance with MAE ≈ 2.4 and R² ≈ 0.97 for hourly traffic prediction at city junctions.

#DataScience #MachineLearning #Python #TrafficPrediction #RandomForest #XGBoost #CatBoost #Jupyter #HourlyTrafficPrediction
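The DateTime feature-engineering step mentioned above can be sketched with just the standard library (the notebook presumably uses pandas `.dt` accessors; this is the same idea in plain Python):

```python
from datetime import datetime

def datetime_features(ts):
    """Expand a timestamp string into the model features named in the post."""
    dt = datetime.fromisoformat(ts)
    return {
        "year": dt.year,
        "month": dt.month,
        "day": dt.day,
        "hour": dt.hour,
        "dayofweek": dt.weekday(),  # Monday = 0
    }
```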
🧠 Insight on Rotated Sorted Arrays — The Power of %

While working on a rotated sorted array problem, I learned a simple but powerful trick: you don't need to rotate the array at all. Instead of physically modifying the array, you can treat it as circular using the modulo operator (%).

💡 The Core Idea
When you access an index using i % n, the index automatically wraps around once it reaches the end. So instead of:
• copying arrays manually
• rotating elements
• handling edge cases separately
you simulate circular traversal mathematically.

🔄 Why This Is Powerful
If a sorted array is rotated, there will still exist a sequence of n consecutive non-decreasing elements — just starting from a different position. By traversing up to 2n elements using modulo indexing, you effectively:
• simulate a full circular pass
• detect sorted continuity
• avoid extra space

🧑‍💻 Python Code: https://lnkd.in/gc-eqDpR

🎯 Takeaway
Sometimes the smartest solution isn't changing the data — it's changing how you navigate it. Understanding circular traversal through % makes rotation-based problems much cleaner and more intuitive.

👉 What's a small trick you learned that completely changed how you approach a problem?

#ProblemSolving #DataStructures #Algorithms #SoftwareEngineering #LearningInPublic

Rajan Arora
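A minimal sketch of the 2n-scan idea described above (one common application: checking whether an array is a rotation of a sorted array; the linked code may solve a different variant):

```python
def is_rotated_sorted(arr):
    """Detect a rotated sorted array without rotating it: scan up to 2n
    positions with i % n, looking for n consecutive non-decreasing values."""
    n = len(arr)
    run = 1
    for i in range(1, 2 * n):
        if arr[(i - 1) % n] <= arr[i % n]:   # modulo wraps the index around
            run += 1
            if run >= n:                     # found n-long sorted continuity
                return True
        else:
            run = 1                          # continuity broken; restart
    return False
```

No copies, no rotations, O(1) extra space: the modulo operator does the "rotation" on the fly.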