Why Sorting Changes Everything: Two Sum in O(1) Space

The classic Two Sum problem typically requires a HashMap for O(n) time and O(n) space. But when the input is already sorted, a completely different approach emerges: two pointers converging from opposite ends.

The key insight: if the current sum is too large, the right pointer must move left (toward smaller values); if too small, the left pointer must move right (toward larger values). This eliminates the need for any auxiliary data structure.

The Real Lesson: data properties unlock different algorithmic approaches. Sorted data enables two-pointer techniques, eliminating the space overhead. The same principle applies across domains: leveraging pre-existing order (timestamps in logs, sorted database indices) can turn O(n)-space solutions into O(1)-space ones at the same time complexity.

Time: O(n) | Space: O(1)

#AlgorithmOptimization #TwoPointers #SortedArrays #SpaceComplexity #Python #CodingInterview #SoftwareEngineering
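A minimal sketch of the converging-pointer idea described above (the function name and sample input are my own, not from the post):

```python
def two_sum_sorted(nums, target):
    """Return indices (left, right) with nums[left] + nums[right] == target,
    assuming nums is sorted ascending; None if no pair exists."""
    left, right = 0, len(nums) - 1
    while left < right:
        s = nums[left] + nums[right]
        if s == target:
            return left, right
        if s > target:
            right -= 1   # sum too large: move to a smaller value
        else:
            left += 1    # sum too small: move to a larger value
    return None

print(two_sum_sorted([1, 3, 4, 6, 10], 9))  # (1, 3) -> 3 + 6 == 9
```

Each step discards exactly one candidate value for good, which is why a single pass suffices and no hash map is needed.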
Don't flatten what naturally has structure.

It's tempting to model everything in a single class: easy to write, easy to read, at least until your data grows. This is where most codebases start, with just one model.

With model composition, each model has a single responsibility, and Pydantic handles nested validation automatically. Structure your models the way your domain is actually structured: the code gets cleaner, the errors get clearer, and reuse becomes obvious.

This and other real-world modelling patterns are covered in Practical Pydantic:
👉 https://lnkd.in/eGiB7ZxU

Model your domain. Not just your data.

#Python #Pydantic #Data #Models #Patterns
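A small sketch of the composition pattern the post argues for (model and field names are my own illustration, not from the course):

```python
from pydantic import BaseModel, ValidationError

# Instead of one flat model with address_street, address_city, ...
class Address(BaseModel):
    street: str
    city: str
    postal_code: str

class User(BaseModel):
    name: str
    address: Address  # nested model: Pydantic validates it automatically

user = User(
    name="Ada",
    address={"street": "1 Main St", "city": "London", "postal_code": "E1 6AN"},
)
print(user.address.city)  # London

# Validation errors point at the nested field, not a flat blob
try:
    User(name="Ada", address={"street": "1 Main St", "city": "London"})
except ValidationError as e:
    print(e.errors()[0]["loc"])  # location includes the nested path, e.g. address.postal_code
```

The `Address` model is now reusable anywhere an address appears, which is the "reuse becomes obvious" point.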
Python Data Visualization Quick Guide V1.0 📊

What's inside:
• Distribution plots (Histogram, KDE, Box, Violin)
• Categorical analysis (Bar, Count, Pie)
• Relationship plots (Scatter, Regression, Bubble)
• Time series visualizations (Line, Area)
• Multivariate exploration (Heatmaps, Pairplots)
• Hierarchical charts (Sunburst, Treemap)
• Geographic maps with Plotly
• Faceting and subplot layouts
• A Visualization Selection Guide to help choose the right chart quickly

🔗 Notebook link: https://lnkd.in/daHNQpdq

I'd love to hear your feedback and suggestions for improving it further.

#Python #DataScience #DataVisualization #EDA #MachineLearning #Plotly #Seaborn #Matplotlib
Insert Interval (LeetCode 57) - Medium

I explored a more optimized way to handle intervals when the input is already sorted. Instead of re-sorting everything, I learned how to process the intervals in a single linear pass.

Key Learnings:
* Linear scan: since the input is sorted, the problem divides into three logical parts — intervals before the overlap, intervals during the overlap (merge), and intervals after the overlap.
* In-place merging: for the overlapping part, update the start to the min and the end to the max of the conflicting intervals.
* Efficiency: skipping the sort saves an O(N log N) step, so this approach is much faster for pre-sorted data.

Complexity:
⏱️ Time: O(N), since we iterate through the list only once.
📂 Space: O(N), to store the result list.

Consistency is key.

#LeetCode #CodingJourney #Blind75 #SDEPrep #DataStructures #Python #ProblemSolving #TechCommunity
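The three-part linear scan can be sketched like this (a self-contained version of the standard approach; the sample inputs are the usual LeetCode examples):

```python
def insert(intervals, new_interval):
    res = []
    i, n = 0, len(intervals)
    # Part 1 — before the overlap: intervals that end before the new one starts
    while i < n and intervals[i][1] < new_interval[0]:
        res.append(intervals[i])
        i += 1
    # Part 2 — during the overlap: merge with min start / max end
    while i < n and intervals[i][0] <= new_interval[1]:
        new_interval = [min(new_interval[0], intervals[i][0]),
                        max(new_interval[1], intervals[i][1])]
        i += 1
    res.append(new_interval)
    # Part 3 — after the overlap: everything remaining is untouched
    res.extend(intervals[i:])
    return res

print(insert([[1, 3], [6, 9]], [2, 5]))  # [[1, 5], [6, 9]]
```

Each interval is visited once and no sort is performed, giving the O(N) time the post quotes.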
🚀 Stop iterating through rows like it's 2010.

In a recent pipeline, we were processing 5 million records to calculate a rolling score. Using a standard loop took forever and pegged the CPU at 100%.

Before optimisation:

for i in range(len(df)):
    df.at[i, 'score'] = df.at[i, 'val'] * 1.05 if df.at[i, 'flag'] else df.at[i, 'val']

After optimisation:

import numpy as np
df['score'] = np.where(df['flag'], df['val'] * 1.05, df['val'])

Performance gain: 85x faster execution.

Vectorisation isn't just a "nice to have": it's the difference between a pipeline that crashes at 2 AM and one that finishes in seconds. By letting NumPy handle the heavy lifting in C, we eliminated the per-row Python overhead.

If you're still using `.iterrows()` or manual loops for column transformations, it's time to refactor. The performance delta on large datasets is simply too massive to ignore.

What is the biggest "bottleneck" function you've refactored recently that gave you a massive speedup?

#DataEngineering #Python #PerformanceTuning #Vectorization #DataScience
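A self-contained check that the two versions agree, on synthetic data standing in for the post's 5M-row pipeline (column names come from the post; the data itself is made up):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the pipeline's DataFrame (much smaller here)
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "val": rng.random(10_000) * 100,
    "flag": rng.integers(0, 2, 10_000).astype(bool),
})

# Loop version from the post
loop = df["val"].copy()
for i in range(len(df)):
    if df.at[i, "flag"]:
        loop.at[i] = df.at[i, "val"] * 1.05

# Vectorised version: one C-level pass instead of 10,000 Python iterations
vec = np.where(df["flag"], df["val"] * 1.05, df["val"])

assert np.allclose(loop.to_numpy(), vec)
```

The exact speedup depends on data size and hardware, but the shape of the win is the same: `np.where` evaluates both branches as whole-array operations and selects elementwise, so no per-row Python bytecode runs.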
📅 Day 9/30 — NumPy Indexing & Slicing

Continuing my 30-day journey into data science, today I explored how to efficiently access and manipulate data using NumPy arrays.

What I worked on today:
🔢 Accessing elements using indexing (including negative indexing)
✂️ Extracting data using array slicing
🔁 Selecting elements using step slicing
🎯 Using index arrays to pick specific elements
🧠 Applying boolean masking to filter data based on conditions

It was interesting to see how NumPy provides powerful ways to quickly access, modify, and filter data, which is very useful when working with large datasets.

➡️ Next step: exploring more advanced NumPy operations and applying them to real-world data.

#LearningInPublic #Python #DataScience #NumPy #30DaysOfLearning #ProgrammingJourney
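The five techniques above, condensed into one runnable snippet (my own toy array, one line per technique):

```python
import numpy as np

a = np.arange(10)         # [0 1 2 3 4 5 6 7 8 9]

print(a[3], a[-1])        # indexing + negative indexing -> 3 9
print(a[2:7])             # slicing                      -> [2 3 4 5 6]
print(a[::2])             # step slicing                 -> [0 2 4 6 8]
print(a[[1, 4, 7]])       # index array ("fancy" index)  -> [1 4 7]
print(a[a % 2 == 0])      # boolean masking              -> [0 2 4 6 8]
```

Note that slices are views into the original array, while index arrays and boolean masks return copies — a distinction that matters once you start modifying the results.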
From raw data to meaningful insights!

Just wrapped up a hands-on project exploring multiple linear regression — diving into data cleaning, visualization, feature relationships, and building predictive models. It's always rewarding to see how patterns emerge when the right techniques are applied.

Model Performance:
• MSE: 8108.57
• MAE: 73.80
• RMSE: 90.05
• R² Score: 0.759
• Adjusted R²: 0.599

Key takeaways:
• The power of visualization in understanding data relationships
• Importance of feature selection and assumptions in regression
• Turning numbers into actionable insights

Continuously learning, building, and growing in the data space.

Dataset: https://lnkd.in/gDNUVVMc

#DataScience #MachineLearning #Python #DataAnalysis #LearningJourney
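For anyone unfamiliar with how those five metrics relate, here is how each is computed from predictions — on toy numbers of my own, not the post's dataset (the predictor count `p` is likewise an assumption):

```python
import numpy as np

# Toy ground truth and predictions, purely illustrative
y_true = np.array([100.0, 150.0, 200.0, 250.0, 300.0])
y_pred = np.array([110.0, 140.0, 210.0, 240.0, 310.0])

mse = np.mean((y_true - y_pred) ** 2)              # mean squared error
mae = np.mean(np.abs(y_true - y_pred))             # mean absolute error
rmse = np.sqrt(mse)                                # RMSE is just sqrt(MSE)
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                           # fraction of variance explained

# Adjusted R² penalises extra predictors: n samples, p features (p assumed here)
n, p = len(y_true), 2
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)

print(mse, mae, rmse, round(r2, 3))  # 100.0 10.0 10.0 0.98
```

The gap between R² (0.759) and Adjusted R² (0.599) in the post is exactly what the last formula captures: adding features always raises R², so the adjusted version discounts it by the feature count.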
Tab 3 is live — and this one gets into the real groundwork of any ML pipeline! 🧹

After exploring the data in Tabs 1 & 2, Tab 3 handles end-to-end Data Preprocessing:
• Train / Validation / Test split with a dynamic slider
• Stratified splitting with a fallback for small class sizes
• One-hot encoding for categorical features
• Standard scaling for numerical features
• Class balance check — with optional SMOTE for imbalanced datasets

Clean data in, better models out. 🚀 More tabs coming soon!

#DataScience #MachineLearning #DataPreprocessing #SMOTE #Streamlit #Python #FeatureEngineering #BuildingInPublic #DataAnalytics #OpenToWork
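A pandas-only sketch of three of those steps — split, one-hot encoding, and standard scaling (the DataFrame and its columns are my own invention, not the app's, and this omits stratification and SMOTE):

```python
import pandas as pd

# Toy frame standing in for the app's uploaded dataset
df = pd.DataFrame({
    "color": ["red", "blue", "red", "green", "blue", "red", "green", "blue"],
    "size":  [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0],
    "label": [0, 1, 0, 1, 0, 1, 0, 1],
})

# Train/test split (the app drives the ratio from a slider; 75/25 here)
train = df.sample(frac=0.75, random_state=42)
test = df.drop(train.index)

# One-hot encode categoricals; align test to the train columns so unseen
# categories don't change the feature layout
X_train = pd.get_dummies(train.drop(columns="label"))
X_test = (pd.get_dummies(test.drop(columns="label"))
            .reindex(columns=X_train.columns, fill_value=0))

# Standard scaling: fit mean/std on train ONLY, then apply to both splits
mu, sigma = X_train["size"].mean(), X_train["size"].std()
X_train["size"] = (X_train["size"] - mu) / sigma
X_test["size"] = (X_test["size"] - mu) / sigma

print(X_train.columns.tolist())
```

Fitting the scaler on the training split alone is the detail that prevents test-set leakage; in a real pipeline the same rule applies to the encoder and to SMOTE.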
Binary Search on Rotated Arrays: Adapting Logarithmic Search to Broken Invariants

Standard binary search requires sorted data. When an array is rotated (e.g., [4,5,6,7,0,1,2]), the sorted property breaks globally but persists locally: one half is always properly sorted.

The adaptation: determine which half is sorted by comparing endpoints, then check if the target falls within that sorted range. This preserves O(log n) complexity despite the rotation disrupting global order.

The Design Lesson: when invariants break, look for partial invariants. Here, global sorting is lost but local sorting remains. This "find the preserved property" approach applies broadly — searching in nearly-sorted data, handling corrupted indices with known structure, or working with time-series data with periodic gaps. The algorithm adapts to what guarantees still hold.

Time: O(log n) | Space: O(1)

#BinarySearch #AdaptiveAlgorithms #RotatedArrays #InvariantPreservation #Python #AlgorithmDesign #SoftwareEngineering
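The adapted search can be sketched as follows (a standard version of the technique, using the post's example array; the function name is my own):

```python
def search_rotated(nums, target):
    lo, hi = 0, len(nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if nums[mid] == target:
            return mid
        if nums[lo] <= nums[mid]:          # left half is the sorted one
            if nums[lo] <= target < nums[mid]:
                hi = mid - 1               # target inside the sorted left half
            else:
                lo = mid + 1
        else:                              # right half is the sorted one
            if nums[mid] < target <= nums[hi]:
                lo = mid + 1               # target inside the sorted right half
            else:
                hi = mid - 1
    return -1

print(search_rotated([4, 5, 6, 7, 0, 1, 2], 0))  # 4
```

The partial invariant does all the work: only the sorted half allows a range check, and the target is either inside that range or provably in the other half, so each iteration still halves the search space.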
Top Seaborn Plots Every Data Analyst Must Know in 2026

Data analysts rely heavily on visualizations to understand patterns hidden inside datasets. Python's Seaborn library simplifies statistical visualization and helps analysts create clear, attractive charts with minimal code.

This guide explains the most important Seaborn plots every data analyst should know in 2026. From scatter plots to heatmaps, these visualizations help uncover trends, correlations, and patterns quickly.

#DataAnalytics #PythonVisualization #SeabornPlots #DataScience #PythonProgramming #analyticsinsight #analyticsinsightmagazine

Read More 👇 https://zurl.co/mvmNa
🚀 Simplifying Trees in DSA! 🌳💻

While Arrays and Linked Lists are great linear structures, hierarchical data requires a non-linear approach — like Trees! To make revising easier, I created this visual cheat sheet.

Just like a real-world tree has a root and leaves, a Tree data structure starts at the Root Node and branches out to intermediate and leaf nodes.

Here is what I have visually summarized in these notes:
✅ The core difference between Linear and Non-Linear structures
✅ 7 Types of Trees (including BST, Strict, Complete, and Skew Trees)
✅ Array Representation vs. Logical View
✅ Tree Traversal logic (Pre-order, In-order, Post-order) complete with Python code! 🐍

Visualizing the flow from the root down to the leaf nodes is a game-changer for understanding algorithms.

Take a look and let me know in the comments — what is your favorite data structure to work with? 👇

#DSA #DataStructures #Algorithms #Python #CodingJourney #TechNotes #SoftwareEngineering #LearnInPublic
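Since the cheat sheet itself isn't visible here, a compact version of the three traversals it mentions (my own minimal Node class and a three-node BST):

```python
class Node:
    def __init__(self, val):
        self.val, self.left, self.right = val, None, None

def preorder(n):   # root, left, right
    return [n.val] + preorder(n.left) + preorder(n.right) if n else []

def inorder(n):    # left, root, right
    return inorder(n.left) + [n.val] + inorder(n.right) if n else []

def postorder(n):  # left, right, root
    return postorder(n.left) + postorder(n.right) + [n.val] if n else []

# A tiny BST:    2
#               / \
#              1   3
root = Node(2)
root.left, root.right = Node(1), Node(3)

print(preorder(root))   # [2, 1, 3]
print(inorder(root))    # [1, 2, 3]  (in-order on a BST yields sorted values)
print(postorder(root))  # [1, 3, 2]
```

The in-order result is the one worth memorising: on any BST it visits values in sorted order, which is a quick correctness check for tree code.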