Maximum Distance Between a Pair of Values

Given nums1 and nums2.
🎯 Target: the maximum distance (j - i) such that i <= j and nums1[i] <= nums2[j].

Thought process: we iterate over nums1, and for each index i we need a valid j in nums2. At first, linear search comes to mind… but wait 🤔 both arrays are sorted in non-increasing order, so we can do better.

Instead of scanning, we binary search on nums2 for the rightmost index j such that nums2[j] >= nums1[i]. If such a j is found, we compute j - i and update the answer. Overall this runs in O(n log m).

And yeah… that's pretty much it. Clean and efficient 😄

#potd #DSA #Algorithms #BinarySearch #problemSolving
Max Distance Between Two Sorted Arrays Using Binary Search
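The approach described above might look like this in Python (a sketch; the function name `max_distance` is mine, not from the post):

```python
def max_distance(nums1, nums2):
    """Max j - i with i <= j and nums1[i] <= nums2[j].

    Both arrays are non-increasing, so for each i we binary-search
    nums2 for the rightmost j with nums2[j] >= nums1[i].
    """
    best = 0
    for i, v in enumerate(nums1):
        lo, hi = i, len(nums2) - 1
        ans = -1
        while lo <= hi:
            mid = (lo + hi) // 2
            if nums2[mid] >= v:
                ans = mid        # valid candidate, try further right
                lo = mid + 1
            else:
                hi = mid - 1     # nums2[mid] too small, go left
        if ans != -1:
            best = max(best, ans - i)
    return best
```

Starting the search at `lo = i` enforces the i <= j constraint directly, so only non-negative distances are ever considered.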
Day 106 ✅ Solved: 3488. Closest Equal Element Queries (Medium)

Today's problem was a great mix of hashing, binary search, and circular-array logic.

💡 Key idea:
- Store the indices of each value in a hashmap
- For each query, find the closest index holding the same value
- Use binary search to locate the neighboring indices efficiently
- Handle the circular nature carefully (wrap-around distance)

⚡ What I learned:
- Preprocessing can drastically reduce per-query time
- Binary search isn't just for sorted arrays; it's powerful with index mapping
- Always consider edge cases like circular distance

📊 Performance: Runtime 200 ms, beats 74%

Every day I'm getting better at recognizing patterns and optimizing solutions. Consistency is paying off. 🔁 On to Day 107!

#LeetCode #DataStructures #Algorithms #CodingJourney #Consistency #Learning #ProblemSolving
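A minimal sketch of the idea, assuming the problem shape as summarized (circular array, each query asks for the nearest other index with the same value; the name `solve_queries` is mine). Since the per-value index lists are sorted, the closest same-value index must be a neighbor of the query within its own group, with wrap-around:

```python
from bisect import bisect_left
from collections import defaultdict

def solve_queries(nums, queries):
    n = len(nums)
    pos = defaultdict(list)          # value -> sorted list of its indices
    for i, v in enumerate(nums):
        pos[v].append(i)

    res = []
    for q in queries:
        idxs = pos[nums[q]]
        if len(idxs) == 1:
            res.append(-1)           # no other index with this value
            continue
        k = bisect_left(idxs, q)     # position of q within its group
        prev = idxs[k - 1]           # previous occurrence (wraps via -1)
        nxt = idxs[(k + 1) % len(idxs)]
        d_prev = (q - prev) % n      # circular distances in each direction
        d_next = (nxt - q) % n
        res.append(min(d_prev, d_next))
    return res
```

Preprocessing is O(n); each query is O(log n) for the bisect, which matches the post's point about preprocessing paying off at query time.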
🚀 Today's DSA Challenge 12: Binary Search

Today I learned binary search in a clear, practical way. The algorithm always checks the middle element because that splits the array in half at every step, making the search much faster. We use a while loop instead of a for loop because the number of iterations isn't known in advance; the loop runs while left <= right.

The approach is simple: initialize left = 0 and right = n - 1, then compute the middle index. If the target is at the middle, return that index. If the target is smaller, search the left half; otherwise, search the right half. If the element is never found, return -1.

Complexity: the best case is O(1), when the element sits at the middle initially; the worst case is O(log n), since the search space halves at every step. Space complexity is O(1), as no extra memory is used.

Key takeaway: binary search works only on sorted arrays.

#DSA #BinarySearch #LearningInPublic #CodingJourney #Algorithms
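The steps above, written out in Python:

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent."""
    left, right = 0, len(arr) - 1
    while left <= right:             # iteration count unknown -> while loop
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid               # found at the middle
        elif target < arr[mid]:
            right = mid - 1          # search the left half
        else:
            left = mid + 1           # search the right half
    return -1                        # not present
```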
🚀 Day 40/150 of #150DaysOfDSA

📌 Task: Find Target Indices After Sorting Array

You are given a 0-indexed integer array nums and a target element target. A target index is an index i such that nums[i] == target. Return a list of the target indices of nums after sorting nums in non-decreasing order. If there are no target indices, return an empty list. The returned list must be sorted in increasing order.

#150DaysOfCode #DSA #CPP #Algorithms #CodingJourney #ProblemSolving
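One possible sketch (the post doesn't show its own solution): since only target's positions after sorting matter, counting values below and equal to target is enough, with no actual sort needed:

```python
def target_indices(nums, target):
    # After sorting, target occupies indices [smaller, smaller + equal)
    smaller = sum(1 for x in nums if x < target)
    equal = nums.count(target)
    return list(range(smaller, smaller + equal))
```

This runs in O(n) time and O(1) extra space, versus O(n log n) for the sort-then-scan approach.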
🚀 Day 22 of DSA Practice

Today's problem was about finding the minimum distance between mirror pairs in an array.

🔍 Problem summary: Given an array of integers, a pair (i, j) is called a mirror pair if reversing the digits of nums[i] gives nums[j]. The goal is to find the minimum absolute distance between such pairs. If no pair exists, return -1.

💡 Key insight: instead of checking all pairs (which would be slow), we can:
- Reverse each number efficiently
- Use a hashmap to store previously seen numbers and their indices
- Check whether the reversed value already exists and update the minimum distance

⚡ Complexity: Time O(n), Space O(n)

📌 Takeaway: hashing cuts out brute-force comparisons and keeps the solution efficient even for large inputs.

#Day22 #DSA #CodingPractice #ProblemSolving #Algorithms #InterviewPrep
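A hedged sketch of the hashmap idea, assuming a pair (i, j) with i < j where reversing the digits of nums[i] yields nums[j] (the names `min_mirror_distance` and `rev` are mine). Each index is recorded under its value's digit-reverse, so a later element can look itself up directly:

```python
def min_mirror_distance(nums):
    def rev(x):
        r = 0
        while x:
            r, x = r * 10 + x % 10, x // 10
        return r

    last_rev = {}                     # reversed value -> latest index seen
    best = -1
    for j, v in enumerate(nums):
        if v in last_rev:             # some earlier nums[i] reverses to v
            d = j - last_rev[v]
            best = d if best == -1 else min(best, d)
        last_rev[rev(v)] = j          # record this index under its reverse
    return best
```

Keeping only the latest index per reversed value is enough, because any earlier index would only give a larger distance.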
The end of 2025 marks the point at which the human monopoly on reason was lost. The same wave of models that made coding agents suddenly work also made autonomous math research possible. In the last few months, LLMs have fully solved more than 50 Erdős problems. The progress is exponential; each new rung on the intelligence ladder unlocks deeper layers of capability. Data: from Terence Tao's wiki. https://lnkd.in/dwG6fMpq
I spent days going deep on TurboQuant. Not the headlines. The actual paper. Here is what I took away.

The real problem is not "KV cache is big." It is two specific failures every prior method had:

1. The metadata tax. Quantize to 3 bits, but store min/max per block in float16, and that adds ~0.25 bits back per value. Your "3-bit" compression is actually 3.25 bits. KIVI, KVQuant: all of them paid this tax.

2. The softmax bias. MSE-optimal compression shrinks vectors toward zero. At 1 bit, dot products come out at 63.7% of the true value. This does NOT cancel in softmax: because of the exponential, attention flattens across tokens instead of focusing sharply. This is why KIVI misses ~2% of needles.

TurboQuant solves both with three moves:
→ Random rotation: maps every KV vector to the same Gaussian distribution, regardless of input. The distribution is known analytically, so zero metadata is needed.
→ Lloyd-Max codebook: optimal snap points precomputed once for that distribution. No k-means. No calibration data. Ever.
→ QJL on the residual: 1 bit on the leftover error, proven to make dot products unbiased. E[⟨y, x̃⟩] = ⟨y, x⟩ exactly.

Total: b bits. Zero overhead. Provably unbiased attention.

What makes it genuinely different: it comes with a proof. Shannon's rate-distortion law sets a hard distortion floor of 4^(-b); no algorithm beats it. TurboQuant sits within 2.7x of that floor, with a mathematical guarantee. Prior methods have no such bound; they could be 50x above the limit and you would not know.

The problem of KV cache compression is, for practical purposes, solved. 6x memory reduction. 8x faster attention on H100. 0.997 recall, identical to full precision. No retraining required.

I implemented this from scratch in Python and verified every bound. The math holds.

Papers: QJL (AAAI 2025) → PolarQuant (AISTATS 2026) → TurboQuant (ICLR 2026, arXiv 2504.19874)

#TurboQuant #LLM #MLEngineering #AIInfrastructure #MachineLearning #AI
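This is not TurboQuant itself, and the real rotation/codebook machinery is more involved. The toy sketch below only demonstrates the QJL-style fact the post leans on: a 1-bit (sign) sketch of x can still yield an unbiased estimate of ⟨y, x⟩, because for Gaussian g, E[sign(⟨g, x⟩)·⟨g, y⟩] = √(2/π)·⟨x, y⟩/‖x‖, so rescaling by ‖x‖·√(π/2) removes the shrinkage bias described above. The function name is mine:

```python
import math
import random

def sign_dot_estimate(x, y, m=60000, seed=0):
    """Estimate <x, y> from m one-bit sketches of x (QJL-style idea).

    x is stored as a single sign bit per Gaussian projection; the
    analytic rescaling ||x|| * sqrt(pi/2) makes the estimator unbiased.
    """
    rng = random.Random(seed)
    d = len(x)
    norm_x = math.sqrt(sum(v * v for v in x))
    acc = 0.0
    for _ in range(m):
        g = [rng.gauss(0.0, 1.0) for _ in range(d)]  # random direction
        u = sum(gi * xi for gi, xi in zip(g, x))     # projection of x
        v = sum(gi * yi for gi, yi in zip(g, y))     # projection of y
        acc += math.copysign(1.0, u) * v             # only sign(u) is stored
    return (acc / m) * norm_x * math.sqrt(math.pi / 2.0)
```

Without the √(π/2) factor the estimate lands at √(2/π) ≈ 63.7% of the true value, which is exactly the 1-bit shrinkage figure quoted in the post.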
🚀 Day 11 of #DSA 🔹 Problem Solved: Binary Search 🔹 Approach: Applied binary search on a sorted array to efficiently find the target element 📌 Key Learning: Learned how dividing the search space in half reduces time complexity from O(n) to O(log n), making the solution much faster for large inputs. Stepping into optimized problem-solving 🚀 #DataStructures #Algorithms #BinarySearch #CodingJourney #ProblemSolving
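To see the O(n) → O(log n) claim concretely, a quick probe-counting sketch (searching the sorted range 0..n-1; the helper name is mine):

```python
def binary_search_steps(n, target):
    """Count how many probes binary search makes on [0, 1, ..., n-1]."""
    left, right, steps = 0, n - 1, 0
    while left <= right:
        steps += 1
        mid = (left + right) // 2
        if mid == target:
            return steps
        elif target < mid:
            right = mid - 1
        else:
            left = mid + 1
    return steps
```

Searching for 0 (near the worst case) takes about 10 probes for n = 1,000 and about 20 for n = 1,000,000: a thousandfold larger input costs only ~10 extra probes.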
GPT-5.5 is out. A few notable points from the launch: • 82.7% on Terminal-Bench 2.0 • 84.9% on GDPval • 78.7% on OSWorld-Verified • Gains in coding, knowledge work, and scientific research • GPT-5.4 latency, but higher capability • Fewer tokens used on comparable tasks 👉 https://lnkd.in/dctXt7WG
Sorting is a fundamental operation in computer science that significantly influences the efficiency of various algorithms and applications. Among the myriad sorting techniques, Bubble Sort and Quick Sort are two of the most commonly studied algorithms. Bubble Sort, while easy to understand and implement, becomes inefficient with larger datasets due to its quadratic time complexity. In contrast, Quick Sort employs a more sophisticated divide-and-conquer strategy that allows it to sort elements efficiently, with an average-case time complexity of O(n log n). This article provides a detailed exploration of both algorithms, examining their mechanics, time complexities, and offering a practical comparison through implementation. #SortingAlgorithms #BubbleSort #QuickSort #TimeComplexity #DataStructures #PythonProgramming
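A compact Python sketch of both algorithms for side-by-side comparison (the quick sort shown is the simple list-partition variant, not the in-place Lomuto/Hoare form):

```python
def bubble_sort(a):
    """O(n^2): repeatedly swap adjacent out-of-order elements."""
    a = a[:]                          # work on a copy
    for i in range(len(a) - 1, 0, -1):
        swapped = False
        for j in range(i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:               # a clean pass means already sorted
            break
    return a

def quick_sort(a):
    """O(n log n) average: divide and conquer around a pivot."""
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    return (quick_sort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quick_sort([x for x in a if x > pivot]))
```

The early-exit flag gives bubble sort a best case of O(n) on already-sorted input, but its average and worst cases remain quadratic, which is where quick sort's divide-and-conquer recursion pulls ahead.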
𝐒𝐩𝐞𝐞𝐝 𝐮𝐩 𝐋𝐋𝐌 𝐈𝐧𝐟𝐞𝐫𝐞𝐧𝐜𝐞 𝐰𝐢𝐭𝐡 𝐆𝐨𝐨𝐠𝐥𝐞'𝐬 𝐓𝐮𝐫𝐛𝐨𝐐𝐮𝐚𝐧𝐭

Google recently introduced TurboQuant, a compression method that achieves a large reduction in model size with zero accuracy loss. TurboQuant can be used both for key-value (KV) cache compression and for vector search.

The "TurboQuant-GPU" library lets you run this compression algorithm. You can install it with pip:

𝐩𝐢𝐩 𝐢𝐧𝐬𝐭𝐚𝐥𝐥 𝐭𝐮𝐫𝐛𝐨𝐪𝐮𝐚𝐧𝐭-𝐠𝐩𝐮

The library is written in cuTile (CUDA 12, 13) with PyTorch fallbacks.

𝐋𝐢𝐛𝐫𝐚𝐫𝐲 𝐝𝐞𝐭𝐚𝐢𝐥𝐬 𝐢𝐧 𝐭𝐡𝐞 𝐜𝐨𝐦𝐦𝐞𝐧𝐭𝐬
Nice 👍 Khushboo. You can also approach it with two pointers (since the arrays are already sorted); that optimizes your current O(n log m) complexity to O(n + m).