🌞 Day 44 – LeetCode 75 ✅ 547. Number of Provinces

Today's problem was about finding how many connected components (provinces) exist in a graph represented as an adjacency matrix.

Approach:
I treated the matrix as a graph problem:
- Each city = a node
- isConnected[i][j] = 1 means there is a connection (edge)

So the goal becomes: count how many disconnected groups exist.

DFS Approach:
- Maintain a visited[] array to track visited cities
- Loop through each city:
  - If not visited → it's a new province
  - Run DFS to mark all cities connected to it

Complexity:
- Time: O(n²) → we scan the full adjacency matrix
- Space: O(n) → visited array + recursion stack

Key Insight:
This is a classic Connected Components in a Graph problem. Even though it looks like a matrix problem, it's really asking: how many disconnected graphs exist? Once you see that pattern, DFS/BFS becomes very natural.

Restarting consistency again, building momentum one graph problem at a time 🚀

#LeetCode75 #Day44 #LeetCode #DSA #JavaScript #Graph #DFS #ConnectedComponents #ProblemSolving #Coding #LearningInPublic #Consistency
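The DFS approach above can be sketched like this (a minimal version, not necessarily the exact code from the post):

```javascript
// Count provinces: each unvisited city starts a new connected component.
function findCircleNum(isConnected) {
  const n = isConnected.length;
  const visited = new Array(n).fill(false);

  function dfs(city) {
    visited[city] = true;
    for (let next = 0; next < n; next++) {
      // follow every edge out of this city
      if (isConnected[city][next] === 1 && !visited[next]) dfs(next);
    }
  }

  let provinces = 0;
  for (let city = 0; city < n; city++) {
    if (!visited[city]) {
      provinces++; // unvisited city ⇒ a new province
      dfs(city);   // mark everything reachable from it
    }
  }
  return provinces;
}
```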
-
🌞 Day 46 – LeetCode 75 ✅ 399. Evaluate Division

Approach:
This is a Graph + DFS (weighted edges) problem. I treated each equation as a bidirectional weighted graph:
- a / b = val → edge a → b (weight val)
- b / a = 1/val → edge b → a (weight 1/val)

Then for each query:
- If either node doesn't exist → return -1
- Otherwise run DFS from source to destination
- Multiply the weights along the path
- Use a visited set to avoid cycles

Complexity:
- Time: O(Q × (V + E))
- Space: O(V + E)

Key Insight:
- Convert the equations into a weighted graph and use DFS to compute the product along a path.
- Once the graph is built, each query becomes a traversal problem.

#LeetCode75 #Day46 #LeetCode #DSA #JavaScript #Graph #DFS #Backtracking #ProblemSolving #Consistency #TUF
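A sketch of that idea, building both directed edges per equation and multiplying weights during DFS:

```javascript
// Build a weighted adjacency list from the equations, then answer each
// query with a DFS that multiplies edge weights along the path.
function calcEquation(equations, values, queries) {
  const graph = new Map();
  const addEdge = (a, b, w) => {
    if (!graph.has(a)) graph.set(a, []);
    graph.get(a).push([b, w]);
  };
  equations.forEach(([a, b], i) => {
    addEdge(a, b, values[i]);     // a / b = values[i]
    addEdge(b, a, 1 / values[i]); // b / a = 1 / values[i]
  });

  function dfs(src, dst, product, visited) {
    if (!graph.has(src) || visited.has(src)) return -1;
    if (src === dst) return product;
    visited.add(src);
    for (const [next, w] of graph.get(src)) {
      const res = dfs(next, dst, product * w, visited);
      if (res !== -1) return res;
    }
    return -1;
  }

  return queries.map(([a, b]) =>
    graph.has(a) && graph.has(b) ? dfs(a, b, 1, new Set()) : -1
  );
}
```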
-
🚀 Day 6/100 – #100DaysOfDSA

Today's focus was on searching vs sorting and understanding their efficiency differences.

🔹 Problems Solved:
1. Binary Search
2. Bubble Sort

💡 Key Learnings:

👉 Problem 1: Binary Search
- Works only on sorted arrays
- Halves the search space each time

👉 Approach:
- Find the mid index
- Compare with the target
- Move left or right accordingly

✅ O(log n) time complexity
✅ Very efficient for large datasets

👉 Problem 2: Bubble Sort
Most people implement plain Bubble Sort, but today I learned how to optimize it with an early-break condition 🚀

👉 Approach:
If no swaps happen in a pass, the array is already sorted, so we can stop early instead of running unnecessary iterations.

✅ O(n) best case with the swap flag (worst case is still O(n²))

🔥 What I learned today: choosing the right algorithm matters more than just solving the problem.

Consistency continues 💪 Day 6 done!

#100DaysOfCode #DSA #BinarySearch #Sorting #LeetCode #ProblemSolving #CodingJourney #JavaScript #TechGrowth #SoftwareEngineer #LearningInPublic
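Both techniques from today can be sketched in a few lines (minimal versions for illustration):

```javascript
// Bubble Sort with the early-break optimization: if a full pass makes
// no swaps, the array is already sorted and we stop (O(n) best case).
function bubbleSort(arr) {
  const a = [...arr];
  for (let i = 0; i < a.length - 1; i++) {
    let swapped = false;
    for (let j = 0; j < a.length - 1 - i; j++) {
      if (a[j] > a[j + 1]) {
        [a[j], a[j + 1]] = [a[j + 1], a[j]];
        swapped = true;
      }
    }
    if (!swapped) break; // no swaps ⇒ already sorted
  }
  return a;
}

// Binary search on a sorted array: halve the search space each step.
function binarySearch(arr, target) {
  let lo = 0, hi = arr.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1; // not found
}
```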
-
🌞 Day 45 – LeetCode 75 ✅ 1466. Reorder Routes to Make All Paths Lead to City 0

Approach:
This is a DFS + graph direction-tracking problem. Build the graph with direction info:
- (u → v, 1) → original direction (needs a change)
- (v → u, 0) → correct direction

Then:
- Start DFS from node 0
- Visit all reachable nodes
- Add 1 whenever we traverse an edge in the wrong direction

Complexity:
- Time: O(n)
- Space: O(n)

Key Insight:
Instead of just storing edges, we store a direction cost and sum it during DFS.

#LeetCode75 #Day45 #LeetCode #DSA #JavaScript #Graph #DFS #ProblemSolving #Consistency #TUF
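The direction-cost trick above can be sketched like this:

```javascript
// Store each edge both ways: cost 1 if traversing it follows the
// original direction (points away from city 0, needs reversal), 0 otherwise.
function minReorder(n, connections) {
  const adj = Array.from({ length: n }, () => []);
  for (const [u, v] of connections) {
    adj[u].push([v, 1]); // original direction u → v
    adj[v].push([u, 0]); // reverse direction, already correct
  }

  const visited = new Array(n).fill(false);
  let changes = 0;

  function dfs(node) {
    visited[node] = true;
    for (const [next, cost] of adj[node]) {
      if (!visited[next]) {
        changes += cost; // sum the direction costs during DFS
        dfs(next);
      }
    }
  }

  dfs(0);
  return changes;
}
```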
-
Sometimes the biggest bottleneck in your code isn't the algorithm itself, but how the language handles memory. 💻

Just crushed a complex query problem with a 100% runtime. To get the execution time this low (389ms), I had to step away from standard JavaScript practices and optimize for the machine: instead of creating new arrays and triggering heavy GC pauses, I allocated a BigUint64Array exactly once and overwrote it. Combining this with Square Root Decomposition for the modular arithmetic, the processing time plummeted.

A great reminder that when dealing with large datasets, thinking about memory allocation and data types is just as important as the logic itself.

#CodingJourney #JavaScript #DataEngineering #Optimization #LeetCode
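The post doesn't show the original problem, so here is only a hedged sketch of the allocation pattern it describes: one typed array allocated up front and overwritten per query, rather than fresh arrays in a loop. `SIZE`, `MOD`, and `processQuery` are illustrative names, not from the post.

```javascript
// Preallocate a single BigUint64Array and reuse it across queries.
// Overwriting in place avoids per-query allocations and GC pauses.
const SIZE = 100_000;
const MOD = 1_000_000_007n;
const buffer = new BigUint64Array(SIZE); // allocated exactly once

function processQuery(offset) {
  // Overwrite the same buffer instead of building a new array.
  for (let i = 0; i < SIZE; i++) {
    buffer[i] = BigInt(i + offset) % MOD;
  }
  return buffer[0]; // BigUint64Array elements read back as BigInt
}
```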
-
🚀 Day 21 of #DevDSA

🔹 Problem: Merge Two Sorted Arrays (Using Extra Space)

Today I worked on merging two sorted arrays into a single sorted array using an additional array.

🧠 Approach:
- Used two pointers (l for nums1 and r for nums2)
- Compared elements one by one
- Pushed the smaller element into a new mergedArray
- Handled equal elements by pushing both
- Continued until one array was exhausted, then appended the rest

💡 Key Idea: since both arrays are already sorted, we can merge them in linear time without sorting again.

⏱ Time Complexity: O(m + n)
📦 Space Complexity: O(m + n)

📌 What I learned:
- The two-pointer technique is very powerful for sorted data
- Writing clean conditions avoids unnecessary complexity
- Always think about edge cases (like remaining elements after the loop)

💻 Next Step: solve the same problem without extra space (in-place optimization 🔥)

#DSA #100DaysOfCode #CodingJourney #JavaScript #InterviewPrep
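A minimal sketch of the two-pointer merge described above:

```javascript
// Merge two sorted arrays into a new sorted array in O(m + n).
function mergeSorted(nums1, nums2) {
  const merged = [];
  let l = 0, r = 0;
  while (l < nums1.length && r < nums2.length) {
    // push the smaller of the two current elements
    if (nums1[l] <= nums2[r]) merged.push(nums1[l++]);
    else merged.push(nums2[r++]);
  }
  // edge case: append whatever remains after one array is exhausted
  while (l < nums1.length) merged.push(nums1[l++]);
  while (r < nums2.length) merged.push(nums2[r++]);
  return merged;
}
```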
-
Solved one of the most interesting stack-based problems recently: Maximum Subarray Min-Product.

At first glance, it looks like a variation of the usual subarray problems, but the real challenge lies in combining multiple concepts efficiently:
- A monotonic stack to determine the next-smaller-element boundaries
- Prefix sums to compute subarray sums in constant time
- BigInt handling in JavaScript to safely manage large intermediate values

The key insight was realizing that every element can be treated as the minimum of some subarray, expanding left and right until a smaller element blocks it. A monotonic increasing stack computes those boundaries in linear time.

What made this problem particularly valuable:
- Reinforced how stack patterns extend beyond classic histogram problems
- Highlighted the importance of boundary management in array problems
- Demonstrated practical use of BigInt in algorithmic challenges

Time Complexity: O(n)
Space Complexity: O(n)

Problems like this are a great reminder that mastering patterns, not just problems, is what builds real problem-solving ability. If you're working on Data Structures and Algorithms, this one is worth understanding deeply.

#DataStructures #Algorithms #DSA #Coding #Programming #SoftwareEngineering #JavaScript #ProblemSolving #CompetitiveProgramming #LeetCode #TechLearning #DeveloperJourney #CodeNewbie #LearnToCode #InterviewPrep #CodingInterview #ComputerScience #100DaysOfCode
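A sketch combining the three ingredients above (monotonic stack boundaries, BigInt prefix sums, product in BigInt); the final answer is taken modulo 10⁹ + 7 as the problem requires:

```javascript
// For each element, the monotonic stack pops it once its right-smaller
// boundary appears; the prefix sums give the subarray sum in O(1).
function maxSumMinProduct(nums) {
  const n = nums.length;
  const prefix = new Array(n + 1).fill(0n);
  for (let i = 0; i < n; i++) prefix[i + 1] = prefix[i] + BigInt(nums[i]);

  const stack = []; // indices with strictly increasing values
  let best = 0n;
  for (let i = 0; i <= n; i++) {
    const cur = i < n ? nums[i] : 0; // 0 sentinel flushes the stack
    while (stack.length && nums[stack[stack.length - 1]] >= cur) {
      const mid = stack.pop();
      const left = stack.length ? stack[stack.length - 1] + 1 : 0;
      // nums[mid] is the minimum of nums[left .. i-1]
      const product = BigInt(nums[mid]) * (prefix[i] - prefix[left]);
      if (product > best) best = product;
    }
    stack.push(i);
  }
  return Number(best % 1000000007n);
}
```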
-
LeetCode Day 13: Problem 42 (Trapping Rain Water)

Just solved another LeetCode problem: "Trapping Rain Water". Sounds like a physics problem, right? But here's what I actually learned.

The core insight: for any position, the water it can hold is limited by the shorter of the two tallest walls surrounding it, i.e. min(maxLeft, maxRight) - height[i]. Once that clicked, the solution wrote itself.

My first approach used two extra arrays: one for the tallest bar to the left of each position, one for the right. Three passes total, O(n) time, O(n) space. Clean and readable.

Then I learned the O(1)-space version using two pointers. Instead of pre-building arrays, move from both ends toward the middle. Whichever side is shorter determines the water level, so process that side and move inward. Same result, no extra arrays.

The pattern I keep seeing: when you need both left and right context, either do two passes and store results, or use two pointers and process on the fly. Same idea, different tradeoff between readability and space.

Thirteen problems in, the two-pass and two-pointer patterns keep showing up in different forms. The more problems you solve, the more you recognise the shape of the solution before you write a single line.

The real lesson? Understand why the brute force works first. The optimal solution is just the brute force with the extra space removed.

#DSA #LeetCode #JavaScript #CodingJourney #Programming
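The O(1)-space two-pointer version described above can be sketched like this:

```javascript
// Two pointers from both ends. The shorter side is guaranteed to be
// bounded by its own running max, so we can settle it and move inward.
function trap(height) {
  let left = 0, right = height.length - 1;
  let maxLeft = 0, maxRight = 0;
  let water = 0;
  while (left < right) {
    if (height[left] <= height[right]) {
      maxLeft = Math.max(maxLeft, height[left]);
      water += maxLeft - height[left]; // min(maxLeft, maxRight) = maxLeft here
      left++;
    } else {
      maxRight = Math.max(maxRight, height[right]);
      water += maxRight - height[right];
      right--;
    }
  }
  return water;
}
```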
-
🚀 Day 23 of #60DaysOfDSA

Today I implemented Quick Sort, a powerful Divide & Conquer sorting algorithm.

🔍 What I learned:
- Quick Sort works by selecting a pivot element
- It partitions the array into elements smaller than the pivot and elements greater than the pivot
- Then it recursively sorts both parts

⚡ Key Insight: "Partitioning is the heart of Quick Sort." Even a small mistake in index handling or swapping can completely break the logic.

💻 Key Concepts Covered:
✔️ Pivot selection (last element)
✔️ Partition logic (Lomuto method)
✔️ Recursion
✔️ In-place sorting

🧠 Takeaway: Quick Sort is not just about writing code; it's about understanding how elements move and how recursion divides the problem.

Consistency > Perfection 🚀 One step closer to becoming better every day!

#DSA #QuickSort #CodingJourney #JavaScript #ProblemSolving
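A minimal sketch of Quick Sort with the Lomuto partition (last element as pivot), as described above:

```javascript
// In-place Quick Sort. The partition step is where the careful index
// handling matters: i tracks the boundary of the "smaller than pivot" region.
function quickSort(arr, lo = 0, hi = arr.length - 1) {
  if (lo < hi) {
    const p = partition(arr, lo, hi);
    quickSort(arr, lo, p - 1); // left part: elements < pivot
    quickSort(arr, p + 1, hi); // right part: elements >= pivot
  }
  return arr;
}

function partition(arr, lo, hi) {
  const pivot = arr[hi]; // Lomuto: last element is the pivot
  let i = lo - 1;
  for (let j = lo; j < hi; j++) {
    if (arr[j] < pivot) {
      i++;
      [arr[i], arr[j]] = [arr[j], arr[i]]; // grow the smaller region
    }
  }
  [arr[i + 1], arr[hi]] = [arr[hi], arr[i + 1]]; // pivot to its final spot
  return i + 1;
}
```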
-
With Opus 4.7, Claude Code sessions became roughly 30% more expensive. Anthropic didn't change the price, but they changed the tokenizer.

The tokenizer is the model's ingestion layer that converts text into tokens before processing, and tokens are what you're billed for. The new tokenizer produces more tokens for the same input. The migration guide says "roughly 1.0 to 1.35x more tokens for the same input", but third-party measurements put it at 1.47x for technical docs and 1.325x as a weighted average across real Claude Code sessions. Character-to-token efficiency dropped from 4.33 chars/token to 3.60. Code gets hit hardest: TypeScript at 1.36x, Python at 1.29x, English prose at 1.20x.

The billed rates are the same, but consumption changed, and on top of that Opus 4.7 tends to generate longer outputs on agentic tasks. The stated upside is better instruction following: +5 points on IFEval strict compliance (85% to 90%). Small effect, small benchmark.
-
Day 112 of #200DaysOfCode

Leveling up. Consistency continues, one concept at a time.

Today I solved the "Memoize" problem on LeetCode using closures + caching in JavaScript.

Key Idea: avoid recomputing the same function call by storing previously calculated results.

Approach:
• Use a Map as cache storage
• Convert the arguments into a unique key using JSON.stringify()
• If the result already exists, return the cached value
• Otherwise compute, store, and return the result

Concepts Used:
• Closures
• Memoization
• Map
• Higher-order functions

Time Complexity:
• First call → depends on the function
• Repeated calls → O(1) average lookup

Space Complexity: O(n)

Takeaway: memoization is a powerful optimization technique that trades space for speed and is heavily used in Dynamic Programming and performance optimization.

Learning not just to solve problems, but to make solutions smarter. Let's keep building!

#Day112 #200DaysOfCode #LeetCode #JavaScript #Memoization #Closures #CodingJourney #ProblemSolving #KeepGoing
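The approach above can be sketched in a few lines; the cache lives in the closure returned by `memoize`:

```javascript
// Closure + Map cache: arguments are serialized with JSON.stringify
// to build the cache key, so repeat calls skip the underlying function.
function memoize(fn) {
  const cache = new Map();
  return function (...args) {
    const key = JSON.stringify(args);
    if (cache.has(key)) return cache.get(key); // O(1) average lookup
    const result = fn(...args);
    cache.set(key, result); // store for future calls
    return result;
  };
}

// Usage: the wrapped function runs once per distinct argument list.
let calls = 0;
const slowAdd = (a, b) => { calls++; return a + b; };
const fastAdd = memoize(slowAdd);
```

One caveat worth knowing: JSON.stringify keys treat arguments by value, so `(1, 2)` and `("1", "2")` can collide only if they serialize identically, and non-serializable arguments (functions, undefined) need a different keying strategy.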