Day 91 of my #100DaysOfCode journey 🚀

Today I solved Cycle Detection in an Undirected Graph using Breadth-First Search (BFS).

Problem intuition: In an undirected graph, a cycle exists if during traversal we reach an already visited node that is not the parent of the current node.

Approach:
• Traverse all graph components
• Use a queue for BFS
• Store both the current node and its parent in the queue
• If a visited neighbor is found that is not the parent → cycle exists

Key insight: In undirected graphs, revisiting the parent is normal, but revisiting any other already visited node indicates a cycle.

Concepts reinforced:
• BFS on graphs
• Parent tracking
• Connected components
• Cycle detection logic

Time Complexity: O(V + E), where V = vertices, E = edges

This is one of the most important graph interview patterns, because the same idea extends to:
• DFS cycle detection
• Detecting cycles in connected and disconnected graphs
• Advanced graph traversal problems

Slowly building stronger graph intuition, one pattern at a time. 🌱

#100DaysOfCode #DSA #Graphs #BFS #CycleDetection #Algorithms #Python #CodingJourney #ProblemSolving #SoftwareEngineering
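The steps above can be sketched in a few lines. A minimal sketch, assuming the graph is given as an adjacency list `adj` over nodes `0..n-1` (the function name and representation are my own, not from the post):

```python
from collections import deque

def has_cycle_bfs(n, adj):
    """Detect a cycle in an undirected graph with n nodes (0..n-1)."""
    visited = [False] * n
    for start in range(n):                 # cover every component
        if visited[start]:
            continue
        visited[start] = True
        queue = deque([(start, -1)])       # store (node, parent) pairs
        while queue:
            node, parent = queue.popleft()
            for nb in adj[node]:
                if not visited[nb]:
                    visited[nb] = True
                    queue.append((nb, node))
                elif nb != parent:         # visited and not the parent -> cycle
                    return True
    return False
```

For example, a triangle 0–1–2–0 returns True, while a simple path 0–1–2 returns False.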
Cycle Detection in Undirected Graphs with BFS
More Relevant Posts
Day 22/100 – DSA Journey

Problem: Find Mode in Binary Search Tree (BST)

Today's problem focused on understanding how Binary Search Trees behave and how we can efficiently extract useful insights from them.

Understanding the BST
A Binary Search Tree follows a structured property:
• Left subtree → values ≤ root
• Right subtree → values ≥ root
Because of this, an Inorder Traversal (Left → Root → Right) visits the values in sorted order.

Why Inorder Traversal?
Since duplicates appear consecutively in a sorted sequence, inorder traversal allows us to:
• Track frequency without using extra space
• Compare the current value with the previous value
• Efficiently determine the most frequent element (mode)

Approach Used
• Traverse the BST using inorder traversal
• Maintain the previous value, the current count, and the maximum frequency
• Update the result list whenever a new maximum frequency is found

Key Learning
This problem highlights how leveraging tree properties can help optimize solutions. Instead of using extra space (like hashmaps), we used the traversal order itself to achieve an efficient solution.

Conclusion
Understanding the underlying structure of data (like BST properties) is often more powerful than brute force. Smart traversal choices can significantly reduce space complexity and improve performance.

#Day22 #100DaysOfCode #DSA #BinarySearchTree #Python #CodingJourney #LeetCode #ProblemSolving
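A minimal sketch of the inorder approach described above (the `TreeNode` class and function name are illustrative, not from the post):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def find_mode(root):
    """Return all mode values of a BST via inorder traversal.

    Tracks only previous value / current count / max count,
    so no hashmap is needed: duplicates are adjacent in inorder order.
    """
    modes, prev = [], None
    count = max_count = 0

    def inorder(node):
        nonlocal prev, count, max_count
        if not node:
            return
        inorder(node.left)
        count = count + 1 if node.val == prev else 1
        prev = node.val
        if count > max_count:            # new best frequency: reset result
            max_count = count
            modes.clear()
            modes.append(node.val)
        elif count == max_count:         # tie: another mode
            modes.append(node.val)
        inorder(node.right)

    inorder(root)
    return modes
```

For the classic example tree [1, null, 2, 2] this returns [2].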
Day 93 of my #100DaysOfCode journey 🚀

Today I implemented Cycle Detection in an Undirected Graph using DFS. Earlier I solved this using BFS, and today I explored the DFS approach, which is equally important for interviews.

Problem intuition: A cycle exists if during traversal we encounter a visited node that is not the parent.

Approach:
• Traverse all components of the graph
• Use DFS (recursive traversal)
• Keep track of the parent node
• If a visited neighbor is found and it's not the parent → cycle detected

Key insight: In undirected graphs:
• Visiting the parent again → normal
• Visiting any other visited node → cycle

Concepts reinforced:
• DFS traversal on graphs
• Recursion with parent tracking
• Handling disconnected components

Time Complexity: O(V + E)

Now I've covered cycle detection using both:
• BFS (queue + parent tracking)
• DFS (recursion + parent tracking)

Understanding both approaches builds strong graph intuition and flexibility in problem-solving. Step by step, mastering graph patterns. 🌱

#100DaysOfCode #DSA #Graphs #DFS #CycleDetection #Algorithms #Python #CodingJourney #ProblemSolving #SoftwareEngineering
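The recursive variant described above, as a short sketch (same assumed adjacency-list representation over nodes `0..n-1`; names are mine):

```python
def has_cycle_dfs(n, adj):
    """DFS cycle detection in an undirected graph, parent-tracking variant."""
    visited = [False] * n

    def dfs(node, parent):
        visited[node] = True
        for nb in adj[node]:
            if not visited[nb]:
                if dfs(nb, node):        # recurse, passing current node as parent
                    return True
            elif nb != parent:           # visited, not the parent -> cycle
                return True
        return False

    # Launch a DFS from every still-unvisited node to cover all components.
    return any(dfs(v, -1) for v in range(n) if not visited[v])
```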
🚀 Cracked "Top K Frequent Elements" with an Optimal Approach!

Today I solved one of the most important interview problems on LeetCode using an efficient Bucket Sort approach (O(n)), and got it accepted ✅

🔍 Problem Insight: Instead of sorting (which takes O(n log n)), I used frequency as an index to directly access the elements that occur most often.

💡 Key Learnings:
• Reducing time complexity from O(n log n) to O(n)
• Using hashmaps (Counter) for frequency counting
• Applying bucket sort for optimization
• Writing clean, interview-ready code

⚡ Complexity Analysis:
• Time Complexity: O(n) (frequency count + bucket fill + traversal)
• Space Complexity: O(n) (hashmap + bucket storage)

⚡ Performance: Runtime: 10 ms

🧠 Approach Summary:
• Count the frequency of each element
• Store elements in buckets indexed by frequency
• Traverse from the highest frequency down to collect the top K elements

📌 Consistency > Perfection. Every problem solved is one step closer to mastering DSA.

#DataStructures #Algorithms #Python #LeetCode #CodingJourney #ProblemSolving #TechGrowth #Consistency #Learning #DSA
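A sketch of the bucket-sort approach summarized above (not necessarily the exact accepted submission). The key trick: an element can appear at most `len(nums)` times, so frequencies themselves can serve as bucket indices:

```python
from collections import Counter

def top_k_frequent(nums, k):
    freq = Counter(nums)                      # O(n) frequency count
    # Bucket i holds all values that occur exactly i times.
    buckets = [[] for _ in range(len(nums) + 1)]
    for val, f in freq.items():
        buckets[f].append(val)
    result = []
    for f in range(len(buckets) - 1, 0, -1):  # highest frequency first
        for val in buckets[f]:
            result.append(val)
            if len(result) == k:
                return result
    return result
```

Every step is a single pass, which is how the overall O(n) bound falls out.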
Day 14/100 – Data Structures & Algorithms

Today, I worked on the problem "First Unique Character in a String."

Overview
The task is to identify the first non-repeating character in a string and return its index. If no such character exists, the result is -1.

Approach
I used a two-pass strategy:
• First pass to store character frequencies in a hashmap
• Second pass to find the first character with a frequency of one

Complexity
• Time Complexity: O(n)
• Space Complexity: O(1), since the hashmap holds at most one entry per letter of a fixed alphabet

Key Takeaway
This problem reinforces how effective hashmaps are for frequency-based problems, and how a simple two-pass approach can lead to optimal solutions.

Staying consistent and building problem-solving intuition step by step.

#Day14 #100DaysOfCode #DSA #Python #LeetCode #ProblemSolving #SoftwareEngineering
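The two-pass strategy above fits in a few lines (a sketch; the function name is mine):

```python
from collections import Counter

def first_uniq_char(s):
    freq = Counter(s)             # pass 1: count each character's frequency
    for i, ch in enumerate(s):    # pass 2: first character with count 1
        if freq[ch] == 1:
            return i
    return -1
```

For example, `first_uniq_char("loveleetcode")` returns 2 (the 'v').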
Vector databases are great, but they aren't always the right tool for complex document intelligence. 🧠📉

If you are tired of context fragmentation and untraceable LLM hallucinations, it is time to look at Vectorless RAG with Page Index. By swapping out mathematical embeddings for a reasoning-based, hierarchical document tree, you can achieve upwards of 98% accuracy on complex Q&A tasks with perfect citation traceability.

I wrote a complete guide on how this architecture works, including a full Python code implementation. Read it here: https://lnkd.in/gRuXiSxK

#ArtificialIntelligence #RAG #PythonDeveloper #MachineLearning #AIEngineering
Data collection series · Post 07
Imputation strategies: beyond filling with the mean

Filling missing values with the mean is fast. It's also quietly wrong in most cases. Here are 4 better strategies, and exactly when to use each.

Mean imputation is the default. Everyone learns it first. It's one line of code. It ships fast. But it has a serious flaw: it collapses variance. Replace 500 missing values with the mean, and your distribution gets an artificial spike right in the middle. Your correlations weaken. Your model learns a distorted world.

There are better options. Here's the practical guide.

#Python #DataScience #DataQuality #DataCleaning #Analytics #DataAnalyst #DataAnalytics #DataEngineering #Imputationstrategies
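The variance-collapse effect is easy to demonstrate on synthetic data. A quick sketch (my own toy setup, not from the post): drop 40% of the values, fill them with the mean, and compare standard deviations:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=50, scale=10, size=1000)

# Knock out 40% of the values at random.
mask = rng.random(data.size) < 0.4
observed = data.copy()
observed[mask] = np.nan

# Mean imputation: replace every NaN with the mean of the observed values.
imputed = np.where(np.isnan(observed), np.nanmean(observed), observed)

print(f"true std:    {data.std():.2f}")
print(f"imputed std: {imputed.std():.2f}")  # noticeably smaller
```

The imputed column's spread shrinks because 40% of its points now sit exactly at the center, which is the artificial spike the post describes.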
Optimization > Brute Force.

While auditing a geographic news representation gap in the GDELT 2.0 database, I hit a common wall: memory overhead. Processing high-volume logs isn't just about writing code; it's about writing code that doesn't crash your environment.

For this project, I implemented Pandas chunking and vectorized NumPy operations to handle data ingestion more efficiently.

The results:
• Reduced peak RAM overhead by 60%
• Quantified a 0.09x representation factor for India in global news

Research isn't just about the 'What' (the bias); it's about the 'How' (the architecture).

#DataScience #Python #GDELT #MachineLearning #Research
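A minimal sketch of the chunked-ingestion pattern mentioned above. The file layout, `country` column, and function are hypothetical stand-ins, not the actual GDELT pipeline: the point is that `chunksize` streams the file so only one slice is ever in RAM:

```python
import pandas as pd

def country_counts(csv_path, chunksize=100_000):
    """Aggregate per-country row counts from a large CSV, chunk by chunk,
    without loading the whole table into memory."""
    counts = pd.Series(dtype="int64")
    for chunk in pd.read_csv(csv_path, usecols=["country"], chunksize=chunksize):
        # Vectorized count per chunk, merged into the running total.
        counts = counts.add(chunk["country"].value_counts(), fill_value=0)
    return counts.astype(int)
```

Restricting `usecols` to the columns actually needed is often as big a win as chunking itself.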
Weekly Challenge 9: TSP With Farthest Insertion.

How do you find the shortest route to visit multiple locations without wasting fuel or time? This is known as the Traveling Salesperson Problem (TSP), one of the most famous challenges in computer science and Operations Research. Since finding the "perfect" route by checking every combination takes too long, we use heuristics to find highly optimized routes in milliseconds.

For Week 9 of my Python challenge, I built a spatial heuristic from scratch:
1️⃣ Generated random nodes (cities) on a 2D plane
2️⃣ Calculated their Euclidean distance from the origin
3️⃣ Programmed an Insertion Sort algorithm to order the nodes by that distance
4️⃣ Compared the random route vs. the optimized route

📉 The Result: As you can see in the graph below, just by applying this sorting logic, the route distance is drastically reduced (saving over 30% in travel distance in most random scenarios!). Data visualization makes optimization beautiful.

Full source code on my GitHub: https://lnkd.in/epZBxUnQ

#Python #Optimization #OperationsResearch #DataScience #Matplotlib #Algorithms #CodingChallenge
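The four steps above can be sketched like this (a simplified version under my own assumptions — 30 random cities, no plotting — not the full GitHub implementation):

```python
import math
import random

def route_length(points):
    """Total length of the route visiting points in list order."""
    return sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))

def insertion_sort_by_origin(points):
    """Order nodes by their Euclidean distance from the origin,
    using a hand-rolled insertion sort."""
    pts = list(points)
    for i in range(1, len(pts)):
        key = pts[i]
        j = i - 1
        while j >= 0 and math.dist((0, 0), pts[j]) > math.dist((0, 0), key):
            pts[j + 1] = pts[j]
            j -= 1
        pts[j + 1] = key
    return pts

random.seed(42)
cities = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]
print(f"random route: {route_length(cities):.1f}")
print(f"sorted route: {route_length(insertion_sort_by_origin(cities)):.1f}")
```

On most random instances the radially sorted tour is much shorter than the random one, though as a heuristic it carries no optimality guarantee.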
🚀 𝗗𝗮𝘆 𝟯 : 𝗧𝗼𝗱𝗮𝘆 𝗜 𝗲𝘅𝗽𝗹𝗼𝗿𝗲𝗱 𝘀𝗼𝗺𝗲 𝗯𝗮𝘀𝗶𝗰 𝗯𝘂𝘁 𝘃𝗲𝗿𝘆 𝗶𝗺𝗽𝗼𝗿𝘁𝗮𝗻𝘁 𝗳𝘂𝗻𝗰𝘁𝗶𝗼𝗻𝘀 𝗶𝗻 𝗣𝗮𝗻𝗱𝗮𝘀 𝗳𝗼𝗿 𝗱𝗮𝘁𝗮 𝘂𝗻𝗱𝗲𝗿𝘀𝘁𝗮𝗻𝗱𝗶𝗻𝗴 📊

🔍 1. head() → shows the first 5 rows of the dataset: df.head()
🔍 2. tail() → shows the last 5 rows: df.tail()
📏 3. shape → returns the number of rows and columns: df.shape
ℹ️ 4. info() → provides a summary of the dataset (data types, null values): df.info()
📊 5. describe() → gives a statistical summary (mean, min, max, etc.): df.describe()
📌 6. columns → shows all column names: df.columns

💡 Key Learning: Understanding your dataset is the first step before doing any analysis.

#Day3 #Pandas #Python #DataAnalytics #LearningJourney #DataExploration
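All six calls above on a tiny toy DataFrame (the data is made up for illustration). One detail worth noting: `shape` and `columns` are attributes, not methods, so they take no parentheses:

```python
import pandas as pd

df = pd.DataFrame({
    "name": ["Asha", "Ben", "Chen"],
    "age": [25, 31, 29],
    "city": ["Pune", "Delhi", "Mumbai"],
})

print(df.head())        # first rows (up to 5)
print(df.tail())        # last rows (up to 5)
print(df.shape)         # (3, 3): rows, columns — attribute, no ()
df.info()               # dtypes and non-null counts
print(df.describe())    # stats for the numeric column(s)
print(list(df.columns)) # column names — attribute, no ()
```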