📚🌃 Continuing my dive into data structures and algorithms. 🙂
🕸️ Tonight’s Focus: Chapter 20 – Graph Traversal (DFS & BFS)
Unlike trees, graphs aren’t always hierarchical. They can be messy, interconnected, and unpredictable. So how do we explore them? With two trusty tools: Depth-First 🐋 and Breadth-First 🧭 Search.
✅ FYI
-Graphs can represent anything from social networks to maps to recommendation systems.
-We explore them to uncover all reachable nodes and understand their connections.
-At the start, we only see one node; the rest are hidden until discovered.
⚙️ Traversal Basics
Each node goes through two phases:
-Discovered Collection – Nodes are added here as soon as they’re found.
-Explored Collection – Once all discovered neighbors of a node have been checked, the node moves here.
🐋 Depth-First Search (DFS)
-Uses a stack (Last In, First Out).
-Goes deep into one path before backtracking.
-Newly discovered nodes are added to the beginning of the list.
-Example path: A → B → C → E → F → G…
🧭 Breadth-First Search (BFS)
-Uses a queue (First In, First Out).
-Explores level by level, checking all immediate neighbors before moving outward.
-Newly discovered nodes are added to the end of the list.
-Example path: A → B → C → D → E…
⚙️ Discovery Order
-DFS: Order depends on how neighbors are listed and pushed.
-BFS: Order follows the queue. First discovered, first explored.
⚡ Performance
-Time: O(V + E), since every node and edge is visited once.
-Space: O(V) in the worst case.
-Same across best, average, and worst cases; depends on graph size and structure.
📚 These chapters are getting deeper. Might split them up going forward.
If you’re learning too (or just love emoji-powered breakdowns), follow along for more chapters in this series 🚀
#JavaScript #Algorithms #Coding #DevNotes
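The two traversals above can be sketched in JavaScript (the series' language). The adjacency list below is a made-up example graph, not one from the chapter, so the exact visit orders depend on how each node's neighbors are listed:

```javascript
// Hypothetical adjacency list; node names only echo the post's examples.
const graph = {
  A: ["B", "C", "D"],
  B: ["A", "E"],
  C: ["A", "F"],
  D: ["A"],
  E: ["B"],
  F: ["C", "G"],
  G: ["F"],
};

// BFS: the discovered collection acts as a queue (FIFO).
// Newly discovered nodes go to the END of the list.
function bfs(graph, start) {
  const discovered = [start];
  const explored = new Set();
  const order = [];
  while (discovered.length > 0) {
    const node = discovered.shift();       // take from the front
    if (explored.has(node)) continue;      // may have been enqueued twice
    explored.add(node);
    order.push(node);
    for (const neighbor of graph[node]) {
      if (!explored.has(neighbor)) discovered.push(neighbor); // end of list
    }
  }
  return order;
}

// DFS: the discovered collection acts as a stack (LIFO).
// Pushing onto the top of the stack is the post's "add to the beginning".
function dfs(graph, start) {
  const discovered = [start];
  const explored = new Set();
  const order = [];
  while (discovered.length > 0) {
    const node = discovered.pop();         // take from the top
    if (explored.has(node)) continue;
    explored.add(node);
    order.push(node);
    for (const neighbor of graph[node]) {
      if (!explored.has(neighbor)) discovered.push(neighbor);
    }
  }
  return order;
}

bfs(graph, "A"); // → ["A", "B", "C", "D", "E", "F", "G"]
```

With this particular adjacency list, BFS happens to reproduce the level-by-level order from the post; the DFS order is one valid deep-first path, and reordering the neighbor lists would change it.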
"Exploring Graphs with DFS and BFS"
More Relevant Posts
-
📚🌃 Continuing my dive into data structures and algorithms. 🙂
🔍 Tonight’s focus: Chapter 24: Selection Sort
⚙️ How Selection Sort Works
-Assume the first item is the smallest.
-Scan the entire list to find the actual smallest item. 🔎
-Swap it with the first item and add it to the sorted region.
-Move to the next unsorted item and repeat:
----Assume it's the smallest.
----Compare it with the rest of the unsorted region.
----Swap the new smallest into place in the sorted region.
-Continue until all items are sorted!
✅ Key Notes
-Splits the list into a sorted region (usually at the front) and an unsorted region.
-Swaps each new smallest item into place, expanding the sorted region one item at a time.
-Simple to understand and implement, but not very efficient.
⚡ Performance
-Time: O(n²) in every case; the full scan of the unsorted region happens even if the list is already sorted.
-Space: O(1). The items are swapped in place, so memory usage is minimal.
-Not ideal for large datasets.
-Slightly less practical than Insertion Sort, but still a good learning tool.
💻 Implementation Tips
-Outer loop (i): tracks the boundary between sorted and unsorted.
-Inner loop (j): finds the smallest item in the unsorted region.
-Swap the smallest item with the first unsorted item.
If you're learning too, or just love a good emoji-powered breakdown, follow along for more chapters in this series! 🚀
#JavaScript #Algorithms #Coding #DevNotes
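The steps above map directly onto the two loops from the implementation tips. A minimal JavaScript sketch; the function name and the sample input are my own, not from the chapter:

```javascript
// Selection sort: grow the sorted region from the front,
// one "actual smallest" swap at a time.
function selectionSort(arr) {
  const a = [...arr];                        // copy so the input isn't mutated
  for (let i = 0; i < a.length - 1; i++) {   // i: boundary between sorted and unsorted
    let smallest = i;                        // assume the first unsorted item is smallest
    for (let j = i + 1; j < a.length; j++) { // scan the rest of the unsorted region
      if (a[j] < a[smallest]) smallest = j;
    }
    if (smallest !== i) {
      [a[i], a[smallest]] = [a[smallest], a[i]]; // swap smallest into the sorted region
    }
  }
  return a;
}

selectionSort([29, 10, 14, 37, 13]); // → [10, 13, 14, 29, 37]
```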
-
📚🌃 Continuing my dive into data structures and algorithms. 🙂
🌳 Tonight’s Focus: Chapter 19 – Binary Tree Traversal
In linear structures like arrays or linked lists, we move step-by-step: 0️⃣ ➡️ 1️⃣ ➡️ 2️⃣ ➡️ 3️⃣
But trees are hierarchical 🌳, so we use a different approach: Breadth-First 🔺 and Depth-First 🐋 Traversal.
✅ FYI
-Tree depth helps us understand how far a node is from the root.
-The goal is to visit every node and represent the full structure.
⚙️ Traversal Basics
Each node goes through two phases:
-Discovered Collection – We identify a node (starting from the root) and add it to this list as soon as it's found.
-Explored Collection – After a node is discovered, we examine its children. Once all its children have been discovered, the node moves to this list.
🔺 Breadth-First Traversal
-Uses a queue (First In, First Out).
-Visits nodes level by level, left to right, moving nodes from the discovered to the explored collection as they are processed.
-Example order: A → B → C → D → E…
🐋 Depth-First Traversal
-Uses a stack (Last In, First Out).
-Discovers nodes by traversing deep down the left-most path, then backtracking to the nearest unexplored node. During processing, nodes move from the discovered collection to the explored collection.
⚡ Performance
-Time: O(n)
-Space: O(n)
-Same across best, average, and worst cases.
📚 Might just do half a chapter for the more involved chapters next.
If you’re learning too (or just love emoji-powered breakdowns), follow along for more chapters in this series! 🚀
#JavaScript #Algorithms #Coding #DevNotes
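Both traversals above can be sketched in JavaScript with the same loop, swapping only the queue for a stack. The tree shape here is my own small example, built so that breadth-first visits A → B → C → D → E as in the post:

```javascript
// Hypothetical tree: A at the root, children B and C, with D and E under B.
const tree = {
  value: "A",
  children: [
    {
      value: "B",
      children: [
        { value: "D", children: [] },
        { value: "E", children: [] },
      ],
    },
    { value: "C", children: [] },
  ],
};

// Breadth-first: a queue yields level-by-level, left-to-right order.
function breadthFirst(root) {
  const discovered = [root];                 // the queue
  const explored = [];
  while (discovered.length > 0) {
    const node = discovered.shift();         // front of the queue
    explored.push(node.value);
    discovered.push(...node.children);       // children go to the back
  }
  return explored;
}

// Depth-first: a stack dives down the left-most path first.
function depthFirst(root) {
  const discovered = [root];                 // the stack
  const explored = [];
  while (discovered.length > 0) {
    const node = discovered.pop();           // top of the stack
    explored.push(node.value);
    // Push right-to-left so the left-most child is popped (explored) first.
    for (let i = node.children.length - 1; i >= 0; i--) {
      discovered.push(node.children[i]);
    }
  }
  return explored;
}

breadthFirst(tree); // → ["A", "B", "C", "D", "E"]
depthFirst(tree);   // → ["A", "B", "D", "E", "C"]
```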
-
Python Libraries & Frameworks
NumPy: Transform your data with high-performance numerical operations.
Pandas: Master data manipulation and analysis effortlessly.
Matplotlib: Visualize data like a pro with this versatile plotting library.
Requests: Simplify HTTP requests for seamless API integration.
Flask: Build robust web applications with this lightweight framework.
Django: Power up your web development projects with a full-fledged framework.
Beautiful Soup: Web scraping made easy – parse HTML and XML effortlessly.
TensorFlow: Dive into machine learning with this open-source library.
PyTorch: Empower your neural network projects with this dynamic library.
Scikit-learn: Implement efficient machine learning algorithms with ease.
SQLAlchemy: Craft SQL queries effortlessly with this SQL toolkit.
FastAPI: Develop APIs quickly with high performance and auto-generated docs.
Celery: Supercharge your Python apps with distributed task queues.
Pygame: Unleash your creativity by building games with this library.
Twisted: Develop event-driven networking applications seamlessly.
Credit - thealpha
Follow us on Instagram 👉🏻 https://lnkd.in/ehA5ePqX
Follow us on LinkedIn 👉🏻 https://lnkd.in/e2sq98PN https://lnkd.in/e-9dJf8i
Follow us on Facebook 👉🏻 https://lnkd.in/eWcXVwAt
-
📚🌃 Continuing my dive into data structures and algorithms. 🙂
🫧 🔁 🫧 Tonight’s focus: Chapter 22: Bubble Sort ❌
Bubble Sort is one of the simplest sorting algorithms, but also one of the most inefficient. It’s often taught not because it’s fast, but to help you recognize and avoid sluggish 🐌 sorting patterns in your own code.
⚙️ How Bubble Sort Works
-Start at the beginning of the list.
-Compare each pair of adjacent elements.
-If the first is greater than the second, swap them.
-Move one step forward and repeat the comparison.
-Once you reach the end, go back to the beginning and repeat the process.
-Continue this cycle until all items are sorted.
-When it runs a full pass with no swaps, the list is officially sorted.
✅ Key Notes
-Its best-case scenario is a sorted list: ironically, the one time you don’t need a sort.
-It’s slowest when the list is in reverse order, because every element needs to be moved.
⚡🐢🐌 Performance
-Time complexity: O(n²) in the worst and average case; O(n) in the best case (one confirming pass over an already-sorted list).
-Even small lists can take many steps; sorting 4 numbers might involve 8+ steps.
-Requires multiple passes through the list, and a naive version keeps comparing elements that have already bubbled into place, so it’s inefficient for large datasets.
If you're learning too, or just love a good emoji-powered breakdown, follow along for more chapters in this series! 🚀
#JavaScript #Algorithms #Coding #DevNotes
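The cycle above, including the "full pass with no swaps" stopping rule, fits in a few lines of JavaScript. A minimal sketch; the function name and sample input are my own:

```javascript
// Bubble sort: repeatedly swap adjacent out-of-order pairs
// until a full pass makes no swaps.
function bubbleSort(arr) {
  const a = [...arr];                          // copy so the input isn't mutated
  let swapped = true;
  while (swapped) {
    swapped = false;                           // assume sorted until proven otherwise
    for (let i = 0; i < a.length - 1; i++) {
      if (a[i] > a[i + 1]) {
        [a[i], a[i + 1]] = [a[i + 1], a[i]];   // swap the adjacent pair
        swapped = true;
      }
    }
  }
  return a; // a full pass with no swaps means the list is sorted
}

bubbleSort([4, 2, 7, 1]); // → [1, 2, 4, 7]
```

The `swapped` flag is what gives the O(n) best case on an already-sorted list: the first pass finds nothing to swap and the loop exits immediately.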
-
Just shared a new article about how to build a secure, offline (air-gapped) vector search setup using Elasticsearch — and transform it into a real enterprise-grade Vector Database.
Here’s what I covered:
💡 Uploading ML models like sentence-transformers/all-MiniLM-L12-v2 with Eland
⚙️ Building ingest pipelines to automatically create embeddings
📦 Setting up dense_vector mappings for semantic search fields
🔍 Running k-NN vector queries to find results by meaning
🔒 Designing for air-gapped (offline) environments — key for security and compliance
This setup shows how companies can move beyond keyword search and build systems that truly understand context and intent — even without internet access.
👉 Read the full article here:
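For a rough idea of the dense_vector mapping and k-NN query mentioned above, here are the two request bodies as JavaScript objects. This is a sketch assuming Elasticsearch 8.x, not the article's actual code; the field names are made-up placeholders, and the 384 dimensions come from the all-MiniLM-L12-v2 model:

```javascript
// Index mapping: a text field plus a dense_vector field for its embedding.
const mapping = {
  mappings: {
    properties: {
      content: { type: "text" },
      content_vector: {
        type: "dense_vector",
        dims: 384,              // all-MiniLM-L12-v2 produces 384-dim embeddings
        index: true,            // enable approximate k-NN search
        similarity: "cosine",
      },
    },
  },
};

// k-NN search body: retrieve the 10 nearest documents by embedding similarity.
const knnQuery = {
  knn: {
    field: "content_vector",
    query_vector: new Array(384).fill(0.1), // normally: the embedded query text
    k: 10,
    num_candidates: 100,        // candidates examined per shard before ranking
  },
};
```

In the article's setup, the ingest pipeline would fill `content_vector` automatically, and the query vector would come from running the same model over the search text.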
-
📚🌃 Continuing my dive into data structures and algorithms. 🙂
🃏↪️🃏 Tonight’s focus: Chapter 23: Insertion Sort
Insertion Sort is like organizing cards 🃏 in your hand one at a time: you compare and place each card in its correct spot.
⚙️ How Insertion Sort Works
-Start with the second item in the list (the first item is considered sorted).
-Mark it as the active item. 📌
-Compare the active item to the items on its left (aka the sorted region).
-Shift each larger sorted item one slot to the right until you find a value smaller than the active item, then insert the active item just to its right.
-Repeat for each item in the unsorted region, reassigning the active item 📌 each time.
✅ Key Notes
-Follows a “look left” approach: each new active item 📌 is compared to those before it, one by one, moving toward the first index, until it finds a value that is less than itself.
⚡ Performance
-Time: O(n²) in the average and worst case. Best case (already sorted): O(n).
-Space: Uses constant memory; no extra space needed.
-Not ideal for large datasets, but useful when memory is limited or the list is mostly sorted.
-OK for small datasets.
If you're learning too, or just love a good emoji-powered breakdown, follow along for more chapters in this series! 🚀
#JavaScript #Algorithms #Coding #DevNotes
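The "look left" steps above translate into a short JavaScript sketch; the function name and sample input are my own, not from the chapter:

```javascript
// Insertion sort: item 0 starts as the sorted region; each active item
// is compared leftward and inserted into its correct spot.
function insertionSort(arr) {
  const a = [...arr];                   // copy so the input isn't mutated
  for (let i = 1; i < a.length; i++) {
    const active = a[i];                // 📌 the active item
    let j = i - 1;
    // "Look left": shift each larger sorted item one slot to the right.
    while (j >= 0 && a[j] > active) {
      a[j + 1] = a[j];
      j--;
    }
    a[j + 1] = active;                  // insert just right of the smaller value
  }
  return a;
}

insertionSort([5, 2, 4, 6, 1]); // → [1, 2, 4, 5, 6]
```

On a mostly sorted list the inner `while` loop barely runs, which is why the best case drops to O(n).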
-
🚀 It All Starts with Arrays
A few days ago, I was solving a LeetCode problem, one of those “remove duplicates from a sorted list” kind of questions. It looked easy. My brain instantly screamed: “Use a Set!”
Well… the test cases failed miserably 😅
That’s when I noticed the question mentioned an in-place algorithm, meaning you had to modify the data directly without extra space. That was my lightbulb moment.
I realised that almost every data structure I’ve ever used traces back to the same thing: arrays.
Think about it 👇
HashMaps? Arrays with hashed indices.
Linked lists? Chained memory blocks.
Trees and graphs? Fancy linked lists.
Tries? Linked lists hiding behind hash maps.
It’s funny how the “basic stuff” quietly powers everything we call advanced. So, next time you’re debugging a complex structure or optimizing performance, give arrays and linked lists a little nod. They’re the real OGs. 🧠
Read the full post here: https://lnkd.in/dxZr9qG7
#SoftwareEngineering #DataStructures #BackendDevelopment #LearningInPublic #LeetCode
-
A Primer on Web Scraping in R: From Origins to Real-World Applications
In today’s data-driven world, insights are often hidden behind vast amounts of unstructured information scattered across the web. For a data scientist, manually visiting each webpage to extract relevant information is inefficient and nearly impossible at scale. This is where web scraping — an automated method to collect and structure web data — becomes an indispensable tool.
Using R, one of the most powerful languages for statistical computing, web scraping can be performed efficiently through dedicated packages such as rvest, xml2, and selectr. These tools simplify the process of extracting information from HTML pages and converting it into structured datasets ready for analysis.
This article explores the origins of web scraping, demonstrates its use in R, and provides real-world examples and case studies of how it’s transforming industries.
Origins of Web Scraping
As websites grew more complex — integrating CSS, JavaScript, and dynamic elements — the need for more sophisticated scraping… https://lnkd.in/dE6jTmdC