🚀 DSA Day 16: Diving into Linked Lists! 🔗🧠

After mastering sorting algorithms, it's time to shift gears from how we organize data to how we store it. Today, I started with Linked Lists!

✅ What is a Linked List? It is a linear data structure where elements aren't stored at fixed memory locations. Instead, each "Node" contains:
1️⃣ Data: the value you want to store.
2️⃣ Next: a pointer (link) to the next node in the line. ⛓️

Types I Explored:
🔹 Singly Linked List: one-way traffic, each node points only to the next.
🔹 Doubly Linked List: two-way street, nodes point to both the next and the previous ones.
🔹 Circular Linked List: no dead ends, the last node loops back to the start!

⚖️ Array vs. Linked List – The Showdown:
Memory: Arrays need a single contiguous block of space; Linked List nodes can be scattered anywhere in memory. 🧩
Insertion/Deletion: Linked Lists win! Once you have a reference to the node, you just change a pointer (O(1)) instead of shifting every single element like in an array (O(n)). (Walking to that node is still O(n), though.)
⚡ Access: Arrays win here. You can jump to any index instantly (O(1)), whereas in a Linked List you have to "walk" from the head (O(n)). 🚶‍♂️

🎯 The Lesson: use Arrays for fast lookups; use Linked Lists for frequent adding and removing! 🛠️

On to Day 17! ➡️

#Day16 #JavaScript #DSA #LinkedList #DataStructures #CodingJourney #LearningInPublic #WebDevelopment #Programming #TechBasics
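A minimal sketch of the node idea in plain JavaScript (the `Node` class and `valueAt` helper here are illustrative, not from any library):

```javascript
// A node holds a value and a pointer to the next node.
class Node {
  constructor(value) {
    this.value = value;
    this.next = null; // no next node yet
  }
}

// Linking three nodes by hand: 1 -> 2 -> 3
const head = new Node(1);
head.next = new Node(2);
head.next.next = new Node(3);

// "Walking" from the head is the O(n) access described above.
function valueAt(head, index) {
  let current = head;
  for (let i = 0; i < index && current; i++) {
    current = current.next;
  }
  return current ? current.value : undefined;
}
```

Note how access requires following pointers one step at a time, while re-pointing `next` is a single assignment.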
🚀 Day 8/100 – #100DaysOfDSA

Today's challenge pushed me to go beyond basic sorting and think about efficient algorithms with optimal time complexity.

🔹 Problem Solved: Sort an Array (Merge Sort, without using built-in functions)

💡 Key Learning:
👉 To achieve O(n log n) time complexity, we need advanced sorting algorithms like:
Merge Sort (divide & conquer)
Quick Sort (partition-based approach)

👉 Approach I focused on: Merge Sort
Divide the array into halves
Recursively sort each half
Merge the sorted halves

✅ Time Complexity: O(n log n)
✅ Stable sorting algorithm
⚠️ Space Complexity: O(n) (extra space required)

🔥 Alternative: Quick Sort
Faster in practice (on average)
Works in-place
⚠️ Worst case: O(n²)
✅ Average: O(n log n)

🔥 What I learned today: not all sorting algorithms are equal; choosing the right one depends on constraints like time, space, and input size. Moving from basic to advanced concepts step by step 🚀

#100DaysOfCode #DSA #Sorting #MergeSort #QuickSort #ProblemSolving #CodingJourney #JavaScript #TechGrowth #SoftwareEngineer #LearningInPublic
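The three steps (divide, recursively sort, merge) translate almost line-for-line into JavaScript; a standard textbook sketch:

```javascript
// Merge sort: divide, recursively sort each half, merge.
// O(n log n) time, O(n) extra space, stable.
function mergeSort(arr) {
  if (arr.length <= 1) return arr;            // base case: already sorted
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));  // sort the left half
  const right = mergeSort(arr.slice(mid));    // sort the right half
  return merge(left, right);                  // combine the sorted halves
}

// Merge two sorted arrays. Using <= keeps equal elements in their
// original relative order, which is what makes the sort stable.
function merge(left, right) {
  const result = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    if (left[i] <= right[j]) result.push(left[i++]);
    else result.push(right[j++]);
  }
  return result.concat(left.slice(i)).concat(right.slice(j));
}
```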
Most RAG tutorials explain the pipeline as: Text → Chunks → Embeddings → Vector DB → Answer. This is correct, but it hides the most important part of the system: the embedding layer. An embedding model doesn’t just "store text." It maps language into a high-dimensional mathematical space where semantic similarity—not keyword matching—drives the results. If your retrieval layer fails, no amount of prompt engineering or model fine-tuning will save your output. Bad context = bad answers. Here are the three low-level decisions that actually determine RAG performance: 1. The Embedding Model I’m using all-MiniLM-L6-v2 via ChromaDB. It’s a powerhouse for local experimentation: fast, compact, and effective. The Trade-off: Smaller models offer lower latency but may miss nuance in highly technical datasets. Larger models capture deeper semantics but demand more memory and compute. Choose based on your domain, not just your benchmarks. 2. Strategic Chunking Embedding models only see the individual chunk, not the full document. If your chunks are too large, the meaning is diluted; too small, and you lose critical context. My Setup: chunk_size = 1000 with chunk_overlap = 150. Why? That 150-token overlap is non-negotiable. It acts as a semantic bridge, preserving continuity across boundaries where information would otherwise be lost. 3. Similarity Metrics In ChromaDB, I use metadata={"hnsw:space": "cosine"}. Since we care about semantic alignment rather than raw vector magnitude, cosine similarity is the standard for ensuring the system matches meaning, not just volume. The bottom line: A strong RAG system isn't just about the LLM—it's about the precision of your retrieval layer. Good RAG starts before the LLM ever sees the prompt. Check out how I’ve implemented these patterns here: 👉 https://lnkd.in/eUNRHYMG Animation mode: https://lnkd.in/eH79k-Mg What’s your go-to strategy for improving retrieval accuracy? Let’s discuss in the comments. 
👇 #AIEngineering #RAG #Embeddings #VectorDatabase #ChromaDB #SentenceTransformers #LLM #SemanticSearch #MachineLearning #BuildingInPublic
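The cosine metric itself is simple enough to see in a few lines. A plain-JavaScript sketch (the vectors here are toy stand-ins for real embeddings; ChromaDB computes this internally via its HNSW index):

```javascript
// Cosine similarity: dot(a, b) / (|a| * |b|).
// It measures the angle between two vectors and ignores their
// magnitude, which is why it matches "meaning, not volume".
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Identical directions score 1, orthogonal vectors score 0, and scaling a vector (e.g. doubling every component) leaves the score unchanged.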
🚀 I just open-sourced claude-reimagined, the bootstrap script I wish existed when I first set up Claude Code.

Setting up a real Claude Code workstation isn't `npm install`. It's an afternoon of stitching together the CLI, MCP servers, hooks, plugins, skills, and a dozen settings that nobody documents in one place. So I automated that afternoon.

A lean Claude Code setup that cuts token usage (RTK, caveman), offloads heavy output (context-mode), queries code structurally (code-review-graph), auto-selects cost-efficient models (subagent-model-router), and preserves state (pre-compact), with 40+ built-in skills.

If you've ever burned a context window watching a build log scroll by, or paid Opus prices for a one-line lookup, this is for you.

⭐ Star it, fork it, break it, send PRs: https://lnkd.in/gzSdkEgH

#ClaudeCode #AI #DeveloperTools #Anthropic #OpenSource #LLM #DevProductivity
🚀 From theory → to visualization 👇

I recently built an Efficient Page Replacement Algorithm Simulator as part of my Operating Systems learning, and I'm excited to share it here!

While studying concepts like FIFO, LRU, and Optimal, I realized that understanding them only from textbooks can be confusing. So I decided to build a simulator that shows exactly what happens inside memory step-by-step. 📸 (Screenshot attached below shows the actual simulation output)

💡 What this project does:
Simulates FIFO, LRU, and Optimal algorithms
Displays step-by-step memory frame updates
Clearly shows page hits and page faults
Provides graphical comparison for better analysis

🛠 Tech Stack: HTML • CSS • JavaScript • Chart.js, deployed using Vercel

💭 Key Learning: building this project helped me understand memory management much more deeply. Visualizing concepts makes a huge difference compared to just reading theory.

🌐 Live Demo: https://eparx.vercel.app/
📂 GitHub Repository: https://lnkd.in/g3_pgehs

#OperatingSystems #WebDevelopment #JavaScript #StudentProject #LearningByDoing #BuildInPublic #TechProjects
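As a taste of what the simulator animates, here is a minimal FIFO fault counter in plain JavaScript (a simplified sketch, not the simulator's actual code):

```javascript
// FIFO page replacement: on a fault with full frames, evict the page
// that entered memory first. Returns the total number of page faults.
function fifoPageFaults(pages, frameCount) {
  const frames = []; // acts as a queue: index 0 holds the oldest page
  let faults = 0;
  for (const page of pages) {
    if (frames.includes(page)) continue; // page hit, nothing to do
    faults++;                            // page fault
    if (frames.length === frameCount) frames.shift(); // evict oldest
    frames.push(page);                   // load the new page
  }
  return faults;
}
```

Running the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5 through this with 3 vs. 4 frames even reproduces Belady's anomaly: more frames, more faults.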
Day 08 of My Learning Journey

Today I learned about one of the simplest searching algorithms: Linear Search.

🔹 What is Linear Search? Linear Search means checking each element one by one until the target is found.

🔹 How it works
- Start from the first element
- Compare with the target
- If it doesn't match, move to the next
- Repeat until found or the end of the list

🔹 Example
let arr = [10, 20, 30, 40, 50];
let target = 30;

🔹 Steps: 10 ❌ → 20 ❌ → 30 ✅
- Output: Index = 2

🔹 Code Implementation
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) {
      return i;
    }
  }
  return -1;
}

🔹 Time Complexity
⏱ Best Case → O(1) (found immediately)
⏱ Worst Case → O(n) (search the entire array)

🔹 When to Use?
✔ Small datasets
✔ Unsorted arrays
✔ Simple use cases

🔹 When NOT to Use?
❌ Large datasets (slow performance)

Key Takeaway: Linear Search is easy to understand and implement, but not efficient for large data.

#Day08 #JavaScript #DSA #FrontendDeveloper #LinearSearch #CodingJourney #100DaysOfCode #LearnInPublic #WebDevelopment #ContinuousLearner
Day 07: Cracking the "Non-Divisible Subset" Logic 🧩

Today was a true test of algorithmic thinking. I tackled a problem that looks like a standard array search but is actually a brilliant exercise in number theory and remainder math.

The Challenge: given a set of numbers, find the maximum size of a subset where the sum of any two numbers is not divisible by K.

The Strategy (Remainder Frequency): instead of checking every possible pair (which would be very slow), I focused on remainders. If two numbers sum to a multiple of K, their remainders r1 and r2 must satisfy r1 + r2 = K (or both be 0).

const s = [19, 10, 12, 10, 24, 25, 22];
const k = 4;

function nonDivisibleSubset(k, s) {
  let freq = new Array(k).fill(0);
  // count remainders
  for (let num of s) {
    freq[num % k]++;
  }
  let count = 0;
  // remainder 0 case: at most one such number can be kept
  if (freq[0] > 0) count++;
  // check complementary pairs (i, k - i)
  for (let i = 1; i <= Math.floor(k / 2); i++) {
    if (i === k - i) {
      // special case when k is even: at most one number with remainder k/2
      if (freq[i] > 0) count++;
    } else {
      count += Math.max(freq[i], freq[k - i]);
    }
  }
  return count;
}

console.log(nonDivisibleSubset(k, s));

Key Takeaway: when a problem involves divisibility, don't look at the numbers; look at the remainders. It turns a complex pairing problem into a simple counting one!

One full week of coding done. The momentum is real! 🚀

#JavaScript #Algorithms #NumberTheory #100DaysOfCode #CodingChallenge #ProblemSolving #SoftwareEngineering
Built as a team project of three under the guidance of Arjun Saini, this started as an Operating Systems assignment but turned into a fully interactive learning tool.

We developed an Efficient Page Replacement Algorithm Simulator that visualizes how FIFO, LRU, and Optimal algorithms actually work. Instead of just studying theory, you can now:
🔹 See memory frames update step-by-step
🔹 Track page hits, faults, and replacements
🔹 Compare algorithm performance in real time

💡 This project helped us move beyond theory and truly understand memory management concepts.

⚙️ Tech Stack: React + Tailwind CSS + Chart.js
🔗 Live Demo: https://lnkd.in/gMb33Gk8
🔗 GitHub Repo: https://lnkd.in/gBuGegeh

👥 Team: Babul Kumar, Aswathi Ashokan M
🙏 Grateful to Arjun Saini for guidance and support throughout the project.

Would love your feedback! 🙌

#OperatingSystems #ReactJS #WebDevelopment #TeamProject #Learning #ComputerScience
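To give a flavor of what the simulator tracks, the LRU policy can be sketched as a small fault counter in plain JavaScript (a simplified illustration, not the team's actual React implementation):

```javascript
// LRU page replacement: on a fault with full frames, evict the page
// that has gone unused for the longest time.
function lruPageFaults(pages, frameCount) {
  const frames = []; // ordered by recency: last element = most recent
  let faults = 0;
  for (const page of pages) {
    const idx = frames.indexOf(page);
    if (idx !== -1) {
      frames.splice(idx, 1); // hit: move the page to the "recent" end
    } else {
      faults++;              // fault
      if (frames.length === frameCount) frames.shift(); // evict LRU
    }
    frames.push(page);       // page is now the most recently used
  }
  return faults;
}
```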
🚀 Day 10 of DSA Practice – Flattening the Chaos!

At first glance, flattening an array seems simple… but what happens when it's nested infinitely? 🤯

🔍 Problem: flatten a nested array into a single level
👉 Example: [1, [2, 3], [4, [5, 6]]] → [1, 2, 3, 4, 5, 6]

💭 My Approach:
1️⃣ Understand how nested structures behave
2️⃣ Handle elements at any depth
3️⃣ Make the solution work for all edge cases

🧠 Key Takeaway: when dealing with nested data, break the problem into smaller parts and think recursively or iteratively.

⚡ Realization: it's not just about flattening one array… it's about handling infinite levels of complexity 🔥

💻 Check out my implementation: 🔗 GitHub – Day 10: Flatten Array

💬 Question for you: what's your favorite trick for handling deeply nested data structures?

#DSA #JavaScript #CodingJourney #Recursion #100DaysOfCode #FrontendDeveloper #ProblemSolving #TechTips
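For reference, the classic recursive version fits in a few lines (one possible implementation; the linked GitHub version may differ):

```javascript
// Recursively flatten an arbitrarily nested array into one level.
function flatten(arr) {
  const result = [];
  for (const item of arr) {
    if (Array.isArray(item)) {
      result.push(...flatten(item)); // recurse into nested arrays
    } else {
      result.push(item);             // leaf value: keep as-is
    }
  }
  return result;
}
```

Modern JavaScript also ships this built-in as `arr.flat(Infinity)`, which is handy to cross-check a hand-rolled solution against.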
🚀 DSA Day 17: From Theory to Code – Building the Foundation 🏗️

Today was a busy one, but I made sure to squeeze in some growth time. I focused on the "blueprint" of a Singly Linked List:

✅ The Node Factory: created a constructor that gives every piece of data its own value and a next pointer.
✅ The Structure: initialized my MyLinkedList class with a head, a tail, and a length tracker to keep things organized.

It's a simple start, but it's the skeleton that makes everything else possible.

🔜 Coming up next: the "heavy lifting" begins! I'll be tackling:
🔹 Get: finding a value at a specific index (the "walk").
🔹 Insert: adding new nodes without breaking the chain.
🔹 Delete: removing nodes and re-linking the pointers.

The goal isn't just to write the code, but to visualize how those pointers shift in memory. 🧠✨

#Day17 #JavaScript #DSA #CodingJourney #LinkedList #LearningInPublic #WebDev #DataStructures #Consistency
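A sketch of that blueprint in JavaScript (the `push` and `get` methods are my illustration of the upcoming steps, not finalized code):

```javascript
// The "node factory": each node carries a value and a next pointer.
class Node {
  constructor(value) {
    this.value = value;
    this.next = null;
  }
}

// The structure: head, tail, and a length tracker.
class MyLinkedList {
  constructor() {
    this.head = null;
    this.tail = null;
    this.length = 0;
  }

  // Append at the tail in O(1), thanks to the tail pointer.
  push(value) {
    const node = new Node(value);
    if (!this.head) {
      this.head = node;      // first node: it is both head and tail
    } else {
      this.tail.next = node; // link the old tail to the new node
    }
    this.tail = node;
    this.length++;
    return this;             // allow chaining: list.push(1).push(2)
  }

  // "The walk": follow next pointers from the head, O(n).
  get(index) {
    if (index < 0 || index >= this.length) return null;
    let current = this.head;
    for (let i = 0; i < index; i++) current = current.next;
    return current.value;
  }
}
```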
🚀 Day 6/100 – #100DaysOfDSA

Today's focus was on searching vs. sorting and understanding efficiency differences.

🔹 Problems Solved:
1. Binary Search
2. Bubble Sort

💡 Key Learnings:

👉 Problem 1: Binary Search
Works only on sorted arrays
Divides the search space in half each time
👉 Approach:
Find the mid index
Compare with the target
Move left or right accordingly
✅ O(log n) time complexity
✅ Very efficient for large datasets

👉 Problem 2: Bubble Sort
Most people implement Bubble Sort naively, but today I learned how to optimize it using an early break condition 🚀
👉 Approach: if no swaps happen in a pass, the array is already sorted, so we can stop early instead of continuing unnecessary iterations.
✅ Best case O(n) with the swap flag (the worst case is still O(n²))

🔥 What I learned today: choosing the right algorithm matters more than just solving the problem.

Consistency continues 💪 Day 6 done!

#100DaysOfCode #DSA #BinarySearch #Sorting #LeetCode #ProblemSolving #CodingJourney #JavaScript #TechGrowth #SoftwareEngineer #LearningInPublic
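Both of today's algorithms fit in a few lines of JavaScript (a sketch of the standard implementations):

```javascript
// Binary search on a sorted array: halve the search space each step.
function binarySearch(arr, target) {
  let low = 0, high = arr.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) low = mid + 1; // target is in the right half
    else high = mid - 1;                  // target is in the left half
  }
  return -1; // not found
}

// Bubble sort with the early-break optimization: if a full pass makes
// no swaps, the array is already sorted and we stop (best case O(n)).
function bubbleSort(arr) {
  const a = [...arr]; // copy so the input stays untouched
  for (let i = 0; i < a.length - 1; i++) {
    let swapped = false;
    for (let j = 0; j < a.length - 1 - i; j++) {
      if (a[j] > a[j + 1]) {
        [a[j], a[j + 1]] = [a[j + 1], a[j]]; // swap adjacent pair
        swapped = true;
      }
    }
    if (!swapped) break; // no swaps this pass: sorted, exit early
  }
  return a;
}
```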