Day 25 of 100 Completed

Today I connected cycle detection with arrays and wrapped up the Titanic EDA work.

• #287 - Find the Duplicate Number (Medium) - solved
• Completed Titanic dataset EDA

🔎 Focus Areas
• Applying cycle detection in arrays (Floyd’s algorithm)
• Understanding value-index mapping as a linked structure
• Finalizing insights from a real-world dataset

💡 Key Takeaways (DSA)
📌 #287 Find the Duplicate Number
This looks like an array problem, but it hides a linked list:
• treat index → value as a pointer
• detect the cycle with slow and fast pointers
• the meeting point leads to the duplicate

Key insight: some problems disguise known patterns - recognizing them is the real skill.

🚀 Python + EDA (Titanic Dataset)
Completed the EDA process on the Titanic dataset.

💡 Key Takeaways (Python)
• Able to clean, explore, and visualize data end-to-end
• Better understanding of relationships between features
• More confidence in using Pandas and visualization libraries together

⚡ Honest Reflection
This was a milestone day. Finishing a full EDA cycle on a real dataset is a solid step forward. On the DSA side, recognizing cycle detection in an array context shows real improvement in pattern recognition. I still need to go deeper into extracting meaningful insights from data, not just performing the steps.

Consistency is strong. Progress is becoming more practical and applied.

Patterns recognized: Cycle Detection | Floyd’s Algorithm | Array as Linked List | Pattern Recognition | EDA | End-to-End Analysis

#100DaysOfCode #DSA #Python #EDA #Pandas #LeetCode #BuildInPublic #CodingJourney #Consistency
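The post doesn't include code, so here is a minimal sketch of the two-phase Floyd approach it describes for #287, following the standard formulation of the algorithm:

```python
def find_duplicate(nums):
    # Phase 1: advance slow by one step and fast by two through the
    # implicit linked list in which index i "points to" nums[i].
    slow = fast = nums[0]
    while True:
        slow = nums[slow]
        fast = nums[nums[fast]]
        if slow == fast:
            break
    # Phase 2: restart one pointer at the head; moving both one step
    # at a time, they meet at the cycle entrance - the duplicate value.
    slow = nums[0]
    while slow != fast:
        slow = nums[slow]
        fast = nums[fast]
    return slow
```

O(n) time, O(1) space, and the array is never modified.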
100 Days of Code: Cycle Detection & Titanic EDA Complete
More Relevant Posts
Day 22/100 – DSA Journey

Problem: Find Mode in Binary Search Tree (BST)

Today’s problem focused on understanding how Binary Search Trees behave and how we can efficiently extract useful insights from them.

Understanding the BST
A Binary Search Tree follows a structured property:
• Left subtree → values ≤ root
• Right subtree → values ≥ root
Because of this, an Inorder Traversal (Left → Root → Right) visits the values in sorted order.

Why Inorder Traversal?
Since duplicates appear consecutively in a sorted sequence, inorder traversal allows us to:
• Track frequency without using extra space
• Compare the current value with the previous value
• Efficiently determine the most frequent element (mode)

Approach Used
• Traverse the BST with an inorder traversal
• Maintain the previous value, the current count, and the maximum frequency
• Update the result list whenever a new maximum frequency is found

Key Learning
This problem highlights how leveraging tree properties can optimize solutions. Instead of using extra space (like hashmaps), we used the traversal order itself to achieve an efficient solution.

Conclusion
Understanding the underlying structure of data (like BST properties) is often more powerful than brute force. Smart traversal choices can significantly reduce space complexity and improve performance.

#Day22 #100DaysOfCode #DSA #BinarySearchTree #Python #CodingJourney #LeetCode #ProblemSolving
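A sketch of the described approach (the post includes no code, so the `TreeNode` class and function name are illustrative). Note the O(1)-space claim holds aside from the recursion stack:

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def find_mode(root):
    result, prev, count, max_count = [], None, 0, 0

    def inorder(node):
        nonlocal result, prev, count, max_count
        if node is None:
            return
        inorder(node.left)
        # Duplicates are adjacent in inorder order, so a simple
        # "same as previous?" check tracks each value's frequency.
        count = count + 1 if node.val == prev else 1
        prev = node.val
        if count > max_count:
            max_count, result = count, [node.val]
        elif count == max_count:
            result.append(node.val)
        inorder(node.right)

    inorder(root)
    return result
```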
Vector databases are great, but they aren't always the right tool for complex document intelligence. 🧠📉 If you are tired of context fragmentation and untraceable LLM hallucinations, it is time to look at Vectorless RAG with Page Index. By swapping out mathematical embeddings for a reasoning-based, hierarchical document tree, you can achieve upwards of 98% accuracy on complex Q&A tasks with perfect citation traceability. I wrote a complete guide on how this architecture works, including a full Python code implementation. Read it here: https://lnkd.in/gRuXiSxK #ArtificialIntelligence #RAG #PythonDeveloper #MachineLearning #AIEngineering
A 10-million-document RAG dataset occupies 31 GB of RAM at float32. turbovec fits it in just 4 GB - and now searches it faster than FAISS.

I just shipped a new release of turbovec: a Rust vector index with Python bindings, built on Google Research’s TurboQuant algorithm - data-oblivious 2-4 bit quantization that matches the Shannon lower bound on distortion, with zero training and no rebuilds when the corpus grows.

What’s in the box:
→ Hand-written SIMD kernels - 12-20% faster than FAISS FastScan on ARM; match-or-beat on x86.
→ O(1) stable-id delete and save/load. The corpus is live and mutable, not a static snapshot.
→ Drop-in integrations for LangChain, LlamaIndex, and Haystack.
→ Published benchmarks (recall, speed, compression) at d=200/1536/3072 - every number reproducible from the repo.

If you’re building RAG where memory, latency, or privacy matters, give it a spin.

GitHub: https://lnkd.in/e5M4dVRk
Paper: https://lnkd.in/eHRmpYms

#RAG #VectorSearch #OpenSource #Rust #Python #LLM #Gemma4
🚀 Stop iterating through rows like it’s 2010.

In a recent pipeline, we were processing 5 million records to calculate a rolling score. Using a standard loop took forever and pegged the CPU at 100%.

Before optimisation:

for i in range(len(df)):
    df.at[i, 'score'] = df.at[i, 'val'] * 1.05 if df.at[i, 'flag'] else df.at[i, 'val']

After optimisation:

import numpy as np
df['score'] = np.where(df['flag'], df['val'] * 1.05, df['val'])

Performance gain: 85x faster execution.

Vectorisation isn’t just a "nice to have" - it’s the difference between a pipeline that crashes at 2 AM and one that finishes in seconds. By letting NumPy handle the heavy lifting in C, we eliminate almost all of the per-row Python overhead.

If you're still using `.iterrows()` or manual loops for column transformations, it’s time to refactor. The performance delta on large datasets is simply too massive to ignore.

What is the biggest "bottleneck" function you’ve refactored recently that gave you a massive speedup?

#DataEngineering #Python #PerformanceTuning #Vectorization #DataScience
Day 47/100 – LeetCode Challenge

🔍 Problem Solving in Action | Bit Manipulation

Today I worked on an interesting problem: “Number of Even and Odd Bits”

👉 Problem Statement:
Given a positive integer n, count how many bits with value 1 appear at even indices and at odd indices (indices are counted from right to left, starting at 0).

💡 My Thought Process:
First, I understood that this problem is about the binary representation. I realized I don’t need to convert the number to a string - I can work directly with bit manipulation. I decided to:
• traverse each bit using n & 1
• keep track of the current index
• count based on whether the index is even or odd
• after checking each bit, right-shift the number (n >> 1) to move on

⚙️ Approach:
• Initialize two counters: even = 0, odd = 0
• Loop while n > 0
• Check the last bit using n & 1
• Increment the respective counter based on the index
• Shift right and repeat

✅ Key Learning:
This problem strengthened my understanding of:
• bitwise operations (&, >>)
• index-based logic
• efficient problem solving without extra space

📌 Example:
Input: n = 50
Binary: 110010
Output: [1, 2]

🚀 Always exciting to see how simple bit operations can solve problems efficiently!

#ProblemSolving #Python #DataStructures #Algorithms #BitManipulation #CodingJourney
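The steps above can be sketched like this (a minimal version; the function name is illustrative, not the author's code):

```python
def even_odd_bit(n):
    even = odd = 0
    i = 0                      # bit index, counted from the right
    while n:
        if n & 1:              # test the lowest bit
            if i % 2 == 0:
                even += 1
            else:
                odd += 1
        n >>= 1                # drop the bit we just examined
        i += 1
    return [even, odd]
```

For n = 50 (binary 110010) this yields [1, 2], matching the example.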
Day 56: Count Increasing Subarrays

Task: Count the total number of strictly increasing contiguous subarrays of size 2 or more within an array.

Solution: Linear scan + combinatorics.

Instead of generating every possible subarray and checking whether it is increasing (an inefficient O(N²) process), I solved this in a single O(N) pass using strictly O(1) space.

By iterating through the array and counting the length L of the current increasing run, we can use a neat mathematical fact: an increasing sequence of length L contains exactly L × (L - 1) / 2 valid contiguous subarrays.

Whenever the increasing sequence breaks (or we reach the end of the array), I plug the length into that formula, add it to the total count, reset the run length to 1, and keep going.

A perfect blend of array traversal and math!

#geekstreak60 #npci #codingchallenge #dailylearning #programmer #python #arrays #combinatorics #algorithms

National Payments Corporation Of India (NPCI) GeeksforGeeks
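A sketch of the single-pass approach described above (the post has no code, so names are illustrative):

```python
def count_increasing_subarrays(arr):
    total = 0
    run = 1  # length of the current strictly increasing run
    for i in range(1, len(arr)):
        if arr[i] > arr[i - 1]:
            run += 1
        else:
            # a run of length L contains L*(L-1)/2 subarrays of size >= 2
            total += run * (run - 1) // 2
            run = 1
    total += run * (run - 1) // 2  # flush the final run
    return total
```

For [1, 2, 3] the runs give 3 × 2 / 2 = 3 subarrays: [1,2], [2,3], and [1,2,3].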
Efficient grid navigation often requires a clear understanding of how different components or "tiles" connect to one another. A common challenge in pathfinding is determining whether a continuous route exists from a starting point to a destination, given specific connection rules for each cell.

In this approach, I used a transition table that maps an incoming direction (Top, Right, Bottom, Left) to an outgoing direction based on the tile type. By defining these movements as a fixed set of rules, the algorithm can traverse the grid in constant space, O(1), excluding the input itself.

The logic follows a simple "follow the pipe" strategy:
• Start from the initial cell and check both possible exit directions.
• Update the current position based on the permitted transition.
• If the next cell does not support the incoming direction, the path is invalid.
• Continue until the target coordinates are reached or the path breaks.

This method stays fast by avoiding recursion and auxiliary data structures while strictly respecting the grid's connectivity constraints.

#Algorithms #Python #CodingEfficiency #ProblemSolving #DataStructures
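A sketch of the transition-table idea under assumed conventions: the tile characters ('-', '|', '7', 'L'), the table entries, and the function name are all hypothetical, since the post doesn't specify its tile set. The step count is bounded by the grid size so a closed pipe loop cannot run forever:

```python
# Directions are directions of travel; moving 'R' means we enter the
# next tile from its left side.
MOVES = {'R': (0, 1), 'L': (0, -1), 'U': (-1, 0), 'D': (1, 0)}

# Hypothetical transition table: (tile, incoming travel direction) -> outgoing.
# '-' horizontal pipe, '|' vertical, '7' right-to-down elbow, 'L' up-to-right elbow.
TRANSITIONS = {
    ('-', 'R'): 'R', ('-', 'L'): 'L',
    ('|', 'D'): 'D', ('|', 'U'): 'U',
    ('7', 'R'): 'D', ('7', 'U'): 'L',
    ('L', 'D'): 'R', ('L', 'L'): 'U',
}

def can_reach(grid, start, target, direction):
    """Follow the pipe from `start` travelling `direction`; O(1) extra space."""
    r, c = start
    for _ in range(len(grid) * len(grid[0]) + 1):  # bound steps to avoid loops
        if (r, c) == target:
            return True
        dr, dc = MOVES[direction]
        nr, nc = r + dr, c + dc
        if not (0 <= nr < len(grid) and 0 <= nc < len(grid[nr])):
            return False  # walked off the grid
        key = (grid[nr][nc], direction)
        if key not in TRANSITIONS:
            return False  # next tile doesn't accept this incoming direction
        direction = TRANSITIONS[key]
        r, c = nr, nc
    return False
```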
Strong DSA skills are built when you move from basic understanding to advanced patterns.

Day 43/100 – Data Structures & Algorithms Journey

Today I explored a more advanced Sliding Window problem - Longest Repeating Character Replacement. It pushed me to think beyond basic window expansion and introduced the idea of maintaining constraints while optimizing the window.

Today’s Focus:
• Understanding advanced Sliding Window logic
• Learning how to track character frequency within a window
• Managing window size based on conditions
• Applying optimization instead of brute force

Why does this matter? Because real interview problems are not direct - they require combining concepts and thinking deeper.

Key Takeaways:
• Sliding Window can handle complex constraints efficiently
• Tracking frequency dynamically is powerful
• Optimization comes from understanding patterns deeply
• Advanced problems are just extensions of basic concepts

This problem helped me level up from basic patterns to advanced problem-solving. Step by step, moving towards mastery.

#Day43 #DSA #LeetCode #ProblemSolving #SoftwareEngineering #CodingJourney #100DaysOfCode #TechLearning #DeveloperJourney #Programming #Python #InterviewPreparation #CodingSkills #ComputerScience #FutureEngineer #TechCareers #SoftwareDeveloper #LearnInPublic #Consistency
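For reference, the standard sliding-window solution to Longest Repeating Character Replacement (LeetCode 424) looks like this - a sketch, not the author's code. The window is valid while (window size − count of its most frequent character) ≤ k:

```python
from collections import defaultdict

def character_replacement(s, k):
    count = defaultdict(int)
    left = max_freq = best = 0
    for right, ch in enumerate(s):
        count[ch] += 1
        max_freq = max(max_freq, count[ch])
        # If even the most frequent char can't fill the window with
        # at most k replacements, slide the window from the left.
        if (right - left + 1) - max_freq > k:
            count[s[left]] -= 1
            left += 1
        best = max(best, right - left + 1)
    return best
```

Each character enters and leaves the window at most once, so the pass is O(n).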
#Day17 – Rotation Function 🚀

Find the maximum value of F(k) = 0×arr_k[0] + 1×arr_k[1] + ... + (n-1)×arr_k[n-1], where arr_k is nums rotated by k.

Naive approach: O(n²) → too slow for n = 1e5

Optimized approach: find a recurrence relation!
F(k) = F(k-1) + S - n × nums[n-k], where S = sum(nums)

• Step 1: Compute F(0) directly
• Step 2: Use the recurrence for F(1) to F(n-1)
• Step 3: Track the maximum

Time: O(n) | Space: O(1)

Math is the ultimate optimization tool! 🎯

#LeetCode #Python #100DaysOfCode #Math #Optimization
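The three steps translate directly into code (a minimal sketch of the recurrence described above):

```python
def max_rotate_function(nums):
    n, S = len(nums), sum(nums)
    f = sum(i * v for i, v in enumerate(nums))  # Step 1: F(0)
    best = f
    for k in range(1, n):
        f += S - n * nums[n - k]  # Step 2: F(k) = F(k-1) + S - n*nums[n-k]
        best = max(best, f)       # Step 3: track the maximum
    return best
```

For nums = [4, 3, 2, 6]: F(0)=25, F(1)=16, F(2)=23, F(3)=26, so the answer is 26.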
Day 24 of 100 Completed

Today I reinforced cycle detection patterns and continued working with real-world data through EDA.

• #141 - Linked List Cycle (Easy) - solved
• Continued EDA on the dataset

🔎 Focus Areas
• Fast-slow pointer technique for cycle detection
• Recognizing repeated patterns across different problem types
• Going deeper into data understanding and cleaning

💡 Key Takeaways (DSA)
📌 #141 Linked List Cycle
This is a classic application of Floyd’s cycle detection:
• use slow and fast pointers
• if they meet → a cycle exists
• no extra space needed - efficient and elegant

Key insight: cycle detection isn’t limited to numbers - it applies to linked structures as well.

🚀 Python + EDA
Continued working on EDA and exploring the dataset further.

💡 Key Takeaways (Python)
• Better understanding of missing values and distributions
• More confidence in using Pandas for exploration
• Visualization is helping uncover patterns in the data

⚡ Honest Reflection
This was a steady day. Not very difficult, but important for reinforcing patterns. Cycle detection is now clearly a recurring concept across problems, which makes it easier to recognize. EDA still needs depth, especially in drawing meaningful insights instead of just running operations.

Consistency is holding. Progress is gradual but real.

Patterns recognized: Fast-Slow Pointers | Cycle Detection | Linked Lists | Data Cleaning | EDA | Pattern Recognition

#100DaysOfCode #DSA #Python #EDA #LinkedList #LeetCode #BuildInPublic #CodingJourney #Consistency
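The #141 approach in a few lines (a standard sketch; the `ListNode` class is the usual LeetCode-style definition, not taken from the post):

```python
class ListNode:
    def __init__(self, val=0):
        self.val = val
        self.next = None

def has_cycle(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next          # moves one step
        fast = fast.next.next     # moves two steps
        if slow is fast:          # pointers can only meet inside a cycle
            return True
    return False                  # fast reached the end: no cycle
```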