💡 A small algorithmic discovery while revisiting sorting algorithms

While revisiting sorting concepts like Bubble Sort and Insertion Sort, I tried implementing my own approach to sort an array. Interestingly, the logic that came to mind turned out to be very similar to the idea behind Insertion Sort.

The approach was simple:
• Start from the second element of the array
• Compare the current element with the previous elements
• Shift larger elements to the right
• Insert the current element in its correct position

Here is the JavaScript snippet I came up with:

const getSortedArr = (arr) => {
  const n = arr.length;
  for (let i = 1; i < n; i++) {
    const curr = arr[i];
    let prev = i - 1;
    // Check prev >= 0 first so we never read arr[-1]
    while (prev >= 0 && arr[prev] > curr) {
      arr[prev + 1] = arr[prev]; // shift the larger element right
      prev--;
    }
    arr[prev + 1] = curr; // place the element once, after shifting
  }
  return arr;
};

📊 Complexity Analysis
• Time Complexity
  - Best Case: O(n)
  - Average Case: O(n²)
  - Worst Case: O(n²)
• Space Complexity
  - O(1) (in-place sorting)

#JavaScript #Algorithms #Sorting #Learning #SoftwareEngineering #ProblemSolving
Implementing a Custom Sorting Algorithm in JavaScript
Cracking the Code: Rotated Binary Search 🔍

Is the left half sorted, or the right? That one question hides the entire logic! 💡

// Search in a rotated sorted array (ascending order, distinct values): [3, 4, 5, 6, 7, 0, 1, 2]
// The same array before rotation: [0, 1, 2, 3, 4, 5, 6, 7]

let arr = [4, 5, 6, 7, 0, 1, 2];
let target = 0;

function rotatedArraySearch(arr, target) {
  let start = 0;
  let end = arr.length - 1;
  while (start <= end) {
    let mid = Math.floor((start + end) / 2);
    if (arr[mid] === target) {
      return mid;
    }
    if (arr[start] <= arr[mid]) {
      // left half is sorted
      if (arr[start] <= target && target <= arr[mid]) {
        end = mid - 1;
      } else {
        start = mid + 1;
      }
    } else {
      // right half is sorted
      if (arr[mid] <= target && target <= arr[end]) {
        start = mid + 1;
      } else {
        end = mid - 1;
      }
    }
  }
  return -1;
}

console.log(rotatedArraySearch(arr, target)); // 4

#DataStructures #Algorithms #JavaScript #CodingLife #BinarySearch #ProblemSolving #LinkedInLearning
Anthropic's Claude Code internals leaked as a single 59 MB JavaScript source map, a debugging artifact that accidentally shipped on npm. I dug into it to understand how its memory system works, and it's very different from what most people (including me) assumed.

Memory is not storage; it is an index. The system loads a small memory.md file, but it only contains pointers. The actual knowledge lives outside and is fetched only when needed, which cuts most context bloat.

Then there's the three-layer design:
> Layer one is the index, always in context
> Layer two is topic files, loaded on demand
> Layer three is raw transcripts, never fully read, only searched

This is bandwidth-aware design: context engineering > memory management.

Writing to memory is strict, and nothing gets dumped blindly. Content goes to files first, then the index is updated. This keeps the system clean and prevents drift.

The wild part is "autodream": a background process that rewrites memory. It merges duplicates, removes contradictions, and turns vague notes into precise statements (memory is edited, not accumulated).

They also treat memory as untrusted. Every recall is a hint, not truth; the model verifies before using it.

The real insight: they avoid storing anything that can be derived again. Other memory systems are built like a dump yard. This one cleans and corrects itself.

#Anthropic #claude #claudecode #memory #ai
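The pointer-index pattern described above is easy to sketch. Below is a minimal, hypothetical illustration (none of this is Claude Code's actual code, and the topic names are invented): the always-loaded index holds only one-line pointers, and topic content is fetched only when a lookup matches.

```javascript
// Hypothetical sketch of a pointer-based memory index.
// The index stays small and always "in context"; topic bodies
// live elsewhere and are loaded only on demand.
const topicStore = new Map([ // stands in for on-disk topic files
  ["build-setup", "Use Node 20; run `npm ci` before tests."],
  ["api-style", "All handlers return {ok, data|error}."],
]);

const memoryIndex = [ // the small, always-loaded layer
  { topic: "build-setup", hint: "how the project is built" },
  { topic: "api-style", hint: "response shape conventions" },
];

function recall(query) {
  // Search only the lightweight index...
  const entry = memoryIndex.find((e) => e.hint.includes(query));
  if (!entry) return null;
  // ...and fetch the full topic body only when a pointer matches.
  return topicStore.get(entry.topic);
}

console.log(recall("built")); // loads only the matching topic body
```

The point of the split is that a recall that matches nothing costs almost no context: only the hints are ever resident.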
Recently, I was building an image extraction tool and ran into a challenge that many of us face with modern websites.

The Problem
Today's websites rely heavily on JavaScript, so a lot of content loads dynamically. Because of that, the usual scraping methods (simple HTTP requests plus HTML parsing) often miss the actual data.

What I Did
To handle this, I started using Selenium to simulate a real browser. This way, the page loads just like it would for a user, and I could access the actual content. But that was only part of the solution. Once I had the data, there was a lot of noise: icons, placeholders, UI elements, things I didn't really need. So I improved the filtering logic and focused on specific URL patterns to extract only useful, high-quality images.

The Result
• Cleaner and more relevant image data
• Better handling of dynamic content
• A more reliable extraction process

Would love to hear from you: How do you handle scraping from dynamic websites or dealing with protected media? #WebScraping #Automation #Python #DataEngineering
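The post doesn't share its filtering code, but the URL-pattern idea can be sketched like this (the patterns below are made-up examples, not the author's actual rules):

```javascript
// Hypothetical URL filters: keep likely content images, drop UI noise.
// The drop/keep patterns are illustrative assumptions, not the author's rules.
const dropPatterns = [/sprite/i, /icon/i, /placeholder/i, /logo/i, /\.svg$/i];
const keepPatterns = [/\/uploads\//i, /\d{3,4}x\d{3,4}/]; // e.g. CDN size hints

function filterImageUrls(urls) {
  return urls.filter(
    (url) =>
      !dropPatterns.some((p) => p.test(url)) && // reject known UI noise
      keepPatterns.some((p) => p.test(url))     // require a "real image" signal
  );
}

const sample = [
  "https://cdn.example.com/uploads/photo-1200x800.jpg",
  "https://example.com/assets/icon-cart.svg",
  "https://example.com/img/placeholder.png",
];
console.log(filterImageUrls(sample)); // keeps only the uploads photo
```

Requiring both a negative and a positive signal keeps precision high: an unknown URL that matches neither list is simply dropped.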
Turning a confusing problem into an optimized solution 🚀

Today, I worked on a string problem: Longest Repeating Character Replacement
Example: "AABABBA", k = 1 → Output = 4 ("AAAA" or "BBBB" after one replacement)

Instead of jumping directly to the optimized solution, I explored multiple approaches:

🔹 Brute Force Approach
Try all possible substrings
Count the frequency of characters
Check validity with: length - maxFreq <= k
Time Complexity: O(n³)

🔹 Optimized Approach (Sliding Window)
Use two pointers (left & right)
Expand the window by adding characters
Slide the window forward when it becomes invalid
Reuse previous computations instead of restarting
Time Complexity: O(n)

Here's my implementation in JavaScript:

function longestSubstringSame(s, k) {
  let map = {};
  let left = 0;
  let maxFreq = 0;
  let maxLen = 0;
  for (let right = 0; right < s.length; right++) {
    map[s[right]] = (map[s[right]] || 0) + 1;
    maxFreq = Math.max(maxFreq, map[s[right]]);
    // Window is invalid when the characters to replace exceed k
    if ((right - left + 1) - maxFreq > k) {
      map[s[left]]--;
      left++;
    }
    maxLen = Math.max(maxLen, right - left + 1);
  }
  return maxLen;
}

console.log(longestSubstringSame("AABABBA", 1)); // 4

💡 Key Takeaways:
Don't recompute everything from scratch
Sliding Window helps optimize repeated work
Understanding the logic behind conditions is crucial

Currently improving my skills in Data Structures & Algorithms and building a strong problem-solving mindset. Open to feedback and suggestions!

#DSA #JavaScript #ProblemSolving #SoftwareDevelopment #LearningInPublic #AccioJob
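For comparison, the brute-force approach described above could look like the following sketch (my reading of the steps listed, not the author's code), which makes the O(n³) cost concrete: O(n²) substrings, each recounted in O(n).

```javascript
// Brute force: check every substring, recounting frequencies from scratch.
// Two loops enumerate O(n²) substrings; the inner scan is O(n) → O(n³) total.
function longestBruteForce(s, k) {
  let maxLen = 0;
  for (let i = 0; i < s.length; i++) {
    for (let j = i; j < s.length; j++) {
      const freq = {};
      let maxFreq = 0;
      for (let x = i; x <= j; x++) { // recount for every substring
        freq[s[x]] = (freq[s[x]] || 0) + 1;
        maxFreq = Math.max(maxFreq, freq[s[x]]);
      }
      // Valid if at most k characters need replacing
      if (j - i + 1 - maxFreq <= k) {
        maxLen = Math.max(maxLen, j - i + 1);
      }
    }
  }
  return maxLen;
}

console.log(longestBruteForce("AABABBA", 1)); // 4
```

Seeing the redundant inner recount is exactly what motivates the sliding window: the optimized version updates one frequency entry per step instead of rebuilding the whole map.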
Every developer who has built a web scraper knows this pain: Your scraper works perfectly. Then the website moves one <div>. And everything breaks. Welcome to the endless loop of fixing selectors.

A new Python tool called Scrapling is getting a lot of attention for exactly this reason. Instead of relying only on CSS selectors, it "remembers" the element. Scrapling stores a fingerprint of the element: its tag, attributes, neighbors, and structure. So when a website layout changes, it can relocate the element automatically. Meaning your scraper doesn't instantly explode every time a site sneezes.

It also packs some surprisingly powerful features:
– Stealth fetchers to avoid bot detection
– Built-in proxy rotation
– Async spiders for large crawls
– Browser fetchers when JavaScript rendering is needed

Basically: BeautifulSoup simplicity, Scrapy-style crawling, Playwright-level dynamic fetching. All in one library. That's why developers are suddenly paying attention.

Because most scraping projects don't fail from scale. They fail from maintenance. The real cost of scraping isn't writing the scraper. It's fixing it every time the page changes. Tools like this shift scraping from "babysitting fragile scripts" to "running resilient data pipelines."

Curious how many data teams are still maintaining broken scrapers every week. How do you handle scraper maintenance today? #Python #WebScraping #DataEngineering #AI #OpenSource
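The fingerprint-and-relocate idea is worth sketching. The toy below illustrates only the concept, not Scrapling's actual implementation: save structural features of an element, then score candidates for similarity when the old selector stops matching. Elements are plain objects here; a real tool would walk the DOM, and the weights are arbitrary.

```javascript
// Toy element-fingerprint matcher (illustrative only; not Scrapling's code).
function fingerprint(el) {
  return {
    tag: el.tag,
    classes: [...(el.classes || [])].sort(),
    parentTag: el.parentTag,
  };
}

// Score a candidate element against a saved fingerprint.
function similarity(fp, el) {
  let score = 0;
  if (fp.tag === el.tag) score += 2; // same tag weighs most
  if (fp.parentTag === el.parentTag) score += 1;
  const shared = (el.classes || []).filter((c) => fp.classes.includes(c));
  score += shared.length; // each shared class adds confidence
  return score;
}

// Relocate: pick the candidate most similar to the saved fingerprint.
function relocate(fp, candidates) {
  return candidates.reduce((best, el) =>
    similarity(fp, el) > similarity(fp, best) ? el : best
  );
}

const saved = fingerprint({ tag: "span", classes: ["price"], parentTag: "div" });
const candidates = [
  { tag: "a", classes: ["nav"], parentTag: "li" },
  { tag: "span", classes: ["price", "sale"], parentTag: "section" }, // moved, but similar
];
console.log(relocate(saved, candidates)); // finds the moved price span
```

Even this crude scoring survives the failure mode in the post: the element kept its tag and one class, so it still wins despite the layout change.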
Back to core DSA algorithms — building from the ground up. 🚀 Today I implemented: 🔹 Selection Sort Algorithm Problem: Sort an array by repeatedly selecting the minimum element and placing it at the correct position. Not the fastest algorithm (O(n²)), but that’s not the point right now. The goal is to understand: ✔ How sorting works internally ✔ How loops and comparisons build logic ✔ Why better algorithms exist Skipping basics is the biggest mistake. I’m not doing that. #DSA #JavaScript #Sorting #Algorithms #CodingJourney #Consistency
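The post doesn't include the snippet, so here is a minimal sketch of selection sort as described: repeatedly select the minimum of the unsorted part and place it at the front.

```javascript
// Selection sort: O(n²) comparisons, O(1) extra space, sorts in place.
function selectionSort(arr) {
  const n = arr.length;
  for (let i = 0; i < n - 1; i++) {
    let minIdx = i;
    // Find the minimum of the unsorted suffix arr[i..n-1]
    for (let j = i + 1; j < n; j++) {
      if (arr[j] < arr[minIdx]) minIdx = j;
    }
    // Swap it into position i, growing the sorted prefix by one
    [arr[i], arr[minIdx]] = [arr[minIdx], arr[i]];
  }
  return arr;
}

console.log(selectionSort([64, 25, 12, 22, 11])); // [11, 12, 22, 25, 64]
```

Note the contrast with insertion sort: selection sort always does O(n²) comparisons even on sorted input, which is one reason better algorithms exist.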
Exploring Recursion with a Maze Problem (JavaScript)

I've been practicing recursion and recently worked on a classic problem: finding all paths to reach the bottom-right corner of a maze.

Problem Setup
We have a 2D grid (maze) where:
true → path is allowed
false → blocked cell

let maze = [
  [true, true, true],
  [true, true, false],
  [true, true, true]
];

Goal
Find all possible paths from the top-left (0,0) to the bottom-right (n-1, m-1) using:
D → Move Down
R → Move Right

💡 Approach (Recursion)
Start from (0,0). At each step:
If the cell is blocked → stop
If the destination is reached → print the path
Otherwise, try Move Down (D), then Move Right (R)

const pathtoReachCorner = (maze, processed, r, c) => {
  if (!maze[r][c]) return; // blocked cell: abandon this path
  if (r === maze.length - 1 && c === maze[0].length - 1) {
    console.log(processed); // reached bottom-right: print the path
    return;
  }
  if (r < maze.length - 1) {
    pathtoReachCorner(maze, processed + "D", r + 1, c);
  }
  if (c < maze[0].length - 1) {
    pathtoReachCorner(maze, processed + "R", r, c + 1);
  }
};

pathtoReachCorner(maze, "", 0, 0);

Key Learnings
Recursion helps break complex path problems into smaller decisions
Base conditions are crucial to avoid infinite calls

📈 Output Example
DDRR
DRDR
RDDR

Thanks to Kunal Kushwaha. Enjoying this journey!
Excited to share my latest project — RecurseViz! A web-based recursion and backtracking visualizer that helps CS students understand algorithms step by step through interactive visualization. The Problem I solved: Most students struggle with recursion not because they don't understand the concept, but because they can't SEE what's happening inside the computer. RecurseViz makes the invisible visible. ✨ Key Features: → 18+ built-in algorithms with perfect visualization → Paste ANY C++ code and watch it execute step by step → Grid view for N-Queens ♛ and Rat in Maze 🐀 → AI-powered code debugger that finds bugs with fixes → Time & Space complexity charts for every algorithm → Line-by-line code highlighting showing exact execution → Call stack, local variables shown at every step Tech Stack: → Frontend: React + Vite + SVG → Backend: Node.js (Vercel Serverless Functions) → AI: Groq API (LLaMA 3.3 70B model) → Deployment: Vercel → Version Control: GitHub 🔗 Live site: https://lnkd.in/gJZef_g3 💻 GitHub: https://lnkd.in/gUF_iCjY What makes this unique: You paste your recursive function and instantly see the complete execution tree. Designed and developed this as part of my second-year C++ lab project in collaboration with my teammates Mohammad Ali Zia, Kaustubh Chaturvedi and Ausaaf Ahmad Would love your feedback! 🙌 #WebDevelopment #React #JavaScript #CPlusPlus #Algorithms #DataStructures #RecurseViz #MachineLearning #AI #OpenSource #StudentProject #FullStack #NodeJS #Vercel #Groq
I just spent months debugging my own AI-generated code. The result? A JavaScript Visualizer that finally shows what's actually happening behind the scenes.

Quick test: without running this, what prints first?

Promise.resolve().then(() => console.log(1));
setTimeout(() => console.log(2), 0);
queueMicrotask(() => console.log(3));
console.log(4);

Most beginners memorize the answer. Almost nobody understands why. The "why" lives in the call stack, the microtask queue, the macrotask queue, and the event loop: concepts that are practically invisible in a normal debugger. So I built a tool that makes them visible.

▶️ Watch the call stack push and pop, line by line
▶️ See setTimeout park in Web APIs, then graduate to the task queue
▶️ Watch promise callbacks line up as microtasks
▶️ See the event loop pick the next job in real time
▶️ Live memory graph: stack frames on the left, heap objects on the right
▶️ Every step annotated with the actual ECMAScript spec reference

Built almost entirely with AI. It took months. The hard part wasn't the UI; it was making the simulated runtime match the real one. I had to fix async/await microtask ordering, super.method() resolution in class hierarchies, TDZ when parameters shadow outer consts, try/finally semantics on early return, await rejection inside try/catch, and 40+ other edge cases. After 48 differential tests against real Node.js, the engine finally produces identical output.

I'm planning to deploy it soon. If you teach JavaScript, or you're still learning it, this is going to save you (or your students) weeks of confusion.

#JavaScript #LearnJavaScript #WebDevelopment #BuildInPublic #AI #EventLoop #AsyncJavaScript #FrontendDev #100DaysOfCode #CodingJourney #SoftwareEngineering #Programming #DeveloperCommunity #JSVisualizer #TechTwitter
Day 07: Cracking the "Non-Divisible Subset" Logic 🧩

Today was a true test of algorithmic thinking. I tackled a problem that looks like a standard array search but is actually a brilliant exercise in number theory and remainder math.

The Challenge: Given a set of numbers, find the maximum size of a subset where the sum of any two numbers is not divisible by K.

The Strategy (Remainder Frequency): Instead of checking every possible pair (which would be very slow), I focused on remainders. If two numbers sum to a multiple of K, their remainders r1 and r2 must satisfy r1 + r2 = K (or both be 0).

const s = [19, 10, 12, 10, 24, 25, 22];
const k = 4;

function nonDivisibleSubset(k, s) {
  let freq = new Array(k).fill(0);
  // count how many numbers fall into each remainder class
  for (let num of s) {
    freq[num % k]++;
  }
  let count = 0;
  // remainder 0: at most one such number can be kept
  if (freq[0] > 0) count++;
  // for each complementary pair (i, k - i), keep the larger group
  for (let i = 1; i <= Math.floor(k / 2); i++) {
    if (i === k - i) {
      // when k is even, remainder k/2 pairs with itself: keep at most one
      if (freq[i] > 0) count++;
    } else {
      count += Math.max(freq[i], freq[k - i]);
    }
  }
  return count;
}

console.log(nonDivisibleSubset(k, s)); // 3

Key Takeaway: When a problem involves divisibility, don't look at the numbers, look at the remainders. It turns a complex pairing problem into a simple counting one!

One full week of coding done. The momentum is real! 🚀

#JavaScript #Algorithms #NumberTheory #100DaysOfCode #CodingChallenge #ProblemSolving #SoftwareEngineering