😅 I once changed a value in a “copied” object… and somehow the original data changed too 💥
👉 That’s when I realized I didn’t understand shallow vs deep copy properly.
🚀 Let’s break it down (this will save you from real bugs)

🧠 Why this matters
In JavaScript, objects and arrays are reference types. Copy them incorrectly and you can accidentally modify the original data 😬

📦 1. Shallow Copy
A shallow copy only copies top-level values.
👉 Nested objects are still shared (same reference)
So:
- Changing top-level → ✅ safe
- Changing nested → 💥 affects the original too

⚠️ The common mistake
You think you created a new object… but deep inside, it’s still pointing to the same memory 😵

🔁 How to create a shallow copy
• Spread → {...obj}
• Object.assign
• Array methods → slice, Array.from

🔐 2. Deep Copy
A deep copy creates a fully independent clone.
👉 Every level is copied
👉 No shared references
So:
- Changing nested data → ✅ completely safe

🔁 How to create a deep copy
👉 structuredClone() (recommended)
- Handles most data types
- Modern & reliable
👉 JSON.parse(JSON.stringify())
- Quick but limited: it drops functions and undefined, and turns Dates into strings

💡 Real Dev Insight
Shallow copy is fast ⚡ Deep copy is safe 🛡️
👉 Use shallow for simple, flat data
👉 Use deep for nested structures

🚀 Final Thought
Most bugs don’t come from logic…
👉 They come from unexpected mutations.
Understand copying → write safer code 💪

#JavaScript #FrontendDevelopment #WebDevelopment #CodingTips #ShallowCopy #DeepCopy #LearnJavaScript #BuildInPublic #100DaysOfCode #LearnInPublic
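The trap above can be reproduced in a few lines. A minimal sketch (plain JS; `structuredClone` needs Node 17+ or a modern browser):

```javascript
const user = { name: "Ada", settings: { theme: "dark" } };

// Shallow copy: the top level is new, but `settings` is the SAME object.
const shallow = { ...user };
shallow.name = "Lin";              // safe: top-level field
shallow.settings.theme = "light";  // mutates the original too!
console.log(user.settings.theme);  // "light"

// Deep copy: every level is cloned, nothing is shared.
const deep = structuredClone(user);
deep.settings.theme = "solarized";
console.log(user.settings.theme);  // still "light" — original untouched
```

Note that `user.name` is still "Ada": the top-level write through the shallow copy was safe; only the nested write leaked back.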
Shallow vs Deep Copy in JavaScript
The Try-Catch Time Trap: Why Do Async Errors Escape?

Let’s look at code that reads and parses an invalid JSON file.

Sync code:
try {
  const data = fs.readFileSync("invalid.json", "utf-8");
  const jsonData = JSON.parse(data);
} catch (err) {
  console.log("error caught:", err.message); // catches the parsing error
}
All good.

Async code:
try {
  fs.readFile("invalid.json", "utf-8", (err, data) => {
    const jsonData = JSON.parse(data);
  });
} catch (err) {
  console.log("error caught:", err.message); // does NOT even run
}
The catch block won’t execute. Now the question is… 👉 Why?

Here’s how I started thinking about it:
When JS hits an error → it stops execution → looks for a catch block in the current call stack → if none is found, the error bubbles up the stack.

In sync code:
👉 Everything runs in one continuous stack → the error happens → the catch block is right there → so it works.

But async changes things. When this line runs:
fs.readFile(..., callback)
👉 JS does NOT execute the callback immediately.
Instead:
→ it registers the callback
→ hands it to the event loop
→ and moves on

Now the important part 👇
👉 The current call stack finishes execution, which means the try-catch is gone.

Later, when the file read completes:
→ the event loop pushes the callback onto the call stack
→ the callback runs
But now:
👉 this is a new call stack
And the old try-catch?
👉 Already gone.

So when the error happens inside the callback:
❌ there is no catch block anymore.

That’s when it clicked for me:
👉 try-catch works only within the same execution stack.
Not across time. Not across async boundaries.

#JavaScript #Node #Programming #ErrorHandling #Interview #Eventloop #Callstack
Claude doesn't see your web page.

I read all 512,000 lines of the Claude Code source that leaked yesterday. Most of the coverage focused on the Tamagotchi pet and the undercover mode. I was looking for something else: how does Claude actually handle citations?

The biggest thing I found: when Claude fetches your URL, it doesn't pass your content to the main model. It sends it through Haiku (Anthropic's small model) first, which summarizes it. Only the summary reaches Claude. Your page has to survive lossy compression by a less capable model before it can influence an answer.

The exception: roughly 96 domains, all developer docs, bypass the summarization layer entirely and get their full content passed through raw. react.dev, docs.python.org, kubernetes.io... there's a hardcoded whitelist in the source code.

There's also a silent domain blocklist, a hardcoded cap of 8 searches per query, a 125-character quote limit, and a mandatory citation instruction appended to every single search result.

I wrote up all 9 findings with the actual code excerpts and file paths. https://lnkd.in/e3q9ZQv6
Every JS developer writes data every day. But most can't explain Primitive vs Non-Primitive clearly. 🧵 Here's the complete breakdown, with examples 👇

🔵 PRIMITIVE DATA TYPES
🔹 Stored by VALUE · Immutable · Lives on the Stack
1. String → 'hello'
2. Number → 42, 3.14
3. Boolean → true / false
4. Undefined → let x; (no value assigned)
5. Null → let x = null
6. BigInt → 9007199254740991n
7. Symbol → Symbol('id') → always unique

🟡 NON-PRIMITIVE DATA TYPES
🔹 Stored by REFERENCE · Mutable · Lives on the Heap
1. Object → { name: 'Alice', age: 25 }
2. Array → [1, 2, 3, 'hello']
3. Function → function greet() { }

📊 QUICK COMPARISON
🔹 Primitive → stack · by value · immutable · faster access
🔹 Non-Primitive → heap · by reference · mutable · holds complex data

💡 Interview tip: when asked about bugs with objects/arrays, 9 out of 10 times it's a reference issue. Use the spread operator or structuredClone() to avoid mutating the original.

✅ Save this. You'll need it.
♻️ Follow for more JS tips 👇

#JavaScript #WebDevelopment #JS #Frontend #Programming #CodingInterview
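The value-vs-reference distinction shows up directly in assignment behavior. A short sketch:

```javascript
// Primitives copy the VALUE: two independent slots.
let a = 10;
let b = a;
b = 20;
console.log(a); // 10 — unaffected

// Objects copy the REFERENCE: two names, one heap object.
const original = { count: 1 };
const alias = original;
alias.count = 99;
console.log(original.count); // 99 — both names point at the same object

// A shallow copy breaks the top-level link.
const copy = { ...original };
copy.count = 5;
console.log(original.count); // still 99
```

This is the "9 out of 10 times it's a reference issue" bug in miniature: `alias` looked like a copy but was only a second name.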
Ever changed a variable in JavaScript only to realize you accidentally broke the original data too? 🤦♂️ That’s the classic Shallow vs. Deep Copy trap. Here is the "too long; didn't read" version:

1. Shallow Copy (The Surface Level)
When you use the spread operator [...arr] or {...obj}, you’re only copying the top layer.
The catch: if there are objects or arrays inside, they are still linked to the original.
Use it for: simple, flat data.

2. Deep Copy (The Full Clone)
This creates a 100% independent copy of everything, no matter how deep the nesting goes.
The easy way: const copy = structuredClone(original);
The old way: JSON.parse(JSON.stringify(obj)); (works, but it silently turns Dates into strings and drops functions).

The Rule of Thumb: if your object has "layers" (objects inside objects), go with a deep copy. If it’s just a flat list or object, a shallow copy is faster and cleaner.

Keep your data immutable and your hair un-pulled. ✌️

#Javascript #WebDev #Coding #ProgrammingTips
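The Date/function caveat of the "old way" is easy to demonstrate (Node 17+ for `structuredClone`):

```javascript
const record = {
  createdAt: new Date("2024-01-01"),
  greet() { return "hi"; },
};

// JSON round-trip: the Date becomes a plain string, the method vanishes.
const jsonCopy = JSON.parse(JSON.stringify(record));
console.log(jsonCopy.createdAt instanceof Date); // false — it's a string now
console.log(typeof jsonCopy.greet);              // "undefined"

// structuredClone keeps the Date as a real Date. Functions still aren't
// cloneable (it throws on them), so we clone only the data fields.
const { greet, ...data } = record;
const clone = structuredClone(data);
console.log(clone.createdAt instanceof Date);    // true
```

So even `structuredClone` isn't a silver bullet for objects carrying methods; for plain data, though, it's the safest default.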
🔍 Just shipped a full-stack data extraction pipeline — here's what I learned.

A client needed contact data for 400+ veterinarians across Vermont. Simple enough, right? Wrong. Here's what I was up against:

🧱 Target 1: vtvets.org — an Angular SPA running on MemberClicks CMS. Static requests returned nothing but a loading shell. The real data? Hidden behind an internal service-router endpoint that can't be reached outside a live browser session.

🧱 Target 2: Google Maps — infinite scroll, 44 city queries, email extraction from 180+ individual clinic websites.

After 3 iterations of failed approaches (requests → JS inject → cross-origin CORS block), the breakthrough:
💡 Instead of calling the API directly, let the Angular app call it — and intercept the response. Using Playwright's page.on("response") handler, I captured all 236 records across 24 pages without a single CORS error. The browser made the requests. I just listened.

Final pipeline:
→ VVMA Directory: 236 records via API intercept
→ Google Maps: ~300 records via Playwright + stealth
→ Smart merge with phone-based dedup: 293 unique records
→ Output: Excel + CSV, clean and formatted

Key stats:
📞 91% of records with a phone number
📧 34% with an email (public directories rarely expose these)
🌐 61% with a website
⏱️ Total runtime: ~3 hours

The biggest lesson? Every scraping problem is unique. The stack that works is the one you discover after understanding WHY the obvious approach fails.

Full pipeline on GitHub 👇
https://lnkd.in/gGhSyqjb

#WebScraping #DataEngineering #Python #Playwright #LeadGeneration #APIReverseEngineering
Stop Googling "JSON Formatter" and hoping they aren't logging your data.

Most online dev tools are bloated, ad-ridden, or worst of all, send your sensitive inputs to a backend server. I got tired of it, so I built DevLoft: a collection of 19 essential utilities built purely with Vanilla JS.

No React. No Webpack. No Node modules. Just index.html, style.css, and a bunch of scripts.

Why did I build it this way?
- Zero Latency: it loads faster than a framework can even initialize.
- True Privacy: since there's no backend, it is physically impossible for your data to leave your machine.
- Low Barrier to Entry: want to add a tool? You don't need to learn a framework. Just write a function and open a PR.

The Toolkit includes:
Data Science: Z-Score outliers, Haversine distance, and log parsers.
Security: PII redaction and XSS sanitizers.
AI/LLM: Recursive text chunkers (RAG prep) and token cost estimators.
Classic Dev: Regex testers, SQL schema generators, and text diffs.

This is an open-source "sandbox" for all of us. If you've ever written a quick script to solve a repetitive task, don't let it die in your Gists. Add it to DevLoft and let the community use it.

Explore the tools (link in comments) and feel free to contribute on GitHub. I'm looking for contributors to help optimize the CSS and add more niche data-science utilities.

What's the one script you're currently running locally that should be a UI tool?

#OpenSource #VanillaJS #BuildInPublic #DataScience #WebDev #SoftwareEngineering
“set (𝙧𝙚𝙫𝙖𝙡𝙞𝙙𝙖𝙩𝙚: 𝟲𝟬) and move on”

This works in Next.js — until it doesn’t. Now you’re:
• serving stale data for up to 60s
• re-fetching even when nothing changed
• adding load for no reason

This is where caching stops being an optimization — and becomes about 𝘄𝗵𝗼 𝗼𝘄𝗻𝘀 𝗳𝗿𝗲𝘀𝗵𝗻𝗲𝘀𝘀.

At a high level:
𝗦𝘁𝗮𝘁𝗶𝗰 𝗿𝗲𝗻𝗱𝗲𝗿𝗶𝗻𝗴 → speed
𝗗𝘆𝗻𝗮𝗺𝗶𝗰 𝗿𝗲𝗻𝗱𝗲𝗿𝗶𝗻𝗴 → freshness
𝗖𝗮𝗰𝗵𝗶𝗻𝗴 + 𝗿𝗲𝘃𝗮𝗹𝗶𝗱𝗮𝘁𝗶𝗼𝗻 → where we balance the two

But this abstraction starts to break down at scale.

👉 𝗧𝗶𝗺𝗲-𝗯𝗮𝘀𝗲𝗱 𝗿𝗲𝘃𝗮𝗹𝗶𝗱𝗮𝘁𝗶𝗼𝗻 (𝗿𝗲𝘃𝗮𝗹𝗶𝗱𝗮𝘁𝗲) periodically refetches data. This works well when data becomes stale in predictable intervals. Think dashboards, blogs, or analytics snapshots.

👉 𝗢𝗻-𝗱𝗲𝗺𝗮𝗻𝗱 𝗿𝗲𝘃𝗮𝗹𝗶𝗱𝗮𝘁𝗶𝗼𝗻 (𝗿𝗲𝘃𝗮𝗹𝗶𝗱𝗮𝘁𝗲𝗧𝗮𝗴, 𝗿𝗲𝘃𝗮𝗹𝗶𝗱𝗮𝘁𝗲𝗣𝗮𝘁𝗵) flips the model: don't blindly revalidate — react to change. In one of our systems, moving from time-based to event-driven invalidation:
• reduced redundant fetches significantly
• made cache behavior predictable under load
This becomes the default once writes are frequent.

👉 𝗙𝘂𝗹𝗹 𝗥𝗼𝘂𝘁𝗲 𝗖𝗮𝗰𝗵𝗲 𝘃𝘀 𝗗𝗮𝘁𝗮 𝗖𝗮𝗰𝗵𝗲
• Full Route Cache → caches the rendered output
• Data Cache → caches the underlying fetch calls
That separation is powerful: don't rebuild the entire page — refresh just the data.

🧠 𝗠𝘆 𝘁𝗮𝗸𝗲𝗮𝘄𝗮𝘆
𝗦𝘁𝗼𝗽 𝘁𝗵𝗶𝗻𝗸𝗶𝗻𝗴 𝗶𝗻 𝘁𝗶𝗺𝗲 → 𝘀𝘁𝗮𝗿𝘁 𝘁𝗵𝗶𝗻𝗸𝗶𝗻𝗴 𝗶𝗻 𝗲𝘃𝗲𝗻𝘁𝘀
Instead of → “𝘳𝘦𝘷𝘢𝘭𝘪𝘥𝘢𝘵𝘦 𝘦𝘷𝘦𝘳𝘺 𝘟 𝘴𝘦𝘤𝘰𝘯𝘥𝘴”
Think → “𝘸𝘩𝘢𝘵 𝘦𝘷𝘦𝘯𝘵 𝘴𝘩𝘰𝘶𝘭𝘥 𝘮𝘢𝘬𝘦 𝘵𝘩𝘪𝘴 𝘥𝘢𝘵𝘢 𝘴𝘵𝘢𝘭𝘦?”

❓ Interested to hear how this plays out in write-heavy or multi-region setups.

#NextJS #Caching #ReactJS #WebDevelopment #FullStack #JavaScript #SoftwareEngineering #SystemDesign #FrontendDevelopment
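A framework-free sketch of the idea behind tag-based, event-driven invalidation. The names here (`setCached`, `invalidateTag`) are invented for illustration — this shows the concept, not the actual Next.js API:

```javascript
// Each cache entry carries tags; a write event invalidates by tag
// instead of waiting for a timer to expire.
const cache = new Map(); // key -> { value, tags }

function setCached(key, value, tags = []) {
  cache.set(key, { value, tags: new Set(tags) });
}

function getCached(key) {
  return cache.get(key)?.value;
}

// The event-driven part: a write to the products table fires this, and
// every entry tagged "products" goes stale immediately — no 60s window.
function invalidateTag(tag) {
  for (const [key, entry] of cache) {
    if (entry.tags.has(tag)) cache.delete(key);
  }
}

setCached("/shop", "<rendered page>", ["products"]);
setCached("/about", "<rendered page>", ["marketing"]);
invalidateTag("products");
console.log(getCached("/shop"));  // undefined — refetched on next request
console.log(getCached("/about")); // still cached, untouched by the event
```

The design point: staleness is decided by the writer (who knows when data changed), not by a timer guessed at by the reader.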
🚀 Most developers ignore this… until their database slows to a crawl.

Indexing isn’t just an optimization — it’s the difference between milliseconds and seconds 👇

Indexing minimizes disk access and helps you locate data faster using a structured lookup system.

Primary Index ≠ Secondary Index
Primary Index → works on sorted data, defines how records are physically stored
Secondary Index → works on unsorted data, provides an additional fast lookup path

When building real systems, you don’t just rely on storing data efficiently — you rely on indexing to handle performance at scale, especially for heavy SELECT queries and filtering ⚡

Think about it:
Without indexing → full table scan (slow 🐢)
With indexing → direct access using pointers (fast ⚡)

Dense Index ≠ Sparse Index
Dense Index → entry for every record (fast lookup, more space)
Sparse Index → entry for some records (less space, slightly slower lookup)

This small distinction changes how you design systems — because every index you add improves read performance but slows writes (INSERT, UPDATE, DELETE).

Good engineers don’t add indexes blindly. They balance read vs write trade-offs based on real use cases.

Building systems > memorizing concepts.

What’s one concept developers often misunderstand?

#fullstackdeveloper #softwareengineering #webdevelopment #javascript #reactjs #backend #buildinpublic #nodejs #nextjs #typescript
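A toy illustration of the scan-vs-index trade-off (this is a sketch in plain JS, not how a real DBMS stores indexes — a dense secondary index modeled as a Map from column value to row positions):

```javascript
const rows = [
  { id: 1, city: "Oslo" },
  { id: 2, city: "Lima" },
  { id: 3, city: "Oslo" },
];

// Full table scan: O(n) — touches every row on every query.
const scan = (city) => rows.filter(r => r.city === city);

// Build a dense secondary index once: one entry per record.
// This is the write-side cost — every INSERT must also update the Map.
const cityIndex = new Map();
rows.forEach((r, i) => {
  if (!cityIndex.has(r.city)) cityIndex.set(r.city, []);
  cityIndex.get(r.city).push(i);
});

// Indexed lookup: O(1) hash probe to find positions, then direct access.
const lookup = (city) => (cityIndex.get(city) ?? []).map(i => rows[i]);

console.log(lookup("Oslo").map(r => r.id)); // [1, 3] — same result, no scan
```

Both paths return identical results; the index just trades extra space and write work for cheap reads, which is exactly the balance described above.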
𝗠𝗮𝘀𝘁𝗲𝗿𝗶𝗻𝗴 𝗩𝗮𝗿𝗶𝗮𝗯𝗹𝗲𝘀 𝗮𝗻𝗱 𝗗𝗮𝘁𝗮 𝗧𝘆𝗽𝗲𝘀 𝗶𝗻 𝗝𝗮𝘃𝗮𝗦𝗰𝗿𝗶𝗽𝘁

Variables and data types are key to storing and manipulating information in JavaScript.

You use variables to store data. In JavaScript, you declare variables with let, const, or var. For example:
let age = 30
const name = 'Alice'
var isAdmin = false

JavaScript has seven primitive data types:
- string
- number
- boolean
- null
- undefined
- bigint
- symbol

Each primitive stores a single simple value. JavaScript also has complex data types: objects and arrays. Objects store key-value pairs. Arrays store ordered lists. For example:
const person = { name: 'Bob', age: 25 }
const numbers = [1, 2, 3, 4, 5]

JavaScript is dynamically typed: variables can change types at runtime. Type coercion implicitly converts one data type to another. For example:
const result = 10 + '5' // Result: '105'

To write good code, follow best practices: use const for values that do not change, prefer let over var, and be mindful of type coercion.

Mastering variables and data types helps you write cleaner, safer code.

Source: https://lnkd.in/gpqja7bg
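The coercion example generalizes: `+` concatenates when either operand is a string, while `-`, `*`, and `/` coerce strings to numbers. A quick sketch:

```javascript
// `+` prefers string concatenation when either side is a string...
console.log(10 + "5");         // "105"

// ...but `-` and `*` coerce strings to numbers instead.
console.log(10 - "5");         // 5
console.log("10" * "2");       // 20

// Explicit conversion avoids surprises entirely.
console.log(10 + Number("5")); // 15

// Note: `const` prevents reassignment, not mutation of object contents.
const user = { name: "Alice" };
user.name = "Bob";             // allowed — the binding is constant, not the object
```

This asymmetry between `+` and the other arithmetic operators is the single most common coercion gotcha in practice.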