🧠 Shallow Copy vs Deep Copy in JavaScript

Copying data in JavaScript is not always as simple as it looks, especially when working with objects and arrays.

🔹 Shallow Copy
A shallow copy creates a new top-level object, but nested objects still share references with the original.
📌 If you change nested data in the copy, it also affects the original object.

Common ways to create a shallow copy:
• Object.assign()
• Spread operator { ...obj }
• Array.prototype.slice()

🔹 Deep Copy
A deep copy creates a completely independent copy, including all nested objects.
📌 Changes in the copied object do NOT affect the original one.

Common ways to create a deep copy:
• structuredClone()
• JSON.parse(JSON.stringify(obj)) (with limitations: it drops functions and undefined, and turns Dates into strings)
• Utility libraries like lodash.cloneDeep()

⚠️ Why it matters
• State management in React
• Avoiding unexpected bugs
• Writing predictable & maintainable code

✨ Tip: Always understand your data structure before copying it.

#JavaScript #FrontendDevelopment #ReactJS #ShallowCopy #DeepCopy #WebDevelopment #CodingConcepts #CleanCode
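To make the difference concrete, here is a minimal sketch (the object shapes are illustrative; structuredClone() is available in modern browsers and Node 17+):

```javascript
// A shallow copy shares nested objects with the original
const original = { name: "Sam", address: { city: "Oslo" } };
const shallow = { ...original };

shallow.address.city = "Bergen";
console.log(original.address.city); // "Bergen", the nested object is shared

// A deep copy is fully independent
const source = { name: "Sam", address: { city: "Oslo" } };
const deep = structuredClone(source);

deep.address.city = "Bergen";
console.log(source.address.city); // "Oslo", the original is untouched
```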
🚀 First-Class Functions in JavaScript

In JavaScript, functions are first-class values. That means the language allows functions to be treated the same way as any other value. Practically, this gives us three abilities:
• assign a function to a variable
• pass a function as an argument
• return a function from another function

Examples:

1. When assigned as a value:

const sayHi = function () {
  console.log("Hi");
};

=> This is why function expressions, anonymous functions, and arrow functions are not special cases; they're just values.

2. When passed as an argument:

function executeFunc(callback) {
  callback();
}
executeFunc(sayHi);

=> This is the foundation for callbacks, event handlers, and async code. The function is just being passed like data.

3. When returned from another function:

function outer() {
  return function inner() {
    console.log("Inner function executed");
  };
}
const fn = outer();
fn();

=> This behavior enables closures and higher-order functions (map, filter, reduce).

Most JavaScript patterns work because functions can be passed around like data. First-class functions are the rule that makes that possible.

#Javascript #Functions #WebDevelopment #Frontend
🤔 Ever wondered why editing an object inside a function affects the original, but editing a number doesn't? JavaScript makes this feel confusing.

🧠 JavaScript interview question: Is JavaScript pass by value or pass by reference?

✅ Short answer
• JavaScript always passes arguments by value
• For primitives, the value is the actual data
• For objects, the value being copied is a reference (a pointer)
• Mutating the object changes what both references point to
• Reassigning the parameter does not affect the original variable

🔍 A bit more detail
• Primitives (string, number, boolean, null, undefined, symbol, bigint) are copied as direct values
• Objects (arrays, functions, plain objects, maps, sets) are not copied as a whole
• The function receives a copy of the reference, not a deep copy of the object

💻 Example

// 1) Primitive: no change outside
function inc(n) {
  n = n + 1;
}
let a = 5;
inc(a);
console.log(a); // 5

// 2) Object mutation: change is visible outside
function setName(obj) {
  obj.name = "Alex";
}
const user = { name: "Sam" };
setName(user);
console.log(user.name); // "Alex"

// 3) Reassigning param: does NOT affect outside
function replace(obj) {
  obj = { name: "New" };
}
replace(user);
console.log(user.name); // "Alex"

⚠️ Common misconception
• "Arrays are passed by reference"
• Not exactly. The reference is passed by value.

🎯 Takeaway
• Primitives: you copy the value
• Objects: you copy the pointer to the same object
• Want to avoid accidental mutation? Clone before changing

#javascript #frontend #webdevelopment #react #typescript #codinginterview #learninpublic #programming
📌 Understanding JSON.parse() in JavaScript

In JavaScript, data often travels between systems in JSON (JavaScript Object Notation) format, especially when working with APIs. To use that data inside your application, JavaScript provides the JSON.parse() method.

👉 What does JSON.parse() do?
🔹 JSON.parse() converts a JSON string into a JavaScript object.

👉 In simple terms:
🔹 It helps JavaScript understand and work with JSON data.

👉 Example Explanation:
🔹 The API response is a string
🔹 JSON.parse() converts it into an object
🔹 Now we can access properties using dot notation

👉 Why is JSON.parse() important?
🌐 Used when handling API responses
💾 Converts data from localStorage / sessionStorage
🔄 Helps transform string data into usable objects
🚀 Essential for backend–frontend communication

💠 Common Error to Watch Out For
🔹 JSON.parse() only works on valid JSON.
🔹 Invalid JSON will throw a SyntaxError.

JSON.parse("{name: 'Hari'}"); // ❌ Error: keys must be double-quoted

✔ Correct JSON format:

JSON.parse('{"name":"Hari"}'); // ✅

JSON.parse() is a fundamental JavaScript method that plays a key role in real-world applications, especially while working with APIs and stored data.

#JavaScript #WebDevelopment #Frontend #JSON #CodingTips #LearnJavaScript
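The steps above can be sketched as a short runnable example (the response string and property names are illustrative, not from a real API):

```javascript
// A JSON string, as you might receive it from an API or localStorage
const response = '{"name":"Hari","skills":["JavaScript","React"]}';

// Convert the string into a JavaScript object
const user = JSON.parse(response);

// Access properties with dot notation
console.log(user.name);      // "Hari"
console.log(user.skills[0]); // "JavaScript"

// Invalid JSON throws a SyntaxError, so wrap untrusted input in try/catch
let parseFailed = false;
try {
  JSON.parse("{name: 'Hari'}"); // unquoted key, single quotes: invalid JSON
} catch (err) {
  parseFailed = err instanceof SyntaxError;
}
console.log(parseFailed); // true
```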
Profiling JavaScript Code Effectively

You need to profile your JavaScript code to improve performance. JavaScript provides tools like console.time(), console.timeEnd(), and performance.now() to help you do this.

- console.time() and console.timeEnd() measure how long a labeled block of code takes to run.
- performance.now() provides high-resolution timestamps to measure time spans.

Here's how you can use them:

```javascript
console.time('task');
// Your code here
console.timeEnd('task');
```

Or with performance.now():

```javascript
const start = performance.now();
// Your code here
const end = performance.now();
console.log(`Time taken: ${end - start} milliseconds`);
```

These tools are useful for:
- Measuring load times for dynamically loaded libraries
- Optimizing animation frame rates
- Profiling asynchronous operations

When profiling asynchronous code, be aware of potential race conditions. You can create a utility function to simplify profiling:

```javascript
const profiler = async (fn) => {
  const start = performance.now();
  await fn();
  const end = performance.now();
  return end - start;
};
```

Remember to use these tools alongside logging libraries to capture trace logs and identify performance bottlenecks.

Source: https://lnkd.in/dXY6x9F9
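A quick usage sketch for a profiler helper like the one above (the sleep task is illustrative; performance.now() is a global in modern browsers and recent Node versions):

```javascript
// Times an async function and returns the elapsed milliseconds
const profiler = async (fn) => {
  const start = performance.now();
  await fn();
  return performance.now() - start;
};

// Illustrative async task: resolves after roughly 50 ms
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

profiler(() => sleep(50)).then((elapsed) => {
  console.log(`Took ~${elapsed.toFixed(1)} ms`);
});
```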
🧠 Deep dive: How fetch() works in JavaScript (and why it replaced XMLHttpRequest)

Most of us use fetch() daily, but understanding how it actually works reveals why it was such a big upgrade over XMLHttpRequest.

🔙 The problem with XMLHttpRequest
• Imperative, state-driven API (readyState, event handlers)
• Tight coupling between request lifecycle and JS execution
• Difficult to reason about and scale
• Poor composability for async flows

🔜 What fetch() does differently
• fetch() is a Web API that immediately returns a Promise
• The network request is offloaded to the browser environment
• JavaScript execution continues without blocking the call stack

🧩 What happens under the hood
• The returned Promise starts in a pending state
• .then() / .catch() callbacks are registered internally on the Promise
• Once the network request completes, the Promise is fulfilled or rejected, and its reaction callbacks are queued in the Microtask Queue
• The Event Loop schedules these microtasks after the call stack is empty, but before macrotasks

⚠️ Subtle but important detail
• fetch() only rejects on network failures
• HTTP errors like 404 or 500 still resolve the Promise
• This forces explicit error handling via response.ok

✨ Why this design matters
• Predictable async execution
• Better separation of concerns
• Easier composition with Promise.all, async/await
• Cleaner mental model for performance and debugging

Understanding async JavaScript at this level completely changes how you reason about APIs, performance, and event loop behavior. Learning internals > memorizing syntax.

#JavaScript #AsyncJavaScript #FetchAPI #EventLoop #Promises #WebPerformance #FrontendEngineering #LearningInPublic
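The "only rejects on network failure" detail is worth seeing in code. A common wrapper pattern looks like this (the URL is a placeholder, not a real endpoint):

```javascript
// fetch() resolves even for 404/500 responses; check response.ok explicitly
async function getJSON(url) {
  const response = await fetch(url);
  if (!response.ok) {
    // HTTP errors must be turned into exceptions by hand
    throw new Error(`HTTP ${response.status}`);
  }
  return response.json();
}

// Usage (placeholder URL):
// getJSON("https://api.example.com/users/1")
//   .then((user) => console.log(user))
//   .catch((err) => console.error(err.message)); // network failure OR HTTP error
```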
🤔 Why Arrays and Objects Behave Differently in JavaScript

In JavaScript, arrays and objects may look similar, but they're designed for very different purposes. Understanding this clears up many confusing bugs.

🔹 Core Difference
Arrays → Ordered collections (indexed by numbers)
Objects → Collections of key–value pairs (accessed by name, not position)

🔹 Example

const arr = ["React", "Angular", "Vue"];

const obj = {
  framework1: "React",
  framework2: "Angular",
  framework3: "Vue"
};

🔹 Accessing Data

arr[0];         // "React"
obj.framework1; // "React"

Why different syntax?
➡️ Arrays use numeric indexes
➡️ Objects use named keys

🔹 Built-in Behavior

arr.length; // 3
obj.length; // undefined

Arrays come with helpers like map, filter, reduce. Objects focus on structured data, not iteration logic.

🔹 Type Check (JS gotcha 😅)

typeof arr; // "object"
typeof obj; // "object"

Even though arrays are technically objects, JavaScript treats them specially under the hood.

🔹 When to Use What?
✅ Use arrays when:
• Order matters
• You need iteration or transformations

✅ Use objects when:
• Data has clear labels
• You need fast access by key

💡 Takeaway
Arrays are for lists. Objects are for descriptions. Mastering this distinction makes your JS code cleaner, faster, and more predictable.

What confused you most about arrays vs objects when you started? 👇

#JavaScript #FrontendDevelopment #WebDev #LearningJS
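A reliable way around the typeof gotcha is Array.isArray(), sketched briefly:

```javascript
const arr = ["React", "Angular", "Vue"];
const obj = { framework1: "React" };

// typeof cannot tell arrays and plain objects apart
console.log(typeof arr); // "object"
console.log(typeof obj); // "object"

// Array.isArray() can
console.log(Array.isArray(arr)); // true
console.log(Array.isArray(obj)); // false
```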
JavaScript Coercion: Chaos or Logic? 🤯

Hi everyone! I just dropped Part 4 of my JavaScript deep-dive series: The Secrets of Coercion.

Coercion is easily the most misunderstood part of JavaScript. We've all seen the memes about [] + {} or 0 == false, but how many of us actually know the algorithms the engine uses to get those results?

In this article, I break down the "Abstract Operations" that govern everything:
• ToPrimitive: How objects are forced into simple values.
• ToNumber & ToString: The lookup tables the engine follows.
• The Boolean Trap: Why if (val) is different from if (val == true).
• The "String Wins" Rule: Why the + operator behaves differently than all others.

If you've ever been told to "just use === and never look back," this article is for you. Understanding coercion isn't just about passing interviews, it's about truly mastering the language's internal logic.

Read the full breakdown here: https://lnkd.in/en-h5QfY 🔗

Next up: We take everything we've learned about Scope and apply it to the legendary world of Closures.

#JavaScript #WebDevelopment #Programming #SoftwareEngineering #CodingTips #TechCommunity #Coercion #JSInternal
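A few of the classic results mentioned above, which fall directly out of those abstract operations:

```javascript
// The "string wins" rule: + prefers string concatenation
console.log(1 + "2");  // "12"
// Every other arithmetic operator coerces to number
console.log("5" - 1);  // 4

// ToPrimitive on objects: [] becomes "", {} becomes "[object Object]"
console.log([] + {});  // "[object Object]"

// The boolean trap: if (val) uses ToBoolean, but == uses numeric coercion
const val = [];
if (val) console.log("truthy"); // runs: an empty array is truthy
console.log(val == false);      // true: [] -> "" -> 0, and false -> 0
```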
Why does JavaScript let you use variables before they're declared? 🤔

Hi everyone! Following up on my last post, Part 2 of my JavaScript deep-dive series is now live on Medium!

Today, we're tackling one of the most famous (and often misunderstood) behaviors in the language: Hoisting. If you've ever wondered why var gives you undefined, but let throws a ReferenceError, or why function declarations work anywhere in your file, this article is for you.

In Part 2, I break down:
• The Preprocessing Step: Why the engine scans your code before running it.
• The "var" Mystery: Why declarations are hoisted, but values are not.
• Function vs. Class Hoisting: Why they behave so differently.
• The Temporal Dead Zone (TDZ): Demystifying the "forbidden zone" of modern JS.

I also address the biggest myth in JS: Does the engine actually "move" your code to the top of the file?

Check out the full breakdown here: https://lnkd.in/dGZJMbB4 🔗

I'd love to hear your thoughts or any "Hoisting horror stories" you've encountered in your own projects!

Next up: We move from how variables are registered to where they live: Scope and the Scope Chain.

#JavaScript #WebDevelopment #Hoisting #SoftwareEngineering #TechCommunity #CodingLife #Medium
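The three behaviors called out above (var, let, and function declarations) can be demonstrated in a few lines:

```javascript
// var: the declaration is hoisted, the value is not
console.log(typeof a); // "undefined" (no ReferenceError)
var a = 1;

// let: hoisted too, but unusable until its declaration (the Temporal Dead Zone)
let sawTDZ = false;
try {
  b; // accessing b here throws
} catch (err) {
  sawTDZ = err instanceof ReferenceError;
}
let b = 2;
console.log(sawTDZ); // true

// Function declarations are hoisted wholly, body included
greet(); // prints "hello", even though the declaration comes later
function greet() {
  console.log("hello");
}
```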
Rethinking .filter().map() in Performance-Critical JavaScript Code

As front-end developers, we often write code like this without thinking twice 👇

data
  .filter(item => item.isActive)
  .map(item => item.value)

It's clean. It's readable. But in performance-sensitive or large-scale applications, it's not always optimal.

Why? 🤔
Because .filter() and .map() each create a new array, meaning:
• Multiple iterations over the same data
• Extra memory allocations
• Unnecessary work in hot paths

A better alternative in many cases 👇

data.reduce((acc, item) => {
  if (item.isActive) acc.push(item.value)
  return acc
}, [])

✅ Single iteration
✅ Less memory overhead
✅ Better control over logic

Does this mean you should never use .filter().map()? Of course not.
👉 Readability comes first for small datasets
👉 Optimization matters when dealing with large lists, frequent renders, or performance-critical code

Key takeaway 🧠
Write clear code first. Optimize deliberately, not habitually.

#JavaScript #ReactJS #FrontendDevelopment #WebPerformance #CleanCode #SoftwareEngineering
WebAssembly vs. JavaScript: Testing Side-by-Side Performance. How much faster is WebAssembly than JavaScript for heavy data processing? We do a side-by-side test using an image processor built with Rust.