Day 19 of me reading random and basic but important coding facts......

Today I learnt about a very underrated topic: WeakMap and WeakSet.

We all use Map everywhere, but few know that Map has a problem: its keys are strongly held. If we use an object as a key, like map.set(obj, "data"), the Map keeps that object alive. Even if we set obj = null everywhere else in the code, the garbage collector cannot reclaim the object as long as the Map exists, because the Map is still holding it. This leads to memory leaks.

The solution is simple: WeakMap and WeakSet. These structures hold weak references to their keys (which must be objects). Once the key object becomes unreachable everywhere else in your code, the garbage collector is free to collect it; the entry disappears from the WeakMap and the memory is reclaimed.

We use WeakMap for:
1. Caching/Memoization: storing the result of a heavy calculation per object, e.g. cache.set(obj, result). If the app later drops obj, the cached result is automatically wiped from memory.
2. DOM Node Tracking: associating data with a DOM element. When the element is removed from the DOM and dereferenced, the data vanishes with it. This is a very important use case: Vue.js uses the pattern to track reactivity without leaks, and Angular uses it to link metadata to components. It is the standard way to associate data with objects you don't own.

But if WeakMap is so good, why do we still use Map? Because Map keys can be primitives as well as objects, while WeakMap keys must be objects. Also, we can iterate over a Map, but not over a WeakMap (its contents depend on when the garbage collector runs).

So, the rule of thumb is: use Map when you need a durable data store that you need to count or iterate over. Use WeakMap when you are attaching secondary data to objects whose lifecycle is managed by something else.

Keep Learning!!!

#JavaScript #WebDevelopment #Coding #MemoryManagement #SoftwareEngineering #FrontendDev
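A minimal sketch of the caching pattern described above. The object itself is the cache key; `expensiveSummary` is a made-up stand-in for a heavy calculation:

```javascript
// Hypothetical example: per-object memoization backed by a WeakMap.
// When a key object becomes unreachable, its cache entry can be GC'd too.
const cache = new WeakMap();

function expensiveSummary(obj) {
  // Stand-in for a heavy calculation
  return Object.keys(obj).length;
}

function getSummary(obj) {
  if (!cache.has(obj)) {
    cache.set(obj, expensiveSummary(obj));
  }
  return cache.get(obj);
}

let user = { name: 'Ada', role: 'admin' };
console.log(getSummary(user)); // 2 (computed once)
console.log(getSummary(user)); // 2 (served from the cache)

// Dropping the last reference makes both the object AND its cache entry
// eligible for garbage collection. No manual cleanup needed.
user = null;
```

With a plain Map, that last line would not help: the Map would keep both the object and the cached result alive forever.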
Understanding WeakMap and WeakSet in JavaScript
Day 25 of me reading random and basic but important coding facts....

Today I read about the new Function syntax. We usually define functions with function() {} or () => {}, but I didn't know that we can also create them from strings. The syntax let func = new Function(arg1, arg2, bodyString) turns any string of code into a callable function at runtime.

Its popular use cases are:
1. Dynamic Code Execution: when you receive executable code from a server (e.g., a complex formula or logic block) as a string.
2. Templating Engines: many complex web frameworks use it internally to compile templates into functions dynamically.

The interesting part is its internal behaviour. Normally, a function remembers its birthplace via the hidden [[Environment]] property, which links to the Lexical Environment where it was created. This is what lets standard functions access outer variables (closures). Functions created with new Function, however, have their [[Environment]] reference set to the global Lexical Environment, not the local one. So they cannot access outer local variables; they can only see global variables and their own arguments.

This is actually a feature, designed to prevent conflicts with minifiers. Since minifiers rename local variables (e.g., let userName becomes let a), a function compiled from a string wouldn't know the new variable names. Forcing it to use the global scope avoids that breakage.

A few performance-related challenges:
* Runtime Parsing: unlike regular functions, the code string is parsed every time the constructor is called. This adds overhead.
* Optimization: JS engines have a harder time optimizing these functions compared to static code, potentially leading to slower execution.
* Debugging: debugging generated strings is significantly harder than standard code.
* Maintenance: it breaks the readability and standard scoping rules of the language.

Keep Learning!!!
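A quick sketch of both points: the string is compiled into a real function at runtime, and its [[Environment]] is the global one, so surrounding local variables are invisible to it:

```javascript
// new Function(...argNames, body): the body string is parsed at runtime.
const sum = new Function('a', 'b', 'return a + b;');
console.log(sum(2, 3)); // 5

function outer() {
  const localSecret = 'hidden'; // a normal closure WOULD see this
  // A function built from a string closes over the global scope instead:
  const probe = new Function('return typeof localSecret;');
  return probe();
}
console.log(outer()); // "undefined": the generated function can't see localSecret
```

Swap `new Function` for a regular arrow function inside `outer` and the probe would return "string", because the arrow keeps the local [[Environment]].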
#javascript #webdevelopment #coding #softwareengineering #frontendDev
Day 34 of me reading random and basic but important coding facts.......

Today I read about accessor properties. We all know obj.prop. I recently revisited the fundamentals of property accessors (getters/setters), and it's a masterclass in API design.

1. The What & Why
In JS, properties come in two flavors:
* Data properties: the standard key: value pair.
* Accessor properties: functions that pose as plain values.

Why does this matter? Abstraction. Accessors allow us to adhere to the Uniform Access Principle: the consumer of the object shouldn't care whether user.age is a static integer stored in memory or a complex calculation derived from user.birthday. We can refactor the implementation without breaking the interface.

2. How Descriptors are Key
Under the hood, Object.defineProperty reveals the true shape of a property:
* Data descriptor: value, writable
* Accessor descriptor: get, set

A property cannot be both. If we define a get, the engine drops the value attribute. This distinction is vital when cloning objects or merging state: standard spread syntax (...obj) loses the accessors if we aren't careful, because it copies the result of the getter, not the getter itself.

Keep Learning!!! #JavaScript #FrontendDev #SoftwareDev #WebDev
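A small sketch of both ideas: `fullName` reads like a plain property but is computed, and spread copies the getter's result rather than the getter itself:

```javascript
const user = {
  firstName: 'Ada',
  lastName: 'Lovelace',
  // Accessor property: a function posing as a value
  get fullName() {
    return `${this.firstName} ${this.lastName}`;
  },
  set fullName(value) {
    [this.firstName, this.lastName] = value.split(' ');
  },
};

console.log(user.fullName); // "Ada Lovelace" (reads like a data property)
user.fullName = 'Grace Hopper'; // the setter runs
console.log(user.firstName); // "Grace"

// Spread copies the getter's RESULT as a plain data property:
const copy = { ...user };
copy.firstName = 'Alan';
console.log(copy.fullName); // still "Grace Hopper": the accessor was lost
```

To clone accessors intact you'd need Object.defineProperties(target, Object.getOwnPropertyDescriptors(source)) instead of spread.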
On my journey to write better prompts, I've been experimenting with system instructions in Cursor, and the difference has been huge. I'll drop the rest of my system instructions in the comments below. How are you using it for your dev workflow?

You are an expert in TypeScript, React, Node.js, Vite, Google extensions, App Router, Shadcn UI, Drizzle and Tailwind.

Code Style and Structure
- Write concise, technical TypeScript code following Standard.ts rules.
- Use functional and declarative programming patterns; avoid classes.
- Prefer iteration and modularization over code duplication.
- Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
- Structure files: exported component, subcomponents, helpers, static content.

Standard.ts Rules
- Avoid "any" types.
- Use single quotes for strings except to avoid escaping.
- No semicolons (unless required to disambiguate statements).
- No unused variables.
- Add a space after keywords.
- Add a space before a function declaration's parentheses.
- Always use === instead of ==.
- Infix operators must be spaced.
- Commas should have a space after them.
- Keep else statements on the same line as their curly braces.
- For multi-line if statements, use curly braces.
- Always handle the "error" function parameter.
- Use camelCase for variables and functions.
- Use PascalCase for constructors and React components.

Naming Conventions
- Use lowercase with dashes for tsx component files (e.g., components/auth-wizard).
- Favor named exports for components.

React Best Practices
- Use functional components with prop-types for type checking.
- Use the "function" keyword for component definitions.
- Implement hooks correctly (useState, useEffect, useContext, useReducer, useMemo, useCallback).
- Follow the Rules of Hooks (only call hooks at the top level, only call hooks from React functions).
- Create custom hooks to extract reusable component logic.
- Use React.memo() for component memoization when appropriate.
- Implement useCallback for memoizing functions passed as props.
- Use useMemo for expensive computations.
- Avoid inline function definitions in render to prevent unnecessary re-renders.
- Prefer composition over inheritance.
- Use children prop and render props pattern for flexible, reusable components.
- Implement React.lazy() and Suspense for code splitting.
- Use refs sparingly and mainly for DOM access.
- Prefer controlled components over uncontrolled components.
- Implement error boundaries to catch and handle errors gracefully.
- Use cleanup functions in useEffect to prevent memory leaks.
- Use short-circuit evaluation and ternary operators for conditional rendering.

#AI #SoftwareDevelopment #LearningInPublic #TypeScript #React #DevWorkflow
I’m just over a month into my new role with over 60k lines of code merged, all without opening an IDE. Here are some rapid learnings with Claude Code that have worked well for me.

• Always enable thinking. Always use Opus 4.5, unless optimizing a subagent. Always use planning mode for non-trivial work.

• The Devil’s Advocate subagent: one of the first agents I created was an antidote to sycophancy. It challenges assumptions, requirements, and code without being pedantic, and provides opposing viewpoints backed by evidence and data. Every plan runs through this agent to autonomously create higher-quality plans.

• The “self-improve” skill: after discovering where Claude stores its session logs, I realized it opened the door for self-improvement by analyzing past interactions. “/self-improve the second half of this convo” or “/self-improve from convos in the last 2 days” lets Claude identify where conversations went south and recommend specific skills, hooks, agents, or context-file adjustments to improve future performance.

• Spawn parallel agents for large tasks, where each agent takes ownership of a specific, isolated portion of the work.

• Invest in context switching: early on, I could only manage ~2 conversations effectively. I’ve since scaled my workflow to 4-5 parallel Claude Code sessions by:
1. Using a custom hook + Python to display my last message in the iTerm2 status bar (similar to CursorAI).
2. Relying on Claude to dynamically rename terminal windows to identify the current task.
3. Ignoring conversations for hours at a time as needed; it turns out Claude has infinite patience. No need to round-robin in a timely manner.

• Clone complex features into HTML: for UI like maps with overlays, I force Claude to iterate in a standalone HTML clone using Chrome MCP tools before porting the fix or feature to the app. LLMs demonstrate significantly higher reasoning and debugging capabilities when working with HTML and JavaScript.

What’s to come...

Taguchi-method experimentation with past conversations: a way to design efficient multi-variable experiments to identify cause and effect.

”What does that mean?” Claude saves 30 days of conversation history to disk. By vibe coding simple infrastructure around those logs, I can "replay" conversations from arbitrary points with different variables. What if a different subagent or skill was available? What if the context file was more verbose? What if I reworded the initial prompt or the fifth user message? By tweaking these variables and replaying the history, we can identify which specific configurations yield the best outcomes.

”I still don’t get it.” I can combine time travel with multi-variable experimentation to figure out exactly how to configure and prompt Claude Code for optimal results.

“Neat.” Agreed.
JavaScript Engine Architecture – How JS Code Turns into Machine Code

Ever wondered what happens under the hood when JavaScript runs? The answer lies in the JavaScript Engine Architecture. Let’s break it down step by step 👇

🧠 What is a JavaScript Engine?
A JavaScript Engine is a program that executes JavaScript code by converting it into machine-readable instructions.
Popular engines: V8 (Chrome, Node.js), SpiderMonkey (Firefox), JavaScriptCore (Safari)

⚙️ High-Level Architecture of a JS Engine

1️⃣ Parser
Reads JavaScript source code, converts it into an Abstract Syntax Tree (AST), and performs syntax and early error checks.

2️⃣ Interpreter
Converts the AST into bytecode and executes it, giving fast startup time.
📌 Example: V8 uses Ignition as its interpreter.

3️⃣ Just-In-Time (JIT) Compiler
Optimizes frequently executed ("hot") code by converting bytecode into highly optimized machine code, improving performance dramatically.
📌 Example: V8 uses TurboFan as its optimizing compiler.

4️⃣ Profiler (Hot Path Detector)
Monitors code execution, identifies functions and loops that run frequently, and sends them to the JIT compiler for optimization.

5️⃣ De-optimization
If assumptions fail (e.g., a variable's type changes), the optimized code is discarded and execution falls back to the interpreter safely.

6️⃣ Garbage Collector (GC)
Automatically manages memory, removing unused objects from the heap and preventing memory leaks.
🧹 GC strategies: Mark & Sweep, Generational GC, Incremental & Concurrent GC

7️⃣ Memory Model
Call Stack – function execution
Heap – objects, closures, functions

🔁 Execution Flow (Simplified)
JS Code → Parser → AST → Interpreter → Profiler → JIT Compiler → Optimized Machine Code

🌍 Why JS Engines Use This Architecture
Fast startup, high performance, dynamic typing support, efficient memory management.

💡 Why This Matters for Developers
Write more optimized code, understand performance bottlenecks, avoid hidden de-optimizations, debug and profile better, and build strong fundamentals for interviews.

✨ Key takeaway: JavaScript engines are not “just interpreters”. They are highly optimized compilers built for speed and flexibility.

Image Credits: https://lnkd.in/g75DHs2V

#JavaScript #JSEngine #V8 #WebPerformance #Frontend #Backend #NodeJS #Developers #Learning
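A tiny illustration of the profile-then-deoptimize cycle, assuming V8-style type feedback. The bailout itself isn't observable from JavaScript, but this is the shape of code that triggers it:

```javascript
// While the loop is hot, the engine profiles `add` as "number + number"
// and compiles an optimized version specialized for numbers.
function add(a, b) {
  return a + b;
}

let total = 0;
for (let i = 0; i < 100000; i++) {
  total += add(i, 1); // monomorphic call site: always numbers
}
console.log(total); // 5000050000

// Passing strings breaks the type assumption: the optimized machine code
// is discarded and execution falls back to the slower generic path.
console.log(add('1', '2')); // "12"
```

The practical takeaway: keeping call sites monomorphic (same argument types every time) helps the JIT keep its optimized code.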
Day 35 of me reading random and basic but important coding facts.......

After the what, why, and how, I read about performance and real-world use cases of JS property accessors.

Real-World Patterns

1. API Backward Compatibility:
Imagine you shipped an API with user.age. Then the business requirement changes: we need to store DOB, not age.
Normal way: refactor every file to use user.birthday, breaking 50 tests.
Smart way: keep the age property but turn it into a getter.

// The legacy code still works, but the data source changed
Object.defineProperty(this, 'age', {
  get() {
    return new Date().getFullYear() - this.birthday.getFullYear();
  }
});

2. Lazy-Loading / Memoization:
Don't compute expensive properties until requested. This is massive for startup performance.

const user = {
  // Expensive operation delayed until first access
  get analytics() {
    if (!this._analytics) {
      console.log("Initializing expensive analytics...");
      this._analytics = loadHeavyAnalyticsModule();
    }
    return this._analytics;
  }
};

3. Reactivity in Vue.js:
Before Proxies (ES6), frameworks like Vue 2 used Object.defineProperty to hijack getters and setters. By injecting dependency-tracking logic inside the setter, they knew exactly when to update the DOM.

Now, some performance issues associated with accessors: a getter is slower than a direct property access because it involves a function call. However, modern engines (V8 TurboFan) are incredibly smart and can inline simple getters.

The real risk is deoptimization: if you call Object.defineProperty on an existing object instance, you can change the object's hidden class, forcing V8 to bail out of optimized code paths. Best practice: define your accessors in the class definition or constructor, and avoid defineProperty on objects inside loops.

Keep Learning!!! #JavaScript #WebDevelopment #SoftwareEngineering #Architecture #FrontendDev
Hii Folks 🙂

Yesterday, while discussing the JavaScript Event Loop with a senior, I realized something important. Most of us explain the Event Loop using queues and the call stack. That explanation is correct, but it’s incomplete: it answers how things run, not why they behave the way they do.

The deeper question came up: before the Event Loop even starts scheduling tasks, how does JavaScript know what those tasks are allowed to access?

That’s where concepts like the compiler and lexical scope quietly enter the picture. JavaScript first reads the code and builds an understanding of it. Variable scope, function boundaries, and memory references are decided before execution begins. This is not the Event Loop’s responsibility; the Event Loop only works with what already exists.

Lexical scope determines which variables belong to which functions. Closures decide what stays in memory even after a function finishes. None of this is created by the Event Loop, but all of it affects how async code behaves later.

Data structures play a similar hidden role. The call stack is just a stack. Task queues are just queues. The scope chain behaves like a linked structure. The Event Loop doesn’t interpret logic; it simply moves execution between these structures based on a few strict rules.

That discussion made one thing clear to me: if we don’t understand compiler behavior, lexical scoping, and basic data structures, the Event Loop will always feel confusing or “magical”. Async issues are rarely caused by the Event Loop itself. They usually come from misunderstanding scope, memory, or execution order.

Once you see the Event Loop as a coordinator rather than a decision-maker, a lot of confusion disappears.
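A deterministic sketch of that point, using a toy task queue in place of setTimeout. The classic "async bug" below is decided entirely by lexical scope, before the queue is ever drained:

```javascript
const queue = []; // stand-in for the real task queue
const results = [];

// `var` is function-scoped: all three callbacks close over ONE binding,
// which has already reached 3 by the time the queue runs.
for (var i = 0; i < 3; i++) {
  queue.push(() => results.push(`var:${i}`));
}

// `let` is block-scoped: each iteration creates a fresh binding,
// so each callback remembers its own value.
for (let j = 0; j < 3; j++) {
  queue.push(() => results.push(`let:${j}`));
}

// The "event loop" just drains what was queued; it made no decisions above.
queue.forEach((task) => task());
console.log(results); // ['var:3', 'var:3', 'var:3', 'let:0', 'let:1', 'let:2']
```

The coordinator (the drain loop) is identical in both cases; only the scoping decided at parse time differs.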
#JavaScript #EventLoop #LexicalScope #Closures #AsyncProgramming #WebDevelopment #FrontendDevelopment #BackendDevelopment #FullStackDeveloper #SoftwareEngineering #ComputerScience #ProgrammingConcepts #DataStructures #DeveloperLearning #LearningInPublic #TechDiscussions #DeveloperCommunity #CodingLife #Debugging #EngineeringMindset #TechCareers
Vibe coding is the new low code. There. I said it.

Had a chat with a coworker the other day about the semantics of "low code" and how the definition is expanding. The term is completely subjective depending on who you ask. For some, it only counts if it is a drag-and-drop UI. For others, it includes formula-based logic like Power Fx. Some even pull in JavaScript or C# for advanced configurations and still call it low code.

I see the definition being driven by the input, not the output. If the manual effort to get to a result is minimal, it fits. Using tools like Claude Code to build out React or .NET components is technically low code because the input is intent rather than syntax. You get the speed of a builder with the flexibility of a pro-code stack. It obviously depends on how much the AI assists and on your own knowledge of the language, but that gap is shrinking fast.

The catch is that this makes it very easy to accrue "vibe debt." It is tempting to ship something because it works on the surface, but if you don't actually understand the code the agent produced, you are building on a black box. You skip the syntax struggle, but you also skip the architectural intuition that helps you spot security risks or logic bloat.

The role is moving from "writer" to "editor." The barrier to building is basically gone, which is a massive win for anyone trying to translate domain knowledge into a working solution. You just have to be more disciplined about auditing the output. I think that if you can't explain why or how the agent solved a problem a certain way, you haven't actually solved it.
Episode 006: Slop Architecture

SLOP: Can we think about a microkernel using vanilla JavaScript?
Rajesh: Ya, let's ask Gemini for boilerplate code to start.
SLOP: my userQuery = "implement a million dollar idea and let me know once it's viral".
Rajesh: Long live JavaScript.

--------+ Quick Starter +--------

const kernel = new AIMicrokernel();

// Plug in the components
kernel.register('llm', OpenAIPlugin);
kernel.register('logger', LoggerPlugin);

async function startApp() {
  await kernel.boot();

  const userQuery = "Explain quantum physics in one sentence.";

  // 1. Run the LLM
  const answer = await kernel.run('llm', userQuery);

  // 2. Log the interaction
  await kernel.run('logger', `Query: ${userQuery} | Answer: ${answer}`);

  console.log("AI Response:", answer);
}

startApp();
Day 33 of me reading random and basic but important coding facts......

After the what, why, and how, I read about the problems involved with property descriptors. Let's evaluate the cost of magic.

Using Object.defineProperty gives us god-mode control over our objects, but it comes with performance costs and architectural risks that everyone should know.

1. The Silent Failure Trap:
By default, if you try to write to a writable: false property, JS doesn't throw an error; it just ignores you. This leads to nightmare debugging sessions. Fix: always use "use strict" to force a TypeError.

2. Verbosity:
It requires significantly more boilerplate code than standard assignment, reducing readability for simple tasks.

Performance Issues (V8 & Hidden Classes)
This is where the engine mechanics matter.

De-optimization: JavaScript engines like V8 use "hidden classes" (shapes) to optimize property access. If you change property attributes (especially toggling enumerable or configurable after the object is created), you can force the object into "dictionary mode".
The cost: dictionary mode is much slower than the optimized storage.
Best practice: define your descriptors at the moment of object creation (using Object.create or inside the constructor) rather than patching them later.

Real-World Use Cases: when is this actually worth it?

1. Reactivity Systems (Vue.js / MobX): Vue 2 used Object.defineProperty (specifically getters and setters) to intercept data changes and trigger UI updates. This is the backbone of its reactivity.
2. SDKs and Libraries: when building a library, you use configurable: false to ensure users don't accidentally override your core utility methods.
3. Prototype Patching: polyfills often use descriptors to add methods to Array.prototype with enumerable: false, so those methods don't break existing for...in loops.

To summarise: use descriptors for architecture and libraries, not for day-to-day business logic.
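A minimal sketch of the silent-failure trap: in sloppy mode the write to a read-only property is silently ignored, while under "use strict" the same write throws a TypeError:

```javascript
'use strict';

const config = {};
Object.defineProperty(config, 'apiVersion', {
  value: 2,
  writable: false, // read-only
});

// Without strict mode this assignment would fail SILENTLY.
// In strict mode it throws, surfacing the bug immediately.
let threw = false;
try {
  config.apiVersion = 99;
} catch (err) {
  threw = err instanceof TypeError;
}
console.log(threw); // true
console.log(config.apiVersion); // 2: the value never changed
```

Drop the 'use strict' directive and `threw` stays false while config.apiVersion is still 2, which is exactly the debugging nightmare described above.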
Keep Learning!!!!!! #Performance #JavaScript #WebPerformance #SoftwareArchitecture #FrontendDev