Node.js is single-threaded... or is it? Meet libuv, the hidden powerhouse. 🏗️

We all know the mantra: "Node.js is single-threaded and non-blocking." But have you ever stopped to ask how a single thread can handle thousands of concurrent database queries and file reads without breaking a sweat? The answer isn't just "magic": it's libuv.

🧐 What is libuv?
Libuv is a multi-platform C library originally written for Node.js. While the V8 engine runs your JavaScript, libuv handles everything else: the event loop, the thread pool, and all things asynchronous.

🛠️ The 2 secret weapons of libuv:

1. The Event Loop (The Conductor) 🎼
This is the heart of Node.js. It manages the execution of callbacks. It doesn't do the heavy lifting itself; instead, it coordinates tasks across different phases (timers, I/O polling, check, etc.).

2. The Thread Pool (The Workers) 👷‍♂️
Wait, isn't Node single-threaded? JavaScript execution is, but libuv maintains a thread pool (4 threads by default). When you do something "heavy" like:
- Reading a file (fs.readFile)
- Hashing a password (crypto.pbkdf2)
- DNS lookups
...libuv offloads these tasks to its worker threads so your main thread stays free to handle new requests.

🔄 How it works in 3 steps:
1. The Request: You call an async function in JS.
2. The Hand-off: Node.js passes the task to libuv, which either asks the OS for help (for networking) or uses its thread pool (for files).
3. The Callback: Once the task is done, libuv queues the callback for the event loop to execute back in your JavaScript code.

💡 Why should you care?
Understanding libuv is the difference between a developer who just writes code and an engineer who knows how to optimize it.
- Know when your thread pool is a bottleneck.
- Understand why setImmediate and setTimeout behave differently.
- Learn to scale apps by tweaking UV_THREADPOOL_SIZE.

Are you diving into Node.js internals this year? Drop a "Building" in the comments if you want more deep dives into the Node.js architecture!
👇 #NodeJS #Backend #SoftwareEngineering #libuv #JavaScript #WebPerf #ProgrammingTips
Node.js: Beyond Single-Threaded with libuv
More Relevant Posts
🤔 Ever imported a package and got hit with one of these?

ReferenceError: require is not defined
Cannot use import statement outside a module
"Why does this library work in Node but not in the browser?"

That's almost always ESM vs CommonJS.

🧠 JavaScript interview question
What's the difference between ESM and CommonJS, and when would you use each?

✅ Short answer
- CommonJS (CJS) uses require() / module.exports and loads modules at runtime (Node's classic system).
- ES Modules (ESM) uses import / export, is static by default, and enables better tooling like tree-shaking.

🔍 The real differences that matter in real projects

📦 Syntax
- CJS: const x = require("x") + module.exports = ...
- ESM: import x from "x" + export ...

⏱️ When it resolves
- CJS resolves while your code runs, so you can do if (...) require(...)
- ESM is statically analyzed (imports live at the top level), so bundlers can optimize it

🌳 Tree-shaking
- ESM: ✅ easy to remove unused exports (webpack/vite/rollup love this)
- CJS: ❌ harder, because exports can be dynamic

🔁 Interop gotchas
- Importing CJS in ESM can lead to "default export weirdness"
- In Node, ESM often needs "type": "module" or the .mjs extension

💻 Tiny examples

CJS:

```javascript
// math.js
function add(a, b) {
  return a + b;
}
module.exports = { add };

// index.js
const math = require("./math");
console.log(math.add(2, 3)); // 5
```

ESM:

```javascript
// math.mjs
export function add(a, b) {
  return a + b;
}

// index.mjs
import { add } from "./math.mjs";
console.log(add(2, 3)); // 5
```

⚛️ React / Next.js practical note
Modern React and Next.js code is ESM-first (bundlers + tree-shaking), but some Node tooling and older libraries still ship CJS, so you'll see mixed ecosystems in monorepos.

🎯 Rule of thumb
✅ Choose ESM for modern apps and libraries.
✅ Use CJS when you must support older Node setups or legacy packages.

#JavaScript #Frontend #WebDevelopment #NodeJS #React #Nextjs #TypeScript #CodingInterview
React 19's new game-changer: the use hook!

If you're tired of writing the same useEffect + useState boilerplate for every single API call, this one's for you. Here's why your code is about to get a whole lot cleaner.

🛑 The "Old" Way (Boilerplate Central)
We've all been there. You want to fetch data, so you:
1. Initialize state (data, loading, error).
2. Trigger useEffect.
3. Handle the async promise.
4. Update state... and hope you didn't forget the cleanup!

✨ The "React 19" Way (Clean & Declarative)
With the new use hook, you can "unwrap" promises directly in your render. No more manual loading states: React handles the "waiting" part using Suspense.

```javascript
import { use } from 'react';

// 1. A request started once, outside the component
//    (note: this is a promise, not a function)
const fetchUserData = fetch('/api/user').then(res => res.json());

function UserProfile() {
  // 2. Just 'use' it!
  const user = use(fetchUserData);

  // 3. No 'if (loading)' needed here!
  return <h1>Welcome back, {user.name}!</h1>;
}
```

💡 Why developers love it:
- Conditional hooks: you can actually call use inside if statements or loops. (Yes, really!)
- Bye-bye boilerplate: it works hand-in-hand with <Suspense> for loading and an error boundary for errors.
- Readable code: your component focuses on the UI, not the plumbing.

Is useEffect dead? Not quite, but for data fetching, the use hook is definitely the new favorite child. 👶

Which should you use?
- Go with use if you are starting a new React 19 project and want a "native" feel with less code. It's perfect for simple data fetching and for reading Context.
- Stick with useEffect if you are maintaining an older codebase or need to synchronize with non-React systems (like a WebSocket or a manual DOM library).

✨ New Era
Combining the use hook with an error boundary and Suspense is the "Holy Trinity" of React 19 data fetching. It moves the complexity out of your component logic and into your component structure.
The "Safety Net" Pattern
Think of Suspense as your loading spinner and the error boundary as your catch-all for when the API goes down.

Why this is a massive upgrade:
- No "jank": in the old useEffect approach, the component rendered once with data = null, then re-rendered when the data arrived, often causing layout shifts. With use, the component doesn't even finish its first render until the data is ready.
- Centralized logic: if you have 10 components fetching data, you don't need 10 if (loading) checks. You just wrap them all in one high-level Suspense boundary.
- Better DX (developer experience): your UserProfile component is now "pure". It doesn't care about fetch logic; it just assumes the data is there when it runs.

Pro tip: React 19 makes related patterns cleaner too. You can now pass a ref down as a regular prop without needing forwardRef. Stay tuned for my upcoming post on useRef.

#React19 #ReactJS #useHook #ReactHooks #JavaScript #CleanCode #SoftwareDevelopment #Frontend #ModernWeb #SystemDesign
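One practical caveat worth sketching (the helper below is my own illustration, not from the post): use() should receive a stable promise. If a component creates a fresh promise on every render, each render re-suspends, so the usual move is to cache the promise outside the component.

```javascript
// Cache promises per user id so every render sees the SAME promise.
const userCache = new Map();

function getUserPromise(id) {
  if (!userCache.has(id)) {
    userCache.set(id, fetch(`/api/user/${id}`).then((res) => res.json()));
  }
  return userCache.get(id); // identical promise on every call
}

// Inside a React 19 component this would read:
//   const user = use(getUserPromise(id));
```

In real apps this caching is usually delegated to a data library or a framework loader; the sketch just shows why "a promise created inside render" is the classic use() footgun.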
🧠 JavaScript Fundamentals

Here are some of the most common operators in JavaScript:

🔹 Assignment
= → assigns a value
🔸 Example: let a = 2;

🔹 Math operators
+ (addition), - (subtraction), * (multiplication), / (division)
🔸 Example: a * 3

🔹 Compound assignment
+=, -=, *=, /=
🔸 Example: a += 2; // same as a = a + 2

🔹 Increment / decrement
++ → increase by 1
-- → decrease by 1
🔸 Example: a++;

🔹 Object property access
. → dot notation, [] → bracket notation
🔸 Example: obj.a; obj["a"];

🔹 Equality operators
== → loose equality
=== → strict equality
!= → loose not-equal
!== → strict not-equal
The difference between == and === is especially important: == coerces types before comparing, while === does not.

🔹 Comparison operators
<, >, <=, >=

🔹 Logical operators
&& → and
|| → or
🔸 Example: a || b

#cs_internship #web #step1 https://lnkd.in/eCQZ9zVF
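A quick runnable sketch of the == vs === distinction called out above: loose equality coerces types before comparing, strict equality never does.

```javascript
// Loose equality coerces; strict equality compares type AND value.
console.log(0 == "0");           // true  (the string is coerced to a number)
console.log(0 === "0");          // false (number vs string, no coercion)

console.log(null == undefined);  // true  (a special case in the spec)
console.log(null === undefined); // false

console.log("" == false);        // true  (both coerce to 0)
console.log("" === false);       // false
```

Because of these coercion surprises, most style guides recommend === and !== by default.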
Just dropped my 2nd blog in my JS Unlocked series! 🚀 This time — Variables & Data Types in JavaScript 👇 I'm now going deeper into JavaScript fundamentals as part of Web Dev Cohort 2026. In this one I cover: ✅ What variables are (with a real-life box analogy) ✅ var vs let vs const — when to use what ✅ All 5 primitive data types with real examples ✅ Scope explained simply — no jargon ✅ A hands-on challenge to test yourself If you're just starting out with JavaScript, this one's for you 🙏 🔗 https://lnkd.in/dEkjzfMq Thanks to #HiteshChoudhary Sir, #PiyushGarg and #AkashKadlag for building this cohort 💛 #JavaScript #WebDevelopment #Hashnode #WebDevCohort2026 #LearningInPublic #Frontend #JS
How do you deep clone an object in JavaScript?

If you answered JSON.parse(JSON.stringify(obj)), you're not alone. It's been the go-to hack for over a decade. But it's broken in ways that will bite you at the worst possible time.

```javascript
const original = {
  name: "Pranjul",
  joined: new Date("2024-01-01"),
  skills: new Set(["React", "CSS", "TypeScript"])
};

const cloned = JSON.parse(JSON.stringify(original));

console.log(cloned.joined); // "2024-01-01T00:00:00.000Z" (a string, NOT a Date)
console.log(cloned.skills); // {} (an empty object, NOT a Set)
```

Everything silently broke. No errors, no warnings. Your Date is now a string. Your Set is an empty object. And your code downstream has no idea.

This is why structuredClone() exists.

```javascript
const cloned = structuredClone(original);

console.log(cloned.joined); // Date object (correct!)
console.log(cloned.skills); // Set {"React", "CSS", "TypeScript"} (correct!)
```

One function call. Everything cloned properly.

Circular references are where the difference really shows:

```javascript
const obj = { name: "Pranjul" };
obj.self = obj; // circular reference

JSON.parse(JSON.stringify(obj)); // TypeError: Converting circular structure to JSON
structuredClone(obj);            // Works. Returns a proper deep clone.
```

If you've ever hit that circular reference error in production, you know how painful it is. structuredClone just handles it.

When JSON.parse/stringify still wins:
There is one scenario where the JSON approach has an advantage: speed for simple, flat objects containing only strings, numbers, and booleans. The JSON functions are heavily optimized in V8. For data like this, JSON is often faster:

```javascript
const simpleData = {
  id: 1,
  name: "Pranjul",
  active: true,
  tags: ["frontend", "react"]
};
```

But the moment you have Dates, Sets, Maps, undefined values, or circular structures, structuredClone is the correct choice.

What structuredClone CANNOT clone:
Not everything is supported. Functions and DOM nodes can't be cloned, and class instances are cloned as plain objects, losing their methods. That's by design.
My rule of thumb:
✅ Need to clone plain data with possible Dates/Sets/Maps? Use structuredClone.
✅ Need to serialize data for storage or the network? Use JSON.stringify.
✅ Need to clone class instances with methods? Write a custom clone method.

structuredClone ships in every modern browser and in Node.js 17+. There's no reason to keep using the JSON hack for deep cloning in 2026.

What's a JavaScript built-in that you learned about way too late? Share below 👇
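Here is a minimal sketch of the third rule, a hand-written clone method for a class (the User class is my own example): rebuilding through the constructor keeps the prototype and its methods, which structuredClone would drop.

```javascript
// A class whose instances need real deep cloning, methods included.
class User {
  constructor(name, tags) {
    this.name = name;
    this.tags = new Set(tags);
  }
  clone() {
    // Rebuild through the constructor so the prototype (and greet) survive.
    return new User(this.name, [...this.tags]);
  }
  greet() {
    return `Hi, ${this.name}`;
  }
}

const u = new User("Pranjul", ["frontend"]);
const copy = u.clone();

console.log(copy.greet());         // "Hi, Pranjul" — methods intact
console.log(copy instanceof User); // true
console.log(copy.tags !== u.tags); // true — the Set was actually copied
```

A structuredClone of the same instance would be a plain object with no greet() at all, which is exactly why the custom method earns its keep.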
TypeScript 6.0 Beta is here! 🚀

This is a historic release: it is the final version based on the JavaScript codebase. The team is actively rewriting TS 7.0 in Go for massive native speed gains. TS 6.0 is the bridge to that high-performance future. 🌉

🧠 Smarter inference
Methods that don't use this now get better type inference. TS 6.0 detects that this is unused and prioritizes context from other arguments. This fixes those annoying "implicitly has an 'any' type" errors when method arguments were out of order.

📦 New subpath imports
Node.js allows subpath imports starting with just #/ (e.g. #/utils). TS 6.0 now supports this under the node20, nodenext, and bundler resolution modes. You can finally use those ultra-clean internal import paths! ✨

🛠️ Bundler + CommonJS
A highly requested combo! You can now use --moduleResolution bundler with --module commonjs. Perfect for projects that still ship CJS but rely on modern bundler resolution logic. A smoother upgrade path away from node10.

🔮 Prep for TS 7.0 (Go)
New flag: --stableTypeOrdering. This ensures your type IDs match the behavior of the upcoming Go compiler, so your .d.ts emits stay consistent when you eventually switch to the native engine in the next major version.

📅 ES2026 & Temporal
The long-awaited Temporal API is coming! TS 6.0 adds built-in types for Temporal (fixing Date/Time pain), plus RegExp.escape and the Map upsert methods (getOrInsert). Enable via target: es2026 or the corresponding lib entry.

🧹 Spring cleaning
To slim down for the port, TS 6.0 deprecates some old features:
❌ target: es5 is deprecated
❌ no-default-lib is deprecated
⚠️ outFile, amd, and system modules are discouraged
Time to update those configs!

🔒 Stricter defaults
New projects get better safety out of the box.
Defaults now include:
✅ strict: true
✅ module: esnext
✅ noUncheckedSideEffectImports: true

👉 Try it today
Get your codebase ready for the future:

npm install -D typescript@beta

Test it now to make sure you're ready for the migration to the native Go compiler in v7.0!
Next.js 16 Migration Alert: how to update your utility functions for async request APIs 🛠️

If you are upgrading to Next.js 16, your "traditional" utility functions are likely about to break. With async request APIs now standardized, params, searchParams, cookies(), and headers() are all Promises. This is a fundamental shift that lets the React Compiler optimize rendering.

Here is how to refactor your code to stay compliant.

1️⃣ The "utility function" refactor
Previously, you could pass params into a helper function and access its fields immediately. In v16, you must handle the Promise.

❌ Old way (v14/v15):

```typescript
function getCategory(params) {
  return params.category; // now undefined, or a type error
}
```

✅ New way (v16):

```typescript
async function getCategory(params: Promise<{ category: string }>) {
  const { category } = await params;
  return category;
}
```

2️⃣ Handling cookies() and headers()
These are no longer static snapshots. They are dynamic functions that must be awaited at the point of use to avoid blocking the entire route's execution.

❌ Deprecated:

```typescript
const cookieStore = cookies();
const theme = cookieStore.get('theme');
```

✅ Next.js 16 standard:

```typescript
const cookieStore = await cookies();
const theme = cookieStore.get('theme');
```

3️⃣ The pattern: "pass-through" vs. "awaited"
To keep your components clean, decide where you want to resolve the Promise.
- Option A (in the Page): await the params in the Page component and pass the resolved values down to children. (Best for simple props.)
- Option B (in the component): pass the Promise down and let the child component use() it or await it. (Best for deep component trees.)

```typescript
// app/shop/[id]/page.tsx
export default function Page({ params }) {
  // Pass the promise directly to a Client Component
  return <ProductDetails params={params} />;
}
```

🚀 Migration checklist:
[ ] Search for all instances of params and searchParams in your Page and Layout files.
[ ] Update TypeScript interfaces to use Promise<T>.
[ ] Audit your Middleware: if you were doing complex body manipulation, migrate that logic to proxy.ts or Route Handlers.
[ ] Ensure every Parallel Route (@folder) has a default.js to prevent 404s during navigation.

The strategy: don't fight the async nature of Next.js 16. By embracing await, you allow the React Compiler to "hole-punch" your UI, rendering static content instantly while the dynamic parts catch up.

How is your team handling the v16 migration? Are you automating the refactor with codemods, or doing a manual audit? Comment 👇

#NextJS16 #ReactJS #WebDevelopment #CodingLife #SoftwareArchitecture #TypeScript #Vercel #Frontend
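One migration trick worth knowing, sketched here in plain JavaScript (nothing Next.js-specific): await on a non-promise value is a no-op, so a helper written with await accepts both the old plain-object shape and the new Promise shape during an incremental migration.

```javascript
// Accepts params either as a plain object (old callers)
// or as a Promise (Next.js 16 callers), by awaiting unconditionally.
async function getCategory(params) {
  const { category } = await params; // works for both shapes
  return category;
}

// old-style caller:
getCategory({ category: "shoes" }).then(console.log);
// v16-style caller:
getCategory(Promise.resolve({ category: "shoes" })).then(console.log);
```

Both calls log "shoes", which lets you migrate call sites one at a time instead of in a single sweeping change.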
Wasm vs. JavaScript: who wins at a million rows?

By processing millions of CSV rows directly in the browser, this tutorial shows how WebAssembly outpaces JavaScript when the data gets big.

By now it's pretty clear that JavaScript needs WebAssembly (Wasm) for heavy computational tasks. In the past few weeks we've covered the basics, done a side-by-side image processing comparison, and seen the benefits of using Wasm with web workers. In this next tutorial, let's apply the same principle to real-world data using large CSV files. Instead of images or web worker computations, we'll fetch and count millions of rows directly in the browser to compare JavaScript and Wasm performance side by side. This shows how Wasm can make even seemingly simple tasks like counting rows lightning fast.

https://lnkd.in/e7YWVfYR

Please follow Divye Dwivedi for such content.

#DevSecOps #SecureDevOps #CyberSecurity #SecurityAutomation #CloudSecurity #InfrastructureSecurity #DevOpsSecurity #ContinuousSecurity #SecurityByDesign #SecurityAsCode #ApplicationSecurity #ComplianceAutomation #CloudSecurityPosture #SecuringTheCloud #AI4Security #IntelligentSecurity #AppSecurityTesting #CloudSecuritySolutions #ResilientAI #AdaptiveSecurity #SecurityFirst #AIDrivenSecurity #FullStackSecurity #ModernAppSecurity #SecurityInTheCloud #EmbeddedSecurity #SmartCyberDefense #ProactiveSecurity
How does Node.js handle thousands of requests on a single thread?

JavaScript was originally designed to run inside browsers. It could manipulate the DOM, handle clicks, and run scripts, but it couldn't access the file system or create servers. Then Node.js changed everything.

But Node.js is not just "JavaScript running on a server." Under the hood, it works because three powerful systems cooperate:

1. V8 engine: compiles and executes JavaScript code.
2. libuv: handles asynchronous operations like file I/O, networking, DNS, and timers using a thread pool.
3. Node.js bindings (C++): act as a bridge between JavaScript and low-level system operations.

Because of this architecture, Node.js stays non-blocking and highly scalable, even though JavaScript itself runs on a single thread.

For example:

```javascript
const fs = require("fs");

fs.readFile("data.txt", "utf8", (err, data) => {
  console.log(data);
});

console.log("I run first!");
```

Behind the scenes:
1. V8 executes the JavaScript.
2. libuv sends the file task to the thread pool.
3. The event loop keeps running other code.
4. When the task finishes, the callback goes to the queue.
5. V8 executes the callback.

That's why (assuming data.txt contains "Hello from data.txt") you see:

I run first!
Hello from data.txt

Understanding this architecture helped me connect concepts like:
• Event loop
• Thread pool
• Async callbacks
• Non-blocking I/O

So I wrote a beginner-friendly explanation of Node.js architecture, step by step: https://lnkd.in/gsjDYgdP

#NodeJS #JavaScript #BackendDevelopment #WebDevelopment #ChaiAurCode
🧠 A UI job in CI kept crashing with a "JavaScript heap out of memory" error.

No big deal, right? Just increase Node's heap size. Except... it was already set to 8GB:

--max-old-space-size=8192

And yet the pipeline kept failing with:

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory

Something didn't add up.

🔎 Digging into the logs
One line in the GC logs stood out:

Mark-Compact ... 2020 MB

Wait a second. If Node can use 8GB, why is it crashing at ~2GB? That was the clue. The process wasn't actually getting the configured heap size; it was effectively capped at ~2GB.

🧠 What was happening under the hood
Node (through the V8 engine) stores JavaScript objects in a heap. Long-lived objects live in the Old Generation (Old Space). When running tools like the TypeScript compiler, a lot of large in-memory structures are created:
• AST nodes
• symbol tables
• type graphs
• generic type instantiations
• dependency graphs

In larger codebases, these can easily consume multiple gigabytes of memory. When the heap fills up and garbage collection can't reclaim enough memory, Node throws:

Ineffective mark-compacts near heap limit

⚙️ The actual root cause
The issue wasn't the Node configuration at all. The CI job was running on a general-large runner, which constrained the memory available to the process. So even though Node was configured to allow 8GB, the environment itself limited how much memory could actually be used.

✅ The fix
Moving the job to a larger runner solved it:

runs-on: general-xlarge

With more system memory available, Node could finally allocate a larger heap, and the TypeScript compilation ran successfully.
💡 Takeaways
• Large TypeScript builds can consume far more memory than expected
• Crashes around ~2000MB in GC logs often indicate a heap ceiling
• Increasing Node's heap limit won't help if the underlying runner is memory-constrained
• Sometimes debugging CI failures is about understanding the runtime + infrastructure, not just the code

Curious if others have run into similar TypeScript or Node memory issues in CI pipelines?

#FrontendEngineering #NodeJS #TypeScript #CICD #SoftwareEngineering #JavaScript