libuv Thread Pool — the hidden workers behind Node.js

Node.js is single-threaded… but not everything runs on the event loop. One of the most misunderstood parts of Node.js is the libuv thread pool.

When we say Node.js is non-blocking, what we really mean is this: blocking work is quietly offloaded somewhere else. That “somewhere else” is the libuv thread pool.

What actually runs in the thread pool
– File system operations
– DNS lookups
– Compression and crypto
– Some native addons

These tasks don’t block the event loop directly. They are executed by background worker threads managed by libuv.

Why this matters in real applications
The thread pool has a default size of 4 threads. If all of them are busy, new tasks wait in a queue. This is why:
– Heavy file uploads can slow down unrelated requests
– CPU-heavy crypto can impact API latency
– “Async” code can still cause performance issues

Nothing is blocked… but everything is waiting.

A common production mistake
Assuming async APIs mean unlimited parallelism. They don’t. If you overload the thread pool, your app stays alive but becomes slow and unpredictable.

How experienced teams handle this
– Avoid CPU-heavy work on API servers
– Use streams instead of buffering large files
– Tune UV_THREADPOOL_SIZE only when you understand the trade-offs
– Offload heavy processing to workers or separate services

The key takeaway
Node.js performance issues are rarely about JavaScript. They’re usually about understanding what runs on the event loop and what doesn’t. Once you understand the libuv thread pool, many “mysterious” Node.js bottlenecks suddenly make sense.

#NodeJS #BackendEngineering #SystemDesign #JavaScript #WebPerformance #NodeInternals #SoftwareArchitecture #FullStackDevelopment
More Relevant Posts
-
🚀 Does Node.js Really Have 4 Threads?

A common misconception:
> “Node.js is multi-threaded because it has 4 threads.”

Let’s structure this properly 👇

1️⃣ JavaScript Execution in Node.js
Node.js runs JavaScript on a single main thread using:
– V8 Engine
– Event Loop

Everything below runs on ONE thread:
– Express routes
– Middleware
– Business logic
– Promise callbacks
– async/await
– Timers

👉 This is why Node.js is called single-threaded.

2️⃣ Where Do the 4 Threads Come From?
Node.js uses libuv internally. libuv provides a thread pool with a default size of 4 threads. These threads handle blocking system-level tasks.

3️⃣ What Actually Uses the Thread Pool?
The 4 threads are used for:
– File system operations (fs)
– Crypto tasks (bcrypt, pbkdf2)
– Compression (zlib)
– DNS lookups via dns.lookup (not the network-based dns.resolve*)

Flow:
1. Blocking task detected
2. Task offloaded to libuv
3. One thread processes it
4. Result returned to the Event Loop
5. Callback executed on the main thread

4️⃣ Important Clarification
Node.js is:
✅ Single-threaded for JavaScript execution
✅ Multi-threaded internally for I/O handling
❌ Not multi-threaded for your business logic

If true parallel JavaScript execution is required:
– worker_threads
– cluster
– Multiple Node processes

Understanding this distinction helps design better APIs, avoid CPU blocking, and build scalable backend systems.

#NodeJS #JavaScript #BackendDevelopment #EventLoop #Libuv #AsyncProgramming #ScalableSystems #SystemDesign #FullStackDevelopment
-
𝗘𝘃𝗲𝗻𝘁 𝗟𝗼𝗼𝗽 𝗜𝘀 𝗡𝗼𝘁 𝗠𝗮𝗴𝗶𝗰 🐢

𝙎𝙞𝙣𝙜𝙡𝙚-𝙩𝙝𝙧𝙚𝙖𝙙𝙚𝙙 𝙙𝙤𝙚𝙨𝙣’𝙩 𝙢𝙚𝙖𝙣 “𝙙𝙤𝙣’𝙩 𝙬𝙤𝙧𝙧𝙮 𝙖𝙗𝙤𝙪𝙩 𝙗𝙡𝙤𝙘𝙠𝙞𝙣𝙜”. Saw a production API freeze this week because someone ran a massive synchronous JSON parse inside a critical route 📉. The event loop is a powerhouse, but it’s also fragile 🌪️.

💡 𝗛𝗼𝘄 𝗶𝘁 𝘄𝗼𝗿𝗸𝘀:
Node.js runs on a single-threaded event loop. Your async callbacks, timers, and I/O tasks queue up here. Heavy computation blocks the loop, freezing everything else; users wait… and get frustrated. The system kernel handles low-level I/O and scheduling, but your JS thread still needs to stay free to process events.

💡 𝗥𝘂𝗹𝗲 𝗼𝗳 𝘁𝗵𝘂𝗺𝗯:
Heavy math, large arrays, or CPU-intensive tasks? Offload them to worker threads 🔄. These are separate threads that can run in parallel without blocking the main event loop. Use the Worker class from Node.js’s built-in 𝙬𝙤𝙧𝙠𝙚𝙧_𝙩𝙝𝙧𝙚𝙖𝙙𝙨 module for real concurrency.

Don’t punish your users because going async felt like “too much work” 🛑. Profiling latency is far more important than your “hello world” speed 🚀. It’s about smartly using hardware, not just raw specs 💻.

https://buff.ly/qZoLzFx

#NodeJS #Backend #Performance #SoftwareArchitecture #JavaScript #WebDev #EventLoop #WorkerThreads
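That freeze is easy to reproduce. In this sketch, a 0 ms timer can only fire after a deliberate ~200 ms synchronous block releases the loop (the busy-wait stands in for the "massive synchronous JSON parse"):

```javascript
// A 0 ms timer should fire "immediately", but the event loop can only
// run it once the synchronous code below has finished.
let firedAfter;

const start = Date.now();
setTimeout(() => {
  firedAfter = Date.now() - start;
  console.log(`0 ms timer actually fired after ${firedAfter} ms`);
}, 0);

// Stand-in for the blocking JSON parse: busy-wait ~200 ms.
const busyUntil = Date.now() + 200;
while (Date.now() < busyUntil) {} // nothing else runs during this loop
```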
-
Mastering Node.js performance often boils down to a deep understanding of its core: the Event Loop.

Node.js excels at non-blocking I/O, allowing it to handle many concurrent connections efficiently. However, mistakenly introducing synchronous, CPU-intensive operations can quickly block the Event Loop, turning your highly performant application into a bottleneck.

**Insightful Tip:** Always prioritize asynchronous patterns, especially for I/O operations and long-running computations. When faced with a CPU-bound task that cannot be made asynchronous (e.g., complex calculations, heavy data processing), consider offloading it to a worker thread using Node.js's `worker_threads` module, or even a separate microservice. This ensures the main Event Loop remains free to process incoming requests, maintaining your application's responsiveness and scalability.

This approach prevents your server from becoming unresponsive under load, delivering a smoother experience for users and ensuring your application can scale effectively.

What's your go-to strategy for preventing Event Loop blockages in your Node.js applications? Share your insights below!

#Nodejs #EventLoop #PerformanceOptimization #BackendDevelopment #JavaScript

**References:**
1. The Node.js Event Loop, Timers, and `process.nextTick()`: Node.js Docs
2. Worker Threads: Node.js Docs
-
Building a browser-based strategy game is essentially a masterclass in frontend state management. Today’s focus on the "Siege of Eger" engine: creating a seamless, type-safe data pipeline from a Supabase backend to an Angular 21 frontend. 🏰

Here is a breakdown of today's architecture evolution:

🔹 Strict Full-Stack Type Safety (Zod)
When bridging PostgreSQL and TypeScript, data types like timestamptz can cause silent bugs if not handled correctly. By using Zod to parse the backend DTOs, the raw DB timestamp string is safely transformed into a JavaScript Date object before it ever touches the game logic. If the schema fails, the app catches it immediately.

🔹 Reactive Fetching with httpResource
I migrated the data layer away from raw fetch Promises to Angular’s native httpResource (introduced in v19).
💡 Why it’s great: It automatically exposes .value(), .isLoading(), and .error() as Signals. This completely eliminates manual loading-state boilerplate, handles memory cleanup automatically, and makes building polished UI transitions trivial.

🔹 The Client-Side Game Loop (NgRx SignalStore & RxJS)
To make resources "generate" in real time, you can't ping the database every second.
💡 The Solution: The server acts as the source of truth (saving a timestamp for offline progress), while the local NgRx SignalStore runs an RxJS interval to optimistically calculate the "delta time" and update the UI tick by tick.

Moving a codebase from a "working prototype" to a "scalable, reactive architecture" is where the real fun begins.

What is your go-to pattern for managing high-frequency, real-time state updates in modern frontend frameworks? Let me know below! 👇

#Angular #TypeScript #WebDevelopment #SoftwareArchitecture #RxJS #NgRx #Frontend #Fullstack #Coding #Programming
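The delta-time idea is framework-agnostic. A stripped-down sketch in plain JavaScript (names and rates are hypothetical, not from the actual codebase):

```javascript
// Server snapshot is the source of truth: an amount plus the timestamp
// it was saved at. The client derives the current value from elapsed time
// instead of polling the database every tick.
function currentAmount(snapshot, now = Date.now()) {
  const elapsedSec = (now - snapshot.savedAt) / 1000;
  return snapshot.amount + snapshot.ratePerSec * elapsedSec;
}

// 100 gold saved 10 s ago at 5 gold/s → roughly 150 gold now
const snapshot = { amount: 100, ratePerSec: 5, savedAt: Date.now() - 10_000 };
console.log(currentAmount(snapshot));
```

A store's interval tick would just re-run `currentAmount` against the same snapshot; only genuine state changes (spending, upgrades) need a round trip to the server.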
-
After a long day debugging a production build issue, I finally found something interesting worth sharing. 🥰

While running the final build of my React (Vite) application, I saw this warning: “Some chunks are larger than 500 kB after minification.” 😬 One file was 8MB+ in size. That’s a serious performance red flag 🤧

After investigating 🧐, the culprit turned out to be the country-state-city package. I had imported it like this:

import { Country, State, City } from "country-state-city";

What I didn’t realize initially was that this package includes a massive JSON dataset of all countries, states, and cities worldwide. When imported normally, the entire dataset gets bundled into the main chunk. That means every user downloads the whole world — even if they just need one dropdown.

The Solution: Instead of a static import, I switched to a dynamic import:

const { Country } = await import("country-state-city");

This creates a separate chunk and loads the data only when needed (for example, when the billing tab opens).

Result:
– Smaller initial bundle
– Faster first load
– Better performance
– Cleaner architecture

What We Should Focus On To Prevent This
1. Always analyze bundle size before production.
2. Be careful when importing data-heavy libraries.
3. Prefer dynamic imports for large datasets.
4. Question whether the frontend really needs the full dataset.
5. Consider backend APIs if data is large and rarely needed.

Sometimes performance issues aren’t about complex algorithms — they’re about small architectural decisions.

Today’s lesson: Every import matters. 🙂

#ReactJS #Vite #WebPerformance #FrontendDevelopment #JavaScript #BuildOptimization
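For readers who haven't used `import()` before, a runnable sketch of the mechanics. A built-in module stands in for the heavy package here so the snippet is self-contained; in the real app the argument would be 'country-state-city', and a bundler like Vite would emit it as a separate chunk fetched on first call.

```javascript
// import() returns a Promise; the module is loaded (and, in a bundler,
// downloaded as its own chunk) only when this function first runs.
async function onBillingTabOpen() {
  const mod = await import('node:os'); // stand-in for: await import('country-state-city')
  return typeof mod.platform === 'function';
}

onBillingTabOpen().then((loaded) => {
  console.log('module loaded lazily:', loaded);
});
```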
-
𝐇𝐨𝐰 𝐭𝐡𝐞 𝐍𝐨𝐝𝐞.𝐣𝐬 𝐄𝐯𝐞𝐧𝐭 𝐋𝐨𝐨𝐩 𝐖𝐨𝐫𝐤𝐬 𝐈𝐧𝐭𝐞𝐫𝐧𝐚𝐥𝐥𝐲

At a high level, the Event Loop is a continuous cycle that checks for tasks, executes callbacks, and waits for new events. But internally, it’s more structured than most people think. It runs in phases, each handling specific types of callbacks:

• 𝐓𝐢𝐦𝐞𝐫𝐬 → Executes callbacks from 𝘴𝘦𝘵𝘛𝘪𝘮𝘦𝘰𝘶𝘵() and 𝘴𝘦𝘵𝘐𝘯𝘵𝘦𝘳𝘷𝘢𝘭() whose delay has expired.
• 𝐏𝐞𝐧𝐝𝐢𝐧𝐠 𝐂𝐚𝐥𝐥𝐛𝐚𝐜𝐤𝐬 → Runs certain system-level I/O callbacks deferred from the previous loop iteration.
• 𝐏𝐨𝐥𝐥 → Retrieves completed I/O events and executes their callbacks; this is where most request handling happens.
• 𝐂𝐡𝐞𝐜𝐤 → Executes callbacks scheduled with 𝘴𝘦𝘵𝘐𝘮𝘮𝘦𝘥𝘪𝘢𝘵𝘦().
• 𝐂𝐥𝐨𝐬𝐞 𝐂𝐚𝐥𝐥𝐛𝐚𝐜𝐤𝐬 → Runs cleanup callbacks like socket.on('close').
• 𝐌𝐢𝐜𝐫𝐨𝐭𝐚𝐬𝐤 𝐐𝐮𝐞𝐮𝐞 (𝐛𝐞𝐭𝐰𝐞𝐞𝐧 𝐞𝐯𝐞𝐫𝐲 𝐩𝐡𝐚𝐬𝐞) → Executes 𝘱𝘳𝘰𝘤𝘦𝘴𝘴.𝘯𝘦𝘹𝘵𝘛𝘪𝘤𝘬() and resolved Promises before moving to the next phase.

𝐖𝐡𝐚𝐭 𝐓𝐡𝐢𝐬 𝐌𝐞𝐚𝐧𝐬 𝐏𝐫𝐚𝐜𝐭𝐢𝐜𝐚𝐥𝐥𝐲
• If you run heavy CPU code → the loop can’t move to the next phase
• If the poll phase is blocked → no new requests are processed
• If you flood microtasks → timers and I/O get delayed
• Async I/O doesn’t block — but CPU work does

If your Node.js app feels “slow,” measure event loop delay before blaming the database or framework.

#NodeJS #JavaScript #BackendDevelopment #V8 #SystemDesign #PerformanceEngineering #EventLoop
-
Understanding libuv in Node.js: The Hidden Engine Every Backend Developer Should Master | Skill Boosters — Notes #6

Most developers use Node.js. But very few truly understand what makes it scalable.

Node.js is single-threaded. So how does it handle:
• Thousands of concurrent users?
• Non-blocking file operations?
• Async networking?
• Timers and background tasks?

The answer is simple — but powerful: 👉 libuv

Node.js works because:
• V8 executes your JavaScript
• libuv handles asynchronous I/O
• The Event Loop coordinates everything

libuv provides:
✔ Thread Pool (default 4 threads)
✔ File system handling
✔ DNS & crypto operations
✔ TCP/HTTP networking
✔ Event loop implementation

Once you understand libuv:
• The “magic” of Node.js disappears
• Performance bottlenecks become easier to debug
• Blocking-code mistakes reduce
• System design decisions improve

If you're building APIs, microservices, or high-concurrency backend systems… understanding libuv isn’t optional.

Link: https://lnkd.in/duDjvccZ

👇 Let’s discuss.

#Nodejs #BackendDevelopment #JavaScript #EventLoop #SystemDesign #SoftwareEngineering
-
🚀 NestJS Request Lifecycle — What Really Happens to Every Incoming Request?

If you’re building APIs with NestJS, understanding the request lifecycle is critical for writing clean authentication, validation, logging, and error-handling logic.

📥 Incoming Request Flow

1. Middleware
The first layer that executes. Used for logging, modifying request objects, parsing tokens, etc. Runs before guards.

2. Guards
Determine whether the request should proceed. The best place for authentication & authorization logic. If a guard returns false, the request stops here.

3. Interceptors (Before Handler)
Interceptors wrap around the route handler. They execute logic before the handler runs (e.g., logging, caching, performance tracking).

4. Pipes
Pipes handle validation and transformation. This is where DTO validation (class-validator) and transformation (class-transformer) happen. If validation fails → an exception is thrown.

5. Controller → Route Handler
Your actual business logic executes here. Services are called. Database operations run. Data is processed.

📤 Outgoing Response Flow

6. Interceptors (After Handler)
Interceptors can transform or format the response before sending it back to the client. Example: wrapping responses in a standard API format.

7. Exception Filters (If an Error Occurs)
If any error is thrown in the lifecycle, exception filters catch it and shape the final error response.

💡 Important Detail Developers Miss:
Interceptors execute twice:
• Before the handler (request phase)
• After the handler (response phase)
This makes them extremely powerful for logging, caching, and response mapping.

🔥 Real-World Example:
Request → Middleware logs request → Guard validates JWT → Pipe validates DTO → Controller processes logic → Interceptor formats response → Response sent

Understanding this flow makes debugging easier, improves architecture decisions, and prevents mixing responsibilities. If you're serious about scalable backend systems, mastering the request lifecycle is non-negotiable.

Official docs: https://lnkd.in/gxfvSqyC

Are you using global guards and interceptors in your NestJS apps?

#nestjs #nodejs #backenddevelopment #javascript #softwareengineering #api #webdevelopment
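The ordering is easier to internalize as code. A framework-free simulation in plain JavaScript (the function names are illustrative, not real Nest APIs):

```javascript
// Each layer records when it runs; the interceptor wraps the rest of the
// chain, so it fires both before and after the handler.
const steps = [];

const middleware  = (req, next) => { steps.push('middleware'); return next(req); };
const guard       = (req, next) => { steps.push('guard'); return next(req); };
const pipe        = (req, next) => { steps.push('pipe'); return next(req); };
const handler     = ()          => { steps.push('handler'); return { data: 'ok' }; };
const interceptor = (req, next) => {
  steps.push('interceptor:before');
  const res = next(req);           // everything inside the wrap runs here
  steps.push('interceptor:after');
  return { formatted: res };       // e.g. a standard API envelope
};

// Compose in the documented order:
// middleware → guard → interceptor → pipe → handler → interceptor (after)
const handleRequest = (req) =>
  middleware(req, (r1) =>
    guard(r1, (r2) =>
      interceptor(r2, (r3) =>
        pipe(r3, (r4) => handler(r4)))));

console.log(handleRequest({ url: '/billing' }), steps);
```

Note how `interceptor:after` runs last even though interceptors are registered before pipes: wrapping the chain is what makes the "executes twice" behavior fall out naturally.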
-
After weeks of building, I'm excited to share CodeReview — a self-hosted, real-time collaborative code review platform built from scratch.

What it does: Developers can submit code, get instant automated analysis, leave line-by-line comments, and receive live notifications — no polling, no page refreshes.

The architecture I'm most proud of:
- The backend runs on 5 independent Go microservices — User, Review, Analysis, Notification, and an API Gateway — all communicating asynchronously via RabbitMQ.
- When a review is submitted, an event cascades through the pipeline: triggering static analysis (10+ rules, including hardcoded-secret detection), persisting results, and broadcasting a live update over WebSocket.
- Service-to-service communication happens over gRPC, with the API Gateway as the single HTTP/WebSocket entry point.
- The frontend is a Vue.js 3 + TypeScript SPA with Pinia, Tailwind CSS v4, and auto-reconnecting WebSocket support.

A few deliberate constraints I imposed on myself:
- No ORM — raw SQL throughout
- No generated gRPC stubs — manual Protobuf definitions
- Stateless JWT auth across all services
- Full Docker Compose setup for MySQL + RabbitMQ

This was a genuine deep dive into distributed-systems design, real-time communication, and Go microservices patterns. Building something end-to-end — from auth to event-driven pipelines to a live UI — taught me more than any tutorial could.

Check it out: https://lnkd.in/g3SHmfQB

#GoLang #Microservices #VueJS #RabbitMQ #gRPC #WebSocket #DistributedSystems #BackendDevelopment
-
⏳ JavaScript is about to fix one of its oldest design flaws: time handling.

The Temporal API is getting closer to being enabled by default in Node.js. And this isn’t just a syntax improvement — it’s a structural change in how we model time in backend systems.

For years, we’ve relied on Date, which is:
🔁 Mutable
🌍 Implicitly timezone-dependent
⚠️ Easy to misuse
🧩 Hard to reason about in distributed systems

In production, that leads to:
⏰ DST-related bugs
💳 Incorrect financial calculations
📜 Log inconsistencies
🗓 Scheduling drift
🌐 Cross-region edge cases

Temporal introduces:
🧱 Immutable time objects
🌍 Explicit timezones
➕ First-class date/time arithmetic
🧭 Clear separation between absolute and calendar time

For backend engineers, this matters more than most language features. Time bugs are expensive. They’re silent. And they surface when it’s already too late.

If Temporal becomes the default in Node.js, it won’t just modernize APIs — it will improve reliability at scale.

The real question isn’t whether Temporal is better. It’s whether teams are ready to rethink how they model time.

https://lnkd.in/emjWFMh7
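Two of those Date footguns fit in a few lines. The Temporal equivalents are shown only as comments, since the API may still be behind a flag or unavailable in your Node version:

```javascript
// Footgun 1: months are zero-indexed. new Date(2024, 0, 31) is January 31.
const d = new Date(2024, 0, 31);

// Footgun 2: Date is mutable, and out-of-range values silently roll over.
d.setMonth(1); // "February 31" silently becomes March 2; d is changed in place
console.log(d.getMonth(), d.getDate()); // 2 2  (month index 2 = March)

// Temporal, by contrast, is immutable and clamps instead of rolling over:
//   Temporal.PlainDate.from('2024-01-31').add({ months: 1 })
//     → 2024-02-29, and the original PlainDate is untouched
```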
This clears up the “Node is single-threaded” myth really well: async doesn’t mean infinite parallelism. Understanding the libuv thread pool explains a lot of those “everything is async but still slow” production issues.