Node.js event loop: the thing nobody explains properly.

After two years of writing Node.js, I finally understand why my API was slow. The problem? I was blocking the event loop without knowing it.

🔄 How Node.js actually works:

1. Event loop (single-threaded)
→ Handles I/O operations
→ Non-blocking by default
→ Can process thousands of concurrent requests

2. Worker pool (multi-threaded)
→ Handles CPU-intensive tasks
→ File system operations
→ Crypto operations

⚠️ What blocks the event loop:
❌ Synchronous operations:
- JSON.parse() on huge payloads
- crypto.pbkdf2Sync()
- Heavy regex operations
- Large loops (1M+ iterations)

✅ What doesn't block:
- Database queries (async I/O)
- HTTP requests (async I/O)
- File reads with fs.promises
- setTimeout / setInterval

✅ The fix: move heavy work to
- Worker threads
- Child processes
- External queue systems

Lesson: Node.js is fast for I/O, not for CPU work.

What's your biggest Node.js performance lesson? 👇

#SoftwareDevelopment #JavaScript #NodeJS #EventLoop #SoftwareEngineering #Performance #Programming #BackendDevelopment #Coding #Async #WorkerThreads
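A minimal sketch of the blocking effect described above (illustrative, not from the original post): a timer scheduled for 0 ms cannot fire until the synchronous code in front of it yields back to the event loop.

```javascript
// A 0 ms timer cannot fire while synchronous code runs: the callback
// waits until the busy-wait below yields back to the event loop.
const start = Date.now();
let observedDelay = 0;

setTimeout(() => {
  observedDelay = Date.now() - start;
  console.log(`timer scheduled for 0 ms fired after ~${observedDelay} ms`);
}, 0);

// Stand-in for a heavy synchronous task: block the loop for ~50 ms.
while (Date.now() - start < 50) { /* blocking the event loop */ }
```

Every concurrent request served by this process would wait out that same 50 ms; that is the whole problem in miniature.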
Node.js Performance: Avoid Blocking the Event Loop with Worker Threads
-
For years, the gold standard for Node.js performance was simply "don't block the event loop." While that's still core, the modern Node.js ecosystem has evolved far beyond it. If you're building high-scale applications today, you should be looking at these three areas:

* Worker threads for CPU-heavy tasks: We no longer have to offload everything to a Python or Go microservice. For heavy data processing or image manipulation, worker_threads lets us use multi-core systems effectively within the same codebase.

* Built-in test runner and watch mode: Say goodbye to the overhead of external dependencies for basic development cycles. Native support for testing and --watch mode has made the developer experience leaner and faster.

* The rise of alternative runtimes: Whether you're sticking with Node or eyeing Bun/Deno, the competition has pushed Node.js to ship major performance gains in ESM loading.
-
𝐀𝐫𝐞 𝐲𝐨𝐮 𝐬𝐭𝐢𝐥𝐥 𝐥𝐞𝐭𝐭𝐢𝐧𝐠 𝐓𝐲𝐩𝐞𝐒𝐜𝐫𝐢𝐩𝐭 𝐝𝐞𝐟𝐚𝐮𝐥𝐭 𝐭𝐨 `any` 𝐰𝐡𝐞𝐧 𝐚𝐜𝐜𝐞𝐬𝐬𝐢𝐧𝐠 𝐨𝐛𝐣𝐞𝐜𝐭 𝐩𝐫𝐨𝐩𝐞𝐫𝐭𝐢𝐞𝐬 𝐝𝐲𝐧𝐚𝐦𝐢𝐜𝐚𝐥𝐥𝐲? 𝐓𝐡𝐞𝐫𝐞'𝐬 𝐚 𝐛𝐞𝐭𝐭𝐞𝐫 𝐰𝐚𝐲 𝐭𝐨 𝐬𝐭𝐚𝐲 𝐭𝐲𝐩𝐞-𝐬𝐚𝐟𝐞.

One common challenge in TS is creating generic functions that access properties of an object without sacrificing compile-time type safety. Many resort to `any` or complex overloads, losing the benefits of TypeScript.

The trick is combining generics, `keyof`, and `extends` to tell the compiler exactly what to expect. Here's a simple pattern for a type-safe `getProperty` function:

```typescript
function getProperty<T, K extends keyof T>(obj: T, key: K): T[K] {
  return obj[key];
}

interface User {
  id: number;
  name: string;
  email: string;
}

const user: User = { id: 1, name: 'Alice', email: 'alice@example.com' };

const userName = getProperty(user, 'name'); // inferred as string
const userId = getProperty(user, 'id');     // inferred as number

// getProperty(user, 'address');
// Compiler error: '"address"' is not assignable to 'keyof User'.
// This is exactly what we want.
```

This pattern ensures that `key` is always a valid property of `T`, and the return type is correctly inferred as `T[K]`. No more runtime surprises or `any` casts! It's clean, reusable, and powerfully type-safe.

What's your go-to TypeScript trick for maintaining type safety with dynamic data? Share in the comments!

#TypeScript #FrontendDevelopment #SoftwareEngineering #React #WebDev
-
Recently I was exploring something while working with APIs: why do almost all modern APIs send data in JSON format?

At first I thought it was just a random standard developers follow, but the more I explored it, the more it made sense. Whenever a server sends data to the frontend, it needs a format both machines can easily understand, and that's where JSON (JavaScript Object Notation) comes in.

A typical API response might look like this:

```
{
  "name": "John",
  "email": "john@email.com",
  "role": "admin"
}
```

Just simple key → value pairs. And that simplicity is exactly why JSON became so popular.

✔ Easy for humans to read
✔ Easy for machines to parse
✔ Lightweight and fast
✔ Works with almost every programming language

That's why most REST APIs send responses in JSON.

#WebDevelopment #BackendDevelopment #APIs #JSON #NodeJS #FullStackDevelopment #SoftwareEngineering #LearnInPublic
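The round trip the post describes can be sketched in a few lines (the object shape is the post's own example): the server serializes a plain object to text, and any client, in any language, parses it back into native types.

```javascript
// JSON round trip: object -> wire text -> parsed object.
const response = { name: 'John', email: 'john@email.com', role: 'admin' };

const wire = JSON.stringify(response); // what actually travels over HTTP
console.log(wire);                     // a plain string: {"name":"John",...}

const parsed = JSON.parse(wire);       // what the frontend receives
console.log(parsed.role);              // 'admin'
```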
-
🚀 TypeScript is about to get 10x faster, and the last JS-based version just dropped its RC.

Big news this week for every TypeScript developer. 👀

TypeScript 6.0 RC landed on March 6. GA drops on March 17. But here's the real story: this is the last TypeScript version written in TypeScript. After 6.0, everything changes.

🔥 What's happening?
Microsoft is rewriting the entire TypeScript compiler in Go (yes, Go 🐹), promising:
- ~10x faster builds
- Near-instant incremental compilation
- ~50% memory reduction
- Much faster editor startup

What's new in 6.0 itself?
- RegExp.escape: finally, safe regex escaping built in
- New Temporal API types (ES2026 is coming 🎉)
- getOrInsert / getOrInsertComputed for Map & WeakMap
- Subpath imports starting with #/
- asserts keyword deprecated in favor of with
- Better type inference for generic function expressions

What should you do now?
- Install the RC: npm install -D typescript@rc
- Run with the --deprecation flag to catch anything that'll break in 7.0
- Start thinking about your build pipeline; things will change

6.0 is the bridge. 7.0 is the destination.

The TypeScript team has been quietly heads-down on this rewrite for over a year. The speed gains are real and have already been tested on large codebases.

Are you excited about the Go-powered future of TypeScript, or does rewriting a compiler in a different language make you nervous? 👇

#TypeScript #JavaScript #WebDevelopment #Frontend #SoftwareEngineering #DeveloperTools #NodeJS #Programming #TechNews #OpenSource
-
I've spent the last few days getting my hands dirty with the Node.js fs module. Understanding how to interact with the server's file system is a game-changer for building scalable backend applications.

In this deep dive, I explored:
- Asynchronous vs. synchronous: when to use readFile vs. readFileSync (and why blocking the main thread is a big no-no!)
- CRUD operations: writing (writeFile), reading (readFile), and updating (appendFile) data
- File management: deleting files with unlink and organizing directories with mkdir

Check out my code snippets and experiments here:
🔗 GitHub: https://lnkd.in/dpSXCNxu

#NodeJS #WebDevelopment #Backend #CodingJourney #Javascript #SoftwareEngineering
-
Most developers think Dependency Injection in NestJS is just framework magic.

It's not. It's architectural discipline.

On the left: control is tight. Everything creates everything.
On the right: responsibilities are clear. Dependencies are declared. The container orchestrates.

That shift, from creating dependencies to declaring dependencies, is where clean architecture begins.

The real question is: if your database changes tomorrow, does your business logic panic? Or does it calmly accept a new provider?

Dependency Injection isn't about syntax. It's about designing for change.

#NestJS #BackendDevelopment #CleanCode #SoftwareArchitecture #DependencyInjection
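The principle can be sketched without NestJS decorators at all (the class names here are hypothetical, invented for illustration): the service declares what it needs, and only the wiring code knows the concrete provider. Swap the database and the business logic never changes.

```javascript
// Constructor injection, framework-free: dependencies are declared,
// not created, by the class that uses them.
class PostgresUserRepo {
  findById(id) { return { id, source: 'postgres' }; }
}

class InMemoryUserRepo {
  findById(id) { return { id, source: 'memory' }; }
}

class UserService {
  constructor(repo) { this.repo = repo; } // declared, not created
  getUser(id) { return this.repo.findById(id); }
}

// Wiring: the only place that knows about concrete providers.
// In NestJS, the DI container plays this role via module providers.
const service = new UserService(new InMemoryUserRepo());
console.log(service.getUser(1).source); // 'memory'
```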
-
Same pattern, three languages.

I've been implementing the same pattern across Go, C#, and Rust: a worker that reads tasks from a channel, executes them, and forwards results. The differences are telling.

Go gives you CSP for free. `select` over a context and a channel is idiomatic, readable, and nearly impossible to get wrong. The runtime handles the scheduling; you just describe intent.

C# gets you there with `System.Threading.Channels` and `async/await`, but the ceremony is real. Two nested loops (`WaitToReadAsync` + `TryRead`) to drain efficiently, `OperationCanceledException` as your cancellation signal, and you're manually bridging the gap between cooperative cancellation and channel semantics. Solid, but verbose.

Rust is the most honest of the three. `tokio::select!` mirrors Go's `select`, but the type system makes implicit costs explicit (boxed futures, pinned trait objects, `Send` bounds). `TaskCallback` alone is a wall of angle brackets. You're not fighting the language; you're reading a contract in full.

Three observations after writing this:
- Go wins on readability. The pattern fits in your head.
- C# wins on ecosystem integration. Structured concurrency and the Azure SDK fit naturally around this.
- Rust wins on correctness guarantees. The compiler rejects entire classes of bugs the other two defer to runtime or convention.

There's no universal winner. Pick based on where your failure modes live:
- Throughput + simplicity → Go
- Enterprise integration → C#
- Embedded/systems/safety → Rust

#softwaredevelopment #golang #csharp #rust #concurrency #systemsdesign
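For readers following along in this feed's home language, a rough JavaScript analogue of the same worker shape (an async generator stands in for the channel; the doubling task and names are illustrative, not from the post):

```javascript
// Channel-reading worker, JS flavor: an async generator plays the
// channel; the worker drains it, runs each task, forwards results.
async function* channel(tasks) {
  // In real code, producers would push into this asynchronously.
  for (const t of tasks) yield t;
}

async function worker(ch, results) {
  for await (const task of ch) { // the receive loop
    results.push(task * 2);      // execute and forward
  }
}

const results = [];
const done = worker(channel([1, 2, 3]), results);
done.then(() => console.log(results)); // [ 2, 4, 6 ]
```

No `select` over cancellation here: JavaScript's single-threaded model sidesteps the scheduling questions the post compares, which is itself a fourth data point.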
-
TypeScript just rewrote its compiler in Go. A 40-second build now takes 4 seconds.

In March 2025, Anders Hejlsberg announced the TypeScript team is porting tsc to native Go. The project, tsgo, delivers up to 10x faster type-checking on real codebases.

Most developers don't realize what happens when they run tsc. The compiler re-parses the entire project in a single JavaScript thread. No parallelism. No native speed. Just V8 doing its best with a million-line compiler codebase.

```
// Current tsc: single-threaded, JIT-compiled
$ time tsc --noEmit
// 650-file monorepo: 39.6 seconds
// Cannot parallelize type resolution
```

The Go port changes the equation:

```
// tsgo: multi-threaded, natively compiled
$ time tsgo --noEmit
// Same monorepo: 17.5s cold, 1.3s warm
// Parallel parsing across all CPU cores
```

Why Go instead of Rust? TypeScript's compiler relies on deeply shared mutable data structures with circular references. Rust's ownership model would have forced a full architecture redesign. Go's garbage collector handles this naturally: maximum speed with minimum risk.

A developer benchmarking tsgo on a 650-file SvelteKit monorepo measured:
• Cold check: 17.5s vs 39.6s (2.3x faster)
• Warm incremental: 1.3s vs 39.4s (30x faster)
• Iterative rebuild: 2.5s vs 39.8s (16x faster)

When this doesn't apply:
• Projects under 50 files won't feel a meaningful difference
• Language Server integration follows a separate migration timeline
• Pipelines bottlenecked by I/O or bundling won't see the full 10x gain

Microsoft has tested tsgo internally, and the preview is already available. Early ecosystem tools like svelte-check-rs are building directly on the native compiler.

This isn't an experiment. The TypeScript compiler will ship as a native Go binary.

What's your biggest tsc pain point: type-checking speed, editor lag, or CI build times?

#TypeScript #WebDev #DeveloperTools #Performance
-
⚠️ Your Express route is freezing every user's request, and you might not even know it.

Node.js runs on a single thread. One heavy synchronous operation in a route = the entire event loop blocked for ALL concurrent users.

Here's how to fix it:

🔵 Worker threads
→ Offload CPU-heavy tasks (parsing, encryption, ML inference)
→ Runs parallel to the main thread, with shared memory
→ Use a thread pool for high-traffic routes

🔵 Child processes
→ For isolated or legacy work (Python scripts, shell commands)
→ Full process isolation: crashes won't affect your server

🔵 setImmediate chunking
→ Break large array processing into smaller chunks
→ Yields back to the event loop between chunks
→ No extra dependencies needed

🔵 Job queues (Bull / BullMQ)
→ For anything that takes more than 1–2 seconds
→ Respond instantly with a jobId, process in the background
→ Built-in retries, priority, scheduling

🔵 Streaming
→ For large datasets: never buffer everything in memory
→ Pipe DB/file streams directly to the response

💡 The rule that saves you: if it takes more than a few milliseconds of CPU time, get it OFF the main thread.

And remember:
✅ I/O work (DB queries, file reads) = just use async/await; Node handles it natively
❌ CPU work (computation, transforms) = needs a worker or queue

Which of these have you used in production? 👇

#NodeJS #ExpressJS #BackendDevelopment #JavaScript #SoftwareEngineering #WebDevelopment
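The setImmediate chunking option above can be sketched as follows (the `sumInChunks` helper and the chunk size are illustrative assumptions): bounded synchronous work per slice, then a yield back to the event loop so other requests keep flowing.

```javascript
// Process a large array in slices, yielding between slices via
// setImmediate so pending I/O callbacks can run in between.
function sumInChunks(items, chunkSize, done) {
  let i = 0;
  let total = 0;
  function step() {
    const end = Math.min(i + chunkSize, items.length);
    for (; i < end; i++) total += items[i]; // bounded synchronous work
    if (i < items.length) {
      setImmediate(step);                   // yield, then continue
    } else {
      done(total);
    }
  }
  step();
}

const data = Array.from({ length: 1_000_000 }, (_, n) => n % 10);
sumInChunks(data, 10_000, (total) => console.log('sum =', total));
```

The trade-off: total wall-clock time goes up slightly, but no single turn of the event loop is hogged for more than one chunk's worth of CPU.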
-
Rebuilt wget from scratch in Go. Not a wrapper: raw net/http, goroutines, and recursive HTML parsing with golang.org/x/net/html.

What I implemented:
- -B → background mode, stdout piped to a log file
- -O / -P → custom filename and save path
- --rate-limit → bandwidth throttle with k / M suffix support
- -i → concurrent batch downloads via goroutines
- --mirror → recursive site crawler; parses a[href], link[href], img[src] and rebuilds the full directory structure under the domain folder
- --convert-links → rewrites all asset URLs for offline use
- -R / -X → reject file types or exclude paths during the crawl

The progress bar outputs transferred size, %, speed, and ETA: the same UX as the original binary.

Tools you use every day are more complex than they look. Building one from zero is the fastest way to find out.

#golang #systems #cli #http #concurrency #01edu #zone01