Why Node.js can handle thousands of requests with a single thread 🤔

The answer is not "because it's fast". It's because of how the event loop works. Node.js offloads I/O operations (DB calls, APIs, file reads) to the system and continues processing other requests instead of waiting.

But here's the catch: 👉 if you write CPU-heavy or blocking code, you lose this advantage completely.

Examples:
• Large synchronous loops
• Heavy JSON processing
• Blocking file operations

Solution:
• Use async patterns
• Offload heavy tasks to workers/queues
• Keep the event loop free

Key insight: Node.js is powerful, but only if you respect the event loop.

#NodeJS #Backend #EventLoop #Performance
Node.js Event Loop: Thousands of Requests with a Single Thread
More Relevant Posts
I thought my backend was fast. Then I actually stress-tested it.

I've been building V-Stream, a multi-tenant video engine. Locally, everything felt snappy. But I wanted to see what happens when 100 users hit it at once. Here is what the data showed:

The Reality Check: I hooked up Prometheus and ran Autocannon. Under load, my API latency spiked from a fast 25ms to a massive 8.4 seconds.

The Root Cause: A classic Node.js trap. CPU-heavy tasks like bcrypt hashing and video analysis were causing event loop saturation. While one user's video was processing, the single thread was blocked, forcing every other tenant to wait in line. "It works on my machine" doesn't scale.

The Architecture Shift: I realized I couldn't just write better code; I had to completely decouple the system.
• Integrated an event-driven background queue using BullMQ and Redis.
• Spun up standalone worker processes to handle the heavy video analysis.
• The main API now just accepts the request, pushes it to the queue, and instantly frees up the thread.

The Result: Even when the background workers are at 100% capacity, the main API stays responsive, locking in a stable ~25ms latency.

Decoupling fixed my CPU bottleneck, but introduced distributed network risks. Next up: building circuit breakers to handle API retries gracefully without duplicating data.

Check out the full architecture diagram and code here: https://lnkd.in/g_-JZVn8

Has anyone else had to untangle an event loop bottleneck recently?

#Nodejs #SystemDesign #BackendEngineering #SoftwareEngineering #DistributedSystems
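The accept → enqueue → respond pattern the post describes can be sketched without the real infrastructure. The author's setup uses BullMQ backed by Redis; since that needs a running Redis, this is an in-memory stand-in for the same shape, and `analyzeVideo` plus the job format are illustrative:

```javascript
// Hedged in-memory sketch of the decoupling pattern: the API enqueues
// and returns immediately; the "worker" drains the queue off the hot path.
// In the real system this would be a BullMQ Queue + Worker over Redis.
const jobs = [];
const results = new Map();

function enqueue(job) {
  jobs.push(job);      // the API handler returns right after this line
  setImmediate(drain); // heavy work happens on a later event-loop turn
  return { accepted: true, id: job.id };
}

function drain() {
  const job = jobs.shift();
  if (!job) return;
  results.set(job.id, analyzeVideo(job)); // stand-in for the worker process
}

function analyzeVideo(job) {
  return { id: job.id, status: "analyzed" }; // placeholder for real analysis
}
```

The key property is that `enqueue` never pays for `analyzeVideo`, which is what keeps the API's latency flat while workers are saturated.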
I thought Node.js could handle high traffic easily… until this happened.

I built an API. The flow was simple:
1. Receive request
2. Process data
3. Send response

Everything worked fine in testing. ✅ Fast responses. No issues.

But in production? The server started freezing.
- Requests got delayed
- Some APIs never responded
- CPU usage went high

No crashes. Still unusable.

💡 THE INVESTIGATION:
I checked the async code. Everything looked fine. Then I found it… 👉 a heavy synchronous operation.

⚠️ THE MISTAKE:
const result = heavyComputation(); // blocking

👉 This blocked the event loop. Which meant:
- Other requests had to wait
- The server handled requests one by one
- Performance dropped drastically

⚙️ THE FIX:
Moved the heavy work off the main thread.
const result = await runInWorkerThread();

Also:
- Used async processing
- Optimized the heavy logic

⚡ THE IMPACT:
- The server became responsive again
- Requests were handled smoothly
- Performance improved significantly

📌 THE REAL LESSON:
Node.js runs your JavaScript on a single thread. 👉 If you block the event loop… you block everything.

🧠 WHAT I LEARNED:
- Avoid CPU-heavy synchronous code
- Use worker threads for heavy tasks
- Always think about event loop impact

👇 Have you ever accidentally blocked the event loop?

#nodejs #backend #performance #eventloop #programming
⚠️ Idempotency in Async Systems: What's Your Approach?

Most backend systems rely on async task processing (queues, retries, webhooks)… but here's the catch: 👉 tasks are not guaranteed to run only once. Retries, duplicate events, network timeouts: all of these are normal in production.

If your system isn't idempotent:
❌ Duplicate operations
❌ Multiple order creations
❌ Data inconsistency

💡 Common approaches I've seen:
* Idempotency keys: a unique ID per operation (requestId / transactionId)
* Caching processed task IDs, or DB unique constraints
* Deduplication tables or event logs
* Upserts instead of inserts
* Designing APIs to be safe to retry

But every system has its own challenges… 👉 How do YOU handle idempotency in async workflows? Do you rely on DB constraints, caching, or something else? Would love to hear real-world strategies from others 👇

#BackendEngineering #SystemDesign #DistributedSystems #NodeJS #TechDiscussion
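The idempotency-key approach can be sketched in a few lines. Here a `Map` stands in for a dedup table or Redis entry, and `createOrder` is an illustrative operation:

```javascript
// Hedged sketch of idempotency-key deduplication: the first call runs
// the operation and stores its result; any retry with the same key
// replays the stored result instead of running the operation again.
const processed = new Map(); // idempotencyKey -> stored result

function handleOnce(key, operation) {
  if (processed.has(key)) return processed.get(key); // duplicate: replay
  const result = operation();
  processed.set(key, result);
  return result;
}

let ordersCreated = 0;
const createOrder = () => ({ orderId: ++ordersCreated }); // illustrative
```

Calling `handleOnce("req-42", createOrder)` twice returns the same order object and creates only one order, which is exactly the property a retried webhook needs.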
Node.js performance issues rarely come from "slow logic". They come from blocked event loops.

A subtle mistake:

app.get("/report", async (req, res) => {
  const data = heavyComputation(); // CPU-heavy
  res.json(data);
});

Looks fine. But under load:
• requests queue up
• latency spikes hard
• CPU hits 100% on a single thread

Why? Because Node isn't slow. It's single-threaded where it matters.

Experienced systems offload work:

const { Worker } = require("worker_threads");

Or:
• move heavy tasks to queues (Bull / RabbitMQ)
• cache computed results aggressively
• stream instead of blocking

The key shift → Don't just write async code. Protect the event loop at all costs.

Fast APIs aren't about speed. They're about not blocking everyone else.

#NodeJS #BackendEngineering #PerformanceOptimization #SystemDesign #SoftwareArchitecture
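"Cache computed results aggressively" can be as small as a TTL map in front of the heavy function. A hedged sketch (the key names and TTL are illustrative):

```javascript
// Hedged sketch of a tiny TTL cache guarding a CPU-heavy computation:
// the cost is paid once per key per TTL window instead of per request.
const cache = new Map(); // key -> { value, at }

function cached(key, compute, ttlMs = 60_000) {
  const hit = cache.get(key);
  if (hit && Date.now() - hit.at < ttlMs) return hit.value; // serve cached
  const value = compute();                                   // pay once
  cache.set(key, { value, at: Date.now() });
  return value;
}
```

With this in front of the /report handler above, most requests under load would never reach `heavyComputation()` at all.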
I spent 6+ hours debugging a production issue… and the scary part? It wasn't a bug in my code. 😅

My Node.js API suddenly became slow under load.

📉 What I observed:
→ Response time jumped from 200ms → 3s
→ CPU usage was completely normal 🤯
→ Logs showed… nothing

At first, I assumed the usual suspects:
→ Database bottleneck
→ Network latency
→ Inefficient queries

But none of them were the problem.

👉 The real issue: Connection Pool Exhaustion

I wasn't releasing DB connections properly. Under load:
→ All connections got occupied
→ Incoming requests were stuck waiting
→ The system looked "slow"… not "broken"

That's what made it tricky.

💡 What I fixed:
→ Ensured every connection is released after use
→ Added monitoring on connection pool limits
→ Implemented timeouts + a retry strategy

💭 What this taught me: not every issue throws errors. Not every failure crashes your system. Sometimes your system is:
👉 Alive
👉 Healthy-looking
👉 But silently waiting

And that's even more dangerous. 🚨

New rule I follow: before blaming code, always check:
→ Connection pools
→ Memory usage
→ Event loop delays

Because performance bugs don't shout… they whisper.

Have you ever debugged something that looked fine but wasn't? 👀

#backend #nodejs #systemdesign #webdevelopment #performance #mern #debugging
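The "ensure every connection is released" fix usually comes down to a try/finally around the work. A hedged sketch with a toy pool standing in for a real driver pool (e.g. pg's `Pool`); the class and names are illustrative:

```javascript
// Hedged sketch of the "always release" rule. The finally block is the
// fix: the connection returns to the pool even when the query throws,
// so the pool can never be drained by failed requests.
class ToyPool {
  constructor(size) { this.free = size; }
  acquire() {
    if (this.free === 0) throw new Error("pool exhausted");
    this.free--;
    return { release: () => { this.free++; } };
  }
}

async function withConnection(pool, work) {
  const conn = pool.acquire();
  try {
    return await work(conn);
  } finally {
    conn.release(); // the missing piece in the buggy version
  }
}
```

Wrapping every DB call in a helper like `withConnection` makes the leak structurally impossible rather than a per-call-site discipline.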
We reduced API response time from 3 minutes to 3 seconds. 🚀 Not by adding more servers. Not by rewriting everything. Here's what actually changed:

1. Identified the bottleneck first
Most engineers jump to solutions. We profiled the system and found 80% of the time was spent in sequential DB calls that could run in parallel.

2. Async job handling > synchronous blocking
Moved heavy operations off the request lifecycle. Users got instant responses; the heavy lifting happened in the background.

3. Smart caching at the right layer
Not everything needs to hit the DB. Caching frequently-read data at the service layer cut redundant queries drastically.

4. DB query optimization
N+1 queries are silent killers. One joined query replaced dozens of round trips.

The lesson? System design isn't about fancy architecture on day one. It's about understanding WHERE your system hurts, and fixing that precisely. Bad performance is always a design decision someone made (or skipped).

What's the biggest performance win you've shipped? Drop it below 👇

#SystemDesign #BackendEngineering #SoftwareEngineering #FullStack #NodeJS #WebPerformance
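Point 1 — turning sequential DB calls into parallel ones — is usually a one-line change with `Promise.all`. A hedged sketch where `fetchUser` and `fetchOrders` are illustrative async calls with a fixed delay:

```javascript
// Hedged sketch: sequential awaits pay the latencies in series;
// Promise.all pays only the slowest one, since the calls are independent.
const delay = (ms, value) => new Promise((r) => setTimeout(() => r(value), ms));
const fetchUser = () => delay(50, { id: 1 });        // illustrative DB call
const fetchOrders = () => delay(50, [{ orderId: 7 }]); // illustrative DB call

async function sequential() {
  const user = await fetchUser();     // ~50ms
  const orders = await fetchOrders(); // +~50ms, waits for the first
  return { user, orders };
}

async function parallel() {
  // ~50ms total: both calls are in flight at once
  const [user, orders] = await Promise.all([fetchUser(), fetchOrders()]);
  return { user, orders };
}
```

This only applies when the calls don't depend on each other's results; dependent calls still have to be sequential (or reshaped into one joined query, as in point 4).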
You think your code is async… but your API is still slow. That usually means one thing: something is blocking the event loop.

const result = await fetchData();
process(result);

The API call is async. But process(result) might not be. If it's CPU-heavy, it blocks other requests from executing. So everything looks correct in code… but performance drops under load.

Async helps with I/O. It doesn't protect you from CPU work.

#NodeJS #AsyncProgramming #EventLoop #BackendEngineering #PerformanceOptimization
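When the CPU work can't be moved to a worker, one mitigation is to process in chunks and yield between them so other requests can interleave. A hedged sketch; `processChunked` and the chunk size are illustrative:

```javascript
// Hedged sketch: do a bounded slice of CPU work per event-loop turn,
// yielding with setImmediate between slices so pending I/O callbacks
// and other requests get a chance to run.
function processChunked(items, workFn, chunkSize = 1000) {
  return new Promise((resolve) => {
    const out = [];
    let i = 0;
    function step() {
      const end = Math.min(i + chunkSize, items.length);
      for (; i < end; i++) out.push(workFn(items[i])); // bounded work
      if (i < items.length) setImmediate(step);        // yield, then continue
      else resolve(out);
    }
    step();
  });
}
```

Total CPU time is unchanged; what improves is latency for everyone else, because no single request holds the loop for the whole computation.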
Presenting a pattern today: Pub/Sub. In most applications, unless you have a framework that already implements it, a library like RxJS with observables, or a built-in event system, you will reach a point where you want custom events so classes don't have to keep querying each other for data. Pub/Sub comes into play as the pattern connecting the one that wants the data (the subscriber) with the one that publishes it whenever the data is updated. Usually you want it to be a singleton, to reduce the risk of ending up with two PubSub instances once the codebase gets big; for the sake of this example I stuck to a single instance in the code. You can make it more complex: enforce types, maintain an authorized list of events, etc.
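The post's code isn't included in this feed, so here is a minimal sketch of the pattern it describes, with a single shared instance playing the singleton role; the event names and API surface are illustrative:

```javascript
// Hedged sketch of a minimal Pub/Sub: subscribers register callbacks per
// event name; publish fans the data out to every current subscriber.
class PubSub {
  constructor() { this.handlers = new Map(); } // event -> Set of callbacks
  subscribe(event, fn) {
    if (!this.handlers.has(event)) this.handlers.set(event, new Set());
    this.handlers.get(event).add(fn);
    return () => this.handlers.get(event).delete(fn); // unsubscribe handle
  }
  publish(event, data) {
    for (const fn of this.handlers.get(event) ?? []) fn(data);
  }
}

const pubsub = new PubSub(); // single shared instance (the "singleton")
```

Usage: a class calls `pubsub.subscribe("user:updated", fn)` once, and from then on gets pushed fresh data on every `pubsub.publish("user:updated", data)` instead of polling for it.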
Day 35/40 — The day I learned to read logs properly 🔍

No new features today. Just me, Docker logs, and a bug that only appeared the second time you did something.

First run → everything works perfectly
Stop → works
Start again → broken

That's the worst kind of bug. The kind that makes you question if you even understand your own code.

What was actually happening: mobile networks are not stable. Socket.io reconnects with a brand new socket ID. Your old socket, the one that joined the room, is gone. The new socket knows nothing. So when User A stopped and restarted, they were emitting into a room they were never actually in.

Three fixes across backend and frontend:
→ socket.leave() on stop (was missing: room membership leaked)
→ Redis storing the active session so new sockets can restore their room on reconnect
→ Frontend re-emitting start_location_share after every socket reconnect, using Redux state

The bug wasn't in the feature. It was in the assumption that the socket connection is permanent on mobile. It isn't.

Lesson from today: before touching any code, add logs that show you exactly what rooms each socket is in, before and after every event. I spent hours assuming the problem was logic. It was infrastructure. Debugging is just making the invisible visible.

Day 36 tomorrow: finishing location, then notifications, then message status. The finish line is close.

#buildinpublic #reactnative #nodejs #socketio #30daybuild #debugging #mobiledev
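The backend side of those fixes can be sketched independently of socket.io itself. A hedged sketch where a `Map` stands in for the Redis session store, and the socket object plus event names are illustrative stand-ins for the real socket.io handlers:

```javascript
// Hedged sketch of the reconnect fix: persist room membership outside the
// socket, so a brand-new socket (new ID after reconnect) can restore it,
// and leave() + cleanup on stop so membership never leaks.
const sessions = new Map(); // userId -> roomId (Redis in the real system)

function onStartShare(socket, userId, roomId) {
  sessions.set(userId, roomId); // survive the socket being replaced
  socket.join(roomId);
}

function onStopShare(socket, userId) {
  const roomId = sessions.get(userId);
  if (roomId) socket.leave(roomId); // the leave() that was missing
  sessions.delete(userId);
}

function onReconnect(socket, userId) {
  const roomId = sessions.get(userId); // restore for the new socket ID
  if (roomId) socket.join(roomId);
}
```

The invariant is that the session store, not the socket, is the source of truth for "who is in which room"; sockets are just the current transport for it.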