How JavaScript's Event Loop Works and Async Code Best Practices

Most JavaScript developers use asynchronous code every day. Far fewer can tell you why it works. Because I’ve been interested in this recently, here’s a short post about what’s actually happening, entirely written by a human (me)!

JavaScript is single-threaded: one call stack, one thing happening at a time. So how does it handle thousands of concurrent network requests, timers, and user events without grinding to a halt? The answer isn’t multithreading, it’s the event loop!

When you call a function, it gets pushed onto the call stack; when it returns, it gets popped off. Synchronous code is straightforward: each frame executes, resolves, and clears. The problem is that some operations (fetching data, reading files, waiting on a timer) take time. If JavaScript blocked the call stack waiting for them, your entire UI would freeze.

So instead, it delegates. When you call setTimeout or fetch, the browser hands that work off to a Web API running outside the JavaScript engine entirely, and your code keeps executing. When the Web API finishes (the timer fires, the response arrives), it doesn’t interrupt whatever is currently running. Instead, it pushes a callback onto the task queue and waits.

The event loop has one job: check whether the call stack is empty. If it is, it picks the next callback off the task queue and pushes it onto the stack. That’s it. That’s the whole mechanism.

Promises don’t use the task queue. They use the microtask queue, which has higher priority: after every task completes, the event loop drains the entire microtask queue before picking up the next task. Every resolved Promise, every awaited expression, every .then() callback goes to the microtask queue.
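To make the "only when the stack is empty" rule concrete, here’s a minimal sketch showing that a 0ms timer can’t fire while synchronous code holds the stack (the 200ms figure is just an arbitrary illustration):

```javascript
// A 0ms setTimeout doesn't mean "run now": the callback sits in the
// task queue until the call stack is empty.
const start = Date.now();

setTimeout(() => {
  // Fires well after 0ms, because the loop below held the stack.
  console.log(`timer ran after ~${Date.now() - start}ms`);
}, 0);

// Hold the call stack with ~200ms of synchronous busy-work.
while (Date.now() - start < 200) {}

console.log('synchronous work done');
// Only after this point can the event loop hand the timer callback to the stack.
```

The delay you pass to setTimeout is a minimum, not a guarantee; the event loop can’t run the callback any earlier than the moment the stack clears.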
This is why the following output might surprise you:

```javascript
console.log('start');
setTimeout(() => console.log('timeout'), 0);
Promise.resolve().then(() => console.log('promise'));
console.log('end');

// Output:
// start
// end
// promise
// timeout
```

The setTimeout fires with a 0ms delay, but its callback still goes to the task queue. The Promise resolves immediately, but its .then() callback goes to the microtask queue. The microtask queue always wins.

What does this mean in practice? Understanding this model changes how you write async code. Unintentionally flooding the microtask queue with chained Promises can starve the task queue and delay rendering. Long synchronous operations on the call stack block everything, regardless of how much async code surrounds them. And if you’ve ever wondered why two awaited calls run sequentially when they could run in parallel: each await suspends the function until its Promise settles, so the second call never even starts until the first finishes. Kicking off both Promises up front and awaiting them together with Promise.all solves this cleanly.

If this was useful, I write about TypeScript, system design, and software engineering here. I’m always down to connect!
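As a quick illustration of that Promise.all point, here’s a minimal sketch contrasting sequential and parallel awaits. The fakeFetch helper and its delays are made up for illustration; they stand in for real network calls:

```javascript
// Stand-in for a network call: resolves with `value` after `ms` milliseconds.
const fakeFetch = (value, ms) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

async function sequential() {
  // Each await suspends this function until its Promise settles, so the
  // second call doesn't start until the first finishes: ~200ms total.
  const a = await fakeFetch('a', 100);
  const b = await fakeFetch('b', 100);
  return [a, b];
}

async function parallel() {
  // Both Promises start immediately; Promise.all awaits them together: ~100ms total.
  const [a, b] = await Promise.all([fakeFetch('a', 100), fakeFetch('b', 100)]);
  return [a, b];
}
```

Nothing about await itself forces the sequential version; the calls only serialize because each one is created inside the previous await’s continuation.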
