Dan Neciu’s Post

😍 I love Promise.all(). It's my go-to move to speed things up. Most teams reach for it to “parallelize” work — fetch 10 users, 50 files, 200 URLs — but they forget one key detail: Promise.all() fails fast. If a single promise rejects, the entire batch rejects. You lose all successful results, and the remaining promises still keep running in the background. That’s fine for critical pipelines where all results must succeed. But for real-world APIs — where partial success is normal — it’s fragile.

💡 That’s where Promise.allSettled() changes the equation. It waits for every promise to finish, successful or not, and gives you the whole picture. You can log errors, retry failed calls, and still use the successful data.

Check out an amazing article in the comments from Matt Smith 👇

For high-volume workloads, Matt shows another middle ground: controlled concurrency. By batching promises with utilities like p-limit, you can run a few at a time — fast, but safe.

The right async pattern depends on your tolerance for failure and your need for speed. Most of the time, “fail fast” isn’t the goal — “finish reliably” is.

#JavaScript #Async #WebDev #Frontend #Promises
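A minimal sketch of the contrast described above, using setTimeout stand-ins for real fetches (the task names are illustrative; assumes Node run as an ES module, for top-level await):

```javascript
// Simulated fetches: one fails. Promise.all rejects the whole batch;
// Promise.allSettled reports every outcome.
const task = (id, shouldFail = false) =>
  new Promise((resolve, reject) =>
    setTimeout(
      () => (shouldFail ? reject(new Error(`task ${id} failed`)) : resolve(`user ${id}`)),
      10
    )
  );

const tasks = () => [task(1), task(2, true), task(3)];

// Promise.all: fail fast. One rejection loses all successful results.
let allError;
try {
  await Promise.all(tasks());
} catch (err) {
  allError = err.message;
}

// Promise.allSettled: the whole picture, successes and failures alike.
const settled = await Promise.allSettled(tasks());
const fulfilled = settled.filter(r => r.status === "fulfilled").map(r => r.value);
const rejected = settled.filter(r => r.status === "rejected").map(r => r.reason.message);

console.log(allError);  // "task 2 failed"
console.log(fulfilled); // ["user 1", "user 3"]
console.log(rejected);  // ["task 2 failed"]
```

Note that `allSettled` never rejects: each result is an object with a `status` of `"fulfilled"` or `"rejected"`, so the filtering step above is how you split successes from failures.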


Great post! I've seen Promise.allSettled() rescue countless developers who wasted hours debugging, unable to understand why their code was so slow when running await inside a for loop. The moment they switch to Promise.all() or Promise.allSettled() and see that they get parallel execution — and, with allSettled(), can handle partial failures too — is always an eye-opener. It's one of those patterns that should be taught much earlier in the learning curve.
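The slowdown this comment describes can be seen in a tiny timing sketch (simulated delays, illustrative values; assumes Node run as an ES module, for top-level await):

```javascript
// Sequential awaits in a for loop vs. starting everything up front.
const delay = (ms, value) => new Promise(res => setTimeout(() => res(value), ms));

// Sequential: each await blocks the next iteration, ~3 x 50ms total.
const seqStart = Date.now();
const sequential = [];
for (const id of [1, 2, 3]) {
  sequential.push(await delay(50, id)); // waits a full 50ms per iteration
}
const seqMs = Date.now() - seqStart;

// Parallel: all three timers start immediately, then we await together, ~50ms total.
const parStart = Date.now();
const parallel = await Promise.all([1, 2, 3].map(id => delay(50, id)));
const parMs = Date.now() - parStart;

console.log(sequential); // [1, 2, 3]
console.log(parallel);   // [1, 2, 3]
console.log(seqMs > parMs); // the loop takes roughly 3x as long
```

Both versions produce the same array; only the elapsed time differs, which is why the bug is so easy to miss.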

Love this breakdown — I’ve seen so many bugs caused by people forgetting that Promise.all() bails on the first failure. 😅 allSettled() and controlled concurrency are such underrated tools for real-world reliability.

Great point. I see Promise.all used everywhere. Often without realizing how fragile it can be in production. allSettled (and controlled concurrency) strike a much better balance between speed and resilience. In most systems, reliability is the real bottleneck, not performance.


This is a good post. I would say, though, that the behaviour of Promise.all often *is* what you want: a partial success can be difficult to handle in a sensible, non-misleading way, so it's often better to just fail and be done with it. If your promises are fetch requests, then you might want retry behaviour inside the promise to improve resilience to random failures.
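A sketch of that retry idea, using a hypothetical withRetry helper (an illustrative name, not a library API) wrapping a flaky call so transient failures are absorbed before the batch ever sees them:

```javascript
// Retry a promise-returning function a few times before giving up,
// so one transient failure doesn't poison a whole Promise.all batch.
const withRetry = async (fn, attempts = 3, delayMs = 10) => {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off briefly before the next attempt.
      await new Promise(res => setTimeout(res, delayMs));
    }
  }
  throw lastError; // all attempts failed: now it is a real failure
};

// Simulated flaky call: rejects twice, then succeeds.
let calls = 0;
const flaky = () =>
  ++calls < 3 ? Promise.reject(new Error("transient")) : Promise.resolve("ok");

const result = await withRetry(flaky);
console.log(result, calls); // "ok" 3
```

With retries handled inside each promise, Promise.all's fail-fast behaviour only fires on failures that survived every attempt, which is usually exactly when you do want the batch to abort.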

Love this breakdown, it’s a great reminder that reliability often beats raw speed in real-world async work.


The limit on concurrently running tasks is really crucial when working with a DB, since the number of DB connections is limited. p-limit can solve that with ease. Dan Neciu

I'm amazed at how many still think you can actually fetch 200 URLs in parallel in the browser. The max number of concurrent requests per origin is 6, so Promise.all() over 200 fetches effectively runs in ~33 sequential waves. I've built backends that remove the need for concurrent requests and waterfalls. The days of download speed being the bottleneck are gone; TTFB is the new enemy.

Ionut-Cristian Florescu

Fractional CTO & Sweat-Equity Partner @ LeasingSH.ro • Ex-Allianz • Creator of Mantine DataTable, tRPC-SvelteKit & other OSS • TypeScript • React • Next.js • Node.js • Full-Stack • Front-End

6mo

...or you can even roll your own custom parallel processing utility, implementing retries, jittering, real-time progress and error reporting, etc. :-)
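A hand-rolled limiter in that spirit might look like this (a minimal sketch in the style of p-limit, not its actual implementation; createLimiter and the simulated jobs are illustrative names, and retries/progress reporting are left out for brevity):

```javascript
// At most `max` tasks run at once; the rest wait in a FIFO queue.
const createLimiter = (max) => {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= max || queue.length === 0) return;
    active++;
    const { fn, resolve, reject } = queue.shift();
    fn().then(resolve, reject).finally(() => {
      active--;
      next(); // a slot freed up: start the next queued task
    });
  };
  return (fn) =>
    new Promise((resolve, reject) => {
      queue.push({ fn, resolve, reject });
      next();
    });
};

// Usage: run 6 simulated fetches, never more than 2 at a time.
const limit = createLimiter(2);
let running = 0;
let peak = 0;
const job = (id) => async () => {
  running++;
  peak = Math.max(peak, running); // track how many ran concurrently
  await new Promise(res => setTimeout(res, 20));
  running--;
  return id;
};

const results = await Promise.all([1, 2, 3, 4, 5, 6].map(id => limit(job(id))));
console.log(results, peak); // [1, 2, 3, 4, 5, 6] 2
```

Promise.all preserves input order regardless of completion order, so the results come back as [1..6] even though the jobs finished in overlapping waves of two.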


Quick TL;DR for anyone skimming: Promise.all → when everything must succeed (fail fast). Promise.allSettled → when you want all results, good or bad. Promise.any → when you only need one success (ignore rejections).
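To round out the TL;DR, Promise.any in action (the “mirror” names are illustrative; assumes Node run as an ES module, for top-level await):

```javascript
// Promise.any: first fulfillment wins; rejections are ignored
// unless every promise rejects.
const fail = (ms, msg) =>
  new Promise((_, rej) => setTimeout(() => rej(new Error(msg)), ms));
const succeed = (ms, value) =>
  new Promise(res => setTimeout(() => res(value), ms));

// The fastest success wins, even though a rejection arrived first.
const winner = await Promise.any([
  fail(5, "mirror A down"),
  succeed(20, "mirror B"),
  succeed(40, "mirror C"),
]);
console.log(winner); // "mirror B"

// Only if every promise rejects do you get an AggregateError.
const allFailed = await Promise.any([fail(5, "a"), fail(5, "b")])
  .catch(err => err instanceof AggregateError);
console.log(allFailed); // true
```

That makes Promise.any a good fit for racing redundant sources (mirrors, fallback endpoints) where you only care about the first usable answer.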
