💛 Promise APIs in JavaScript — all, race, any, allSettled (Deep Dive + Real Use Cases) ⚡

Once you learn Promises & async/await, the next superpower is mastering the Promise APIs.

♦️ 1️⃣ Promise.all() — "All or Nothing" 🤝
📌 Definition: waits for all promises to fulfill. If one rejects → the whole thing rejects immediately.

Example:
Promise.all([fetchUser(), fetchOrders(), fetchCart()])
  .then(([user, orders, cart]) => console.log(user, orders, cart))
  .catch(err => console.error(err));

🔥 Internals
▪️ Starts all promises in parallel
▪️ Maintains a results array (in input order)
▪️ Rejects instantly on the first failure

✅ Use Cases
✔️ Dashboard data
✔️ Multiple API calls
✔️ Page-load dependencies
👉 "I need EVERYTHING or nothing"

♦️ 2️⃣ Promise.race() — "Fastest Wins" 🏁
📌 Definition: settles with the first promise to settle (resolve OR reject).

Example:
Promise.race([fetchFromServer1(), fetchFromServer2()])
  .then(console.log);

🔥 Internals
▪️ The first promise to settle decides the result
▪️ The others keep running, but their results are ignored

✅ Use Cases
✔️ Timeout logic
✔️ Fastest server response
✔️ Fallback APIs

Promise.race([fetchData(), timeout(3000)]);

👉 "Whoever finishes first, I'm good"

♦️ 3️⃣ Promise.any() — "First Success Wins" 🟢
📌 Definition: resolves with the first fulfilled promise and ignores rejections. ❗ Rejects only if ALL of them fail.

Example:
Promise.any([fetch(server1), fetch(server2), fetch(server3)])
  .then(console.log)
  .catch(err => console.log("All failed"));

🔥 Internals
▪️ Tracks failures
▪️ Stops at the first success
▪️ Rejects with an AggregateError if all fail

✅ Use Cases
✔️ Multiple mirrors/CDNs
✔️ Backup services
✔️ Redundancy systems
👉 "Give me ANY success, I don't care which"

♦️ 4️⃣ Promise.allSettled() — "Tell Me Everything" 📊
📌 Definition: waits for every promise, never rejects, and reports the status of each one.

Example:
Promise.allSettled([fetchUser(), fetchOrders(), fetchAds()])
  .then(results => console.log(results));

Output:
[
  { status: "fulfilled", value: ... },
  { status: "rejected", reason: ... }
]

🔥 Internals
▪️ Waits for all
▪️ Wraps each result with a status
▪️ Never throws

✅ Use Cases
✔️ Analytics calls
✔️ Optional features
✔️ Logging systems
✔️ Showing partial data
👉 "I want results, success OR failure"

♦️ Behind the Scenes (Interview Gold) 🔍
All these APIs:
✔️ Run promises concurrently
✔️ Use the microtask queue
✔️ Return a new promise
✔️ Track internal states
They're basically smart promise coordinators.

🧠 Mental Model
👉 all → teamwork
👉 race → speed
👉 any → success-first
👉 allSettled → reporting

🥇 Interview One-Liner
Promise APIs coordinate multiple asynchronous operations concurrently — all waits for every success, race returns the first settled, any returns the first success, and allSettled returns every result without ever rejecting.

For a more detailed explanation, visit 👉 https://lnkd.in/gxhmATdr
If this helped, drop a 💛 or share 🔁
Next deep dive 👉 "this" Keyword
#JavaScript #JSInternals #LearnJavaScript #WebDevelopment #ProgrammingConcepts #WebDevJourney #BuildInPublic
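The four coordinators can be compared side by side in one runnable sketch. The ok/fail helpers below are toy stand-ins for real API calls (Promise.any requires Node 15+ or a modern browser):

```javascript
// Toy promises: ok fulfills after ms, fail rejects after ms.
const ok = (value, ms) =>
  new Promise(resolve => setTimeout(() => resolve(value), ms));
const fail = (reason, ms) =>
  new Promise((_, reject) => setTimeout(() => reject(new Error(reason)), ms));

async function demo() {
  // all → teamwork: results arrive together, in input order
  const all = await Promise.all([ok("a", 10), ok("b", 20)]);

  // race → speed: first to SETTLE wins (fulfilled or rejected)
  const race = await Promise.race([ok("fast", 10), ok("slow", 50)]);

  // any → success-first: rejections are skipped until something fulfills
  const any = await Promise.any([fail("down", 5), ok("backup", 20)]);

  // allSettled → reporting: never rejects, wraps each outcome with a status
  const settled = await Promise.allSettled([ok("x", 5), fail("y", 5)]);

  return { all, race, any, statuses: settled.map(r => r.status) };
}

demo().then(console.log);
// { all: ["a", "b"], race: "fast", any: "backup", statuses: ["fulfilled", "rejected"] }
```

Swapping fail("down", 5) into the Promise.all call is a quick way to see the "all or nothing" rejection behavior for yourself.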
Mastering Promise APIs in JavaScript: all, race, any, allSettled
More Relevant Posts
🚀 How APIs and Promises Work Together in JavaScript

When building full-stack applications with React and Express, one concept becomes very clear:
👉 APIs and Promises are deeply connected.

An API call takes time to:
• Send the request
• Reach the server
• Return a response
That delay is what makes it an asynchronous operation. And this is where Promises come in.

A Promise has three states:
• pending
• fulfilled (resolved)
• rejected

When we call an API using fetch() or axios, it returns a Promise.

const getUser = () =>
  fetch("/api/users")
    .then(response => response.json())
    .then(data => console.log(data))
    .catch(error => console.error(error));

If you would like to wrap it in a manual Promise:

const getUser = () => {
  return new Promise((resolve, reject) =>
    fetch("/api/users")
      .then(response => response.json())
      .then(data => resolve(data))
      .catch(error => reject(error))
  );
};

getUser()
  .then(data => console.log(data))
  .catch(err => console.log(err));

🚀 Another Real-World Example of Promises in API Design
When working with multiple APIs, we often don't just call one endpoint. We coordinate multiple asynchronous operations. For example:
👉 Fetch the user
👉 Fetch the user's orders
👉 Fetch payment details
All asynchronously.

🔹 Running API Calls in Parallel with Promise.all()
Instead of waiting for each request one by one:

const getDashboardData = async (userId) => {
  try {
    const [user, orders, payments] = await Promise.all([
      fetch(`/api/users/${userId}`).then(res => res.json()),
      fetch(`/api/orders/${userId}`).then(res => res.json()),
      fetch(`/api/payments/${userId}`).then(res => res.json())
    ]);
    return { user, orders, payments };
  } catch (error) {
    console.error("Failed to load dashboard data:", error);
  }
};

These are parallel promises: since none of them depends on another, they all run concurrently.
When building applications in Node.js, understanding how and when to run Promises in parallel vs sequentially makes a measurable performance difference. APIs deliver data. Promises control execution strategy.
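That parallel-vs-sequential difference is easy to measure with a small sketch. Here a delay helper stands in for a 50 ms API call:

```javascript
// Stand-in for a slow API call: resolves with `value` after `ms` milliseconds.
const delay = (ms, value) =>
  new Promise(resolve => setTimeout(() => resolve(value), ms));

async function sequential() {
  const user = await delay(50, "user");     // waits ~50 ms
  const orders = await delay(50, "orders"); // then waits ~50 ms more
  return [user, orders];                    // total ~100 ms
}

async function parallel() {
  // Both timers start immediately, so the total is ~50 ms, not ~100 ms.
  return Promise.all([delay(50, "user"), delay(50, "orders")]);
}
```

Same results, roughly half the wall-clock time — which is exactly why dashboard-style endpoints benefit from Promise.all when the calls are independent.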
🚀 Node.js Isn't Single-Threaded (And Most Developers Still Get This Wrong)

When I started learning Node.js, I kept hearing: "Node.js is single-threaded." That statement is true… but also misleading. After working more deeply with backend systems, I realized something important:
👉 Node.js is single-threaded for JavaScript execution — but not for handling work.

Let's break this down.

1. The JavaScript Thread (Yes, Single-Threaded)
Node.js runs JavaScript on a single main thread using the V8 engine. That means:
• One call stack
• One task executed at a time
• No parallel JS execution (unless you use Worker Threads)
But then… how does Node handle thousands of requests simultaneously?

2. The Secret: Event Loop + libuv
Node.js uses:
• The Event Loop
• libuv (a C library)
• OS-level async capabilities
• A thread pool (4 threads by default)
This is where the magic happens.

Example:
const fs = require("fs");
console.log("Start");
fs.readFile("file.txt", "utf8", (err, data) => {
  console.log("File read complete");
});
console.log("End");

Output:
Start
End
File read complete

Why? Because fs.readFile() is delegated to libuv, not executed on the main thread.

3. How the Event Loop Actually Works
The Event Loop has phases:
• Timers (setTimeout, setInterval)
• Pending callbacks
• Idle/prepare
• Poll
• Check (setImmediate)
• Close callbacks

And then there are two extra queues:
• The process.nextTick() queue
• The microtask queue (Promises)

Important:
👉 process.nextTick() callbacks run before the microtask queue
👉 Both queues are drained between callbacks, before the loop moves on to the next phase
This is why understanding event loop order is critical for backend interviews.

4. The Real Danger: Blocking the Event Loop
If you do this:
while (true) {}
you freeze everything. Why? Because the main thread is blocked. No callbacks. No I/O. No requests processed.

This is why Node is excellent for:
✅ I/O-heavy apps
❌ CPU-heavy tasks

For CPU-intensive work, use:
• Worker Threads
• Child Processes
• Or move the heavy work to another service

5. Why Node.js Scales So Well
Node doesn't create a thread per request (like traditional servers). Instead:
• One event loop
• Non-blocking I/O
• Thousands of concurrent connections handled

This makes it perfect for:
• APIs
• Real-time apps
• Streaming services
• Chat systems

Final Conclusion
Node.js is not "single-threaded" in the way people think. It is:
• Single-threaded for JavaScript execution
• Multi-threaded under the hood for I/O handling
And that architectural design is what makes it powerful.

If you're preparing for backend interviews, truly understanding the event loop is a game-changer. If this helped you, feel free to connect. Let's grow together!
#NodeJS #Backend #JavaScript #BackendDev #LearningNodeJS
⚠️ If you've ever spent 𝟯𝟬 𝗺𝗶𝗻𝘂𝘁𝗲𝘀 staring at a blank screen — 𝗻𝗼 𝗲𝗿𝗿𝗼𝗿𝘀, 𝗻𝗼 𝘀𝘁𝗮𝗰𝗸 𝘁𝗿𝗮𝗰𝗲, absolutely nothing in the console — only to discover a silent API 500 or an undefined buried three levels deep... this one's for you.

𝗗𝗲𝘃𝗟𝗲𝗻𝘀 is not Sentry. It's not Datadog. It's a dev-time SDK (~20KB, zero dependencies) that catches every failure JavaScript deliberately hides from you: APIs returning 500 with no rejection, user.profile.settings being undefined without crashing — your app just... goes blank.

𝗪𝗵𝗮𝘁 𝘀𝘁𝗼𝗼𝗱 𝗼𝘂𝘁 𝘁𝗼 𝗺𝗲:
• Zero config — 5 lines of code, done.
• Shadow DOM UI panel — never conflicts with your app's CSS.
• Production-safe — auto tree-shakes to zero in production builds.
• ES6 Proxy tracking — pinpoints exactly which property is null and where in the chain.

The simplest way to think about it: ESLint, but for runtime errors. If it saves you one 30-minute debugging session, it's already paid for itself. And trust me — it'll save you a lot more than one.

📖 Full read: https://lnkd.in/gtsmsczU
#JavaScript #Debugging #OpenSource #DevTools #React #WebDevelopment
Just dropped my 2nd blog in my JS Unlocked series! 🚀
This time — Variables & Data Types in JavaScript 👇

I'm now going deeper into JavaScript fundamentals as part of Web Dev Cohort 2026. In this one I cover:
✅ What variables are (with a real-life box analogy)
✅ var vs let vs const — when to use what
✅ All 5 primitive data types with real examples
✅ Scope explained simply — no jargon
✅ A hands-on challenge to test yourself

If you're just starting out with JavaScript, this one's for you 🙏
🔗 https://lnkd.in/dEkjzfMq

Thanks to #HiteshChoudhary Sir, #PiyushGarg and #AkashKadlag for building this cohort 💛
#JavaScript #WebDevelopment #Hashnode #WebDevCohort2026 #LearningInPublic #Frontend #JS
🚀 React 19: RIP forwardRef!

If you've ever struggled with the awkward syntax of forwardRef, I have great news. React 19 has officially simplified how we handle refs. Now, ref is just a standard prop. No more wrapping your components just to access a DOM node.

🛑 The "Old" Way (React 18)
You had to wrap your entire component, which cluttered the component definition and made the TypeScript types a nightmare to write.

// Complex and boilerplate-heavy
const MyInput = forwardRef((props, ref) => {
  return <input {...props} ref={ref} />;
});

✨ The React 19 Way (Clean & Simple)
Just destructure ref from your props like any other value. It's cleaner, more readable, and much more intuitive.

// Just a normal function component!
function MyInput({ label, ref }) {
  return (
    <label>
      {label}
      <input ref={ref} />
    </label>
  );
}

📊 Quick Comparison: Why this matters

Feature       | React 18 (old)                       | React 19 (new)
Passing refs  | forwardRef((props, ref) => ...)      | standard prop: ({ ref }) => ...
DX            | high friction / boilerplate          | zero friction / natural
TypeScript    | forwardRef<HTMLButtonElement, Props> | interface Props { ref: Ref<HTMLButtonElement> }

🔥 Bonus Feature: Ref Cleanup Functions
React 19 also solved a long-standing issue with third-party libraries (like D3, Google Maps, or Canvas). You can now return a cleanup function directly from a ref callback!

<div ref={(node) => {
  if (node) {
    console.log("DOM node added");
    // Initialize your 3rd-party library here
  }
  return () => {
    console.log("Cleanup time!");
    // Destroy the library instance to prevent memory leaks
  };
}} />

💡 Why we love this:
• Less boilerplate: your component tree stays flat and readable.
• Better TypeScript support: no more guessing where to put the generic types.
• Memory safety: native cleanup for DOM-attached libraries without needing an extra useEffect.

React 19 is clearly focused on making the developer experience as smooth as possible. Are you still using forwardRef, or are you ready to refactor?
👇 #ReactJS #React19 #WebDevelopment #Frontend #JavaScript #CleanCode #ProgrammingTips #SoftwareEngineering
Angular Signals vs. RxJS: Coordinating Async Events 🚦

You need to trigger an HTTP request only when three independent events have all occurred. In modern Angular (16+), do you reach for Signals or stay with RxJS? Here is a technical comparison of three architectural approaches to help you decide.

1. Signals with `computed()`
`computed()` is excellent for derived state, but it is passive.
* Pattern: `const isReady = computed(() => evtA() && evtB() && evtC());`
* Trade-off: This derives a synchronous boolean but does *not* trigger the side effect (HTTP). You still need a watcher (like an effect) to react to `isReady()`.
* Verdict: ❌ Insufficient on its own.

2. Signals with `effect()`
You can imperatively trigger the request inside an effect.
* Pattern:
```typescript
effect((onCleanup) => {
  if (evtA() && evtB() && evtC()) {
    const sub = http.get(...).subscribe();
    onCleanup(() => sub.unsubscribe());
  }
});
```
* Trade-off: While possible, this requires manual subscription management and risks race conditions. It moves away from declarative coding and is often flagged as an anti-pattern for state propagation.
* Verdict: ⚠️ Risky. Harder to maintain and debug.

3. RxJS `combineLatest` (The Winner)
RxJS is designed specifically for asynchronous stream composition.
* Pattern:
```typescript
combineLatest([evtA$, evtB$, evtC$]).pipe(
  filter(([a, b, c]) => !!a && !!b && !!c),
  switchMap(() => http.get(...))
);
```
* Trade-off: Requires understanding the operators, but offers native cancellation (`switchMap`) and precise coordination.
* Verdict: ✅ Best practice.

🚀 Recommendation: The Hybrid Approach
Don't force Signals to do RxJS's job.
1. Use RxJS (`combineLatest`) to orchestrate the events and handle the async race conditions.
2. Use Signals (`toSignal`) to expose the final synchronous result to the template.

Rule of thumb: Signals are for synchronous state; RxJS is for asynchronous events.
Do you have an alternative opinion? I would appreciate comments on different approaches. #Angular #RxJS #WebDevelopment #SoftwareArchitecture #JavaScript #TypeScript
Ever struggled with a CORS error that just wouldn't go away?

You deploy your frontend. You connect your backend API. You open the browser. And boom.
"No 'Access-Control-Allow-Origin' header present…"

-> You check headers.
-> You Google.
-> You scroll through StackOverflow.
-> You tweak configs.
-> You restart the server.
-> 30 minutes gone.

As developers working with APIs, microservices, REST endpoints, React, Angular, Node.js, Django, FastAPI, Nginx, or Spring Boot — we've all faced the frustration of debugging Cross-Origin Resource Sharing (CORS) issues.
-> The browser error messages are vague.
-> The root cause isn't obvious.
-> Preflight (OPTIONS) failures make it worse.

So I built something to solve this properly.

🚀 Introducing CORS Doctor — a deterministic, fully offline DevTools extension that instantly analyzes CORS errors and tells you exactly what went wrong.
-> No AI.
-> No cloud calls.
-> No data tracking.
Just rule-based, accurate diagnosis.

🔎 It detects:
• Missing Access-Control-Allow-Origin
• Failed preflight (OPTIONS not handled)
• Credential + wildcard conflicts
• Missing Allow-Headers or Allow-Methods
• Mixed content (HTTP vs HTTPS)
• Header mismatches
• CORS misconfigurations in backend servers

🛠 It explains:
• Why the browser blocked the request
• Which header is missing
• What configuration needs to change
• Copy-ready backend fixes

Everything works locally inside DevTools. Enterprise-friendly. Privacy-first. Secure.

CORS debugging shouldn't take 30 minutes. It should take 30 seconds. If you're building APIs, working on microservices architecture, or doing frontend-backend integration — this might save you hours.

Get it from here: https://lnkd.in/guhJprY4
#CORS #WebDevelopment #FrontendDevelopment #BackendDevelopment #DevTools #ChromeExtension #APIDevelopment #ReactJS #NodeJS #Django #FastAPI #Nginx #SpringBoot #Microservices #SoftwareEngineering #FullStackDeveloper #DeveloperTools #Productivity
⚡ 𝗪𝗵𝘆 𝗥𝗲𝗮𝗹 𝗣𝗿𝗼𝗷𝗲𝗰𝘁𝘀 𝗨𝘀𝗲 𝗥𝗲𝗮𝗰𝘁 𝗤𝘂𝗲𝗿𝘆 𝗜𝗻𝘀𝘁𝗲𝗮𝗱 𝗼𝗳 𝘂𝘀𝗲𝗘𝗳𝗳𝗲𝗰𝘁

In my previous posts, I handled API calls using 𝘂𝘀𝗲𝗘𝗳𝗳𝗲𝗰𝘁 + 𝘂𝘀𝗲𝗦𝘁𝗮𝘁𝗲. I also used to think that was enough 😅 But when projects start growing, things get complicated very fast. Because managing API data manually means handling:
• 𝗟𝗼𝗮𝗱𝗶𝗻𝗴 𝘀𝘁𝗮𝘁𝗲
• 𝗘𝗿𝗿𝗼𝗿 𝘀𝘁𝗮𝘁𝗲
• 𝗥𝗲𝘁𝗿𝘆 𝗹𝗼𝗴𝗶𝗰
• 𝗖𝗮𝗰𝗵𝗶𝗻𝗴
• 𝗥𝗲𝗳𝗲𝘁𝗰𝗵𝗶𝗻𝗴 𝘄𝗵𝗲𝗻 𝘂𝘀𝗲𝗿 𝗿𝗲𝘁𝘂𝗿𝗻𝘀
• 𝗞𝗲𝗲𝗽𝗶𝗻𝗴 𝗱𝗮𝘁𝗮 𝗳𝗿𝗲𝘀𝗵

That's a lot of responsibility for just useEffect.

🤔 𝗦𝗼 𝘄𝗵𝗮𝘁 𝗵𝗮𝗽𝗽𝗲𝗻𝘀 𝗶𝗻 𝗿𝗲𝗮𝗹-𝘄𝗼𝗿𝗹𝗱 𝗽𝗿𝗼𝗷𝗲𝗰𝘁𝘀?
Most production-level React apps don't manage server data manually. They use tools like 𝗥𝗲𝗮𝗰𝘁 𝗤𝘂𝗲𝗿𝘆 (𝗧𝗮𝗻𝗦𝘁𝗮𝗰𝗸 𝗤𝘂𝗲𝗿𝘆). Why?

𝗕𝗲𝗰𝗮𝘂𝘀𝗲 𝗥𝗲𝗮𝗰𝘁 𝗤𝘂𝗲𝗿𝘆:
✔ Automatically caches data
✔ Retries failed requests
✔ Refetches in the background
✔ Keeps server state in sync
✔ Reduces boilerplate code

Instead of writing the same extra logic again and again, you let a library handle server state for you.

🧠 𝗜𝗺𝗽𝗼𝗿𝘁𝗮𝗻𝘁 𝗨𝗻𝗱𝗲𝗿𝘀𝘁𝗮𝗻𝗱𝗶𝗻𝗴
useEffect is not wrong. But it's built for side effects — not full server-state management. That's the difference between:
👉 Making something work
𝘃𝘀
👉 Building something scalable

Learning this shifted how I think about frontend development.
𝗙𝗿𝗼𝗻𝘁𝗲𝗻𝗱 𝗶𝘀 𝗻𝗼𝘁 𝗷𝘂𝘀𝘁 𝗳𝗲𝘁𝗰𝗵𝗶𝗻𝗴 𝗱𝗮𝘁𝗮. 𝗜𝘁'𝘀 𝗮𝗯𝗼𝘂𝘁 𝗺𝗮𝗻𝗮𝗴𝗶𝗻𝗴 𝘀𝗲𝗿𝘃𝗲𝗿 𝗱𝗮𝘁𝗮 𝘀𝗺𝗮𝗿𝘁𝗹𝘆.

Here's a simple example of fetching a Users API.
On the left → 𝗠𝗮𝗻𝘂𝗮𝗹 𝘂𝘀𝗲𝗘𝗳𝗳𝗲𝗰𝘁 𝗮𝗽𝗽𝗿𝗼𝗮𝗰𝗵
On the right → 𝗥𝗲𝗮𝗰𝘁 𝗤𝘂𝗲𝗿𝘆 𝗮𝗽𝗽𝗿𝗼𝗮𝗰𝗵
Less boilerplate. Built-in caching. Cleaner logic.

Which one would you prefer in a 𝗿𝗲𝗮𝗹 𝗽𝗿𝗼𝗷𝗲𝗰𝘁? 👇
#ReactJS #FrontendDevelopment #JavaScript #WebDevelopment #LearningInPublic #ReactQuery
🚀 Just wrapped up my third project using TanStack React Query — and I'm never going back! Every project I use it in, I fall more in love with it. If you're a frontend developer still managing server state manually with useEffect and useState, you're making your life harder than it needs to be.

Here's why React Query has become a non-negotiable part of my stack:
⚡ Automatic Caching — Data is cached out of the box. No more redundant API calls for data you already fetched two seconds ago.
🔄 Background Refetching — Your UI stays fresh without the user doing anything. React Query refetches stale data silently in the background.
📡 Loading & Error States Made Easy — Gone are the days of managing a dozen boolean flags. isLoading, isError, and data — clean and simple.
🧹 Less Boilerplate — What used to take 50+ lines of custom hook logic now takes less than 10. Your components stay lean and readable.
🔁 Smart Retry Logic — Failed requests are retried automatically. You get resilience built in without writing a single extra line.
📦 Pagination & Infinite Scroll — Features that used to be a headache are now first-class citizens with useInfiniteQuery.
🛠️ DevTools — The built-in devtools give you full visibility into your queries, cache, and refetch behavior. Debugging has never been this satisfying.

Three projects in and I can confidently say — TanStack React Query doesn't just improve your code, it improves the way you think about data fetching. If you haven't tried it yet, your next project is the perfect excuse. 🙌
#ReactQuery #TanStack #React #FrontendDevelopment #WebDevelopment #JavaScript #ReactJS #Frontend
Node.js is single-threaded... or is it? Meet libuv, the hidden powerhouse. 🏗️

We all know the mantra: "Node.js is single-threaded and non-blocking." But have you ever stopped to ask how a single thread can handle thousands of concurrent database queries and file reads without breaking a sweat? The answer isn't just "magic" — it's libuv.

🧐 What is libuv?
Libuv is a multi-platform C library originally written for Node.js. While the V8 engine handles your JavaScript, libuv handles everything else: the Event Loop, the Thread Pool, and all things asynchronous.

🛠️ The 2 Secret Weapons of libuv:

1. The Event Loop (The Conductor) 🎼
This is the heart of Node.js. It manages the execution of callbacks. It doesn't do the heavy lifting itself; instead, it coordinates tasks across different phases (Timers, I/O Polling, Check, etc.).

2. The Thread Pool (The Workers) 👷‍♂️
Wait, I thought Node was single-threaded? JavaScript execution is, but libuv maintains a Thread Pool (4 threads by default). When you do something "heavy" like:
• Reading a file (fs.readFile)
• Hashing a password (crypto.pbkdf2)
• DNS lookups
...libuv offloads these tasks to its worker threads so your main thread stays free to handle new requests.

🔄 How it works in 3 steps:
1. The Request: You call an async function in JS.
2. The Hand-off: Node.js passes the task to libuv. Libuv either asks the OS for help (for networking) or uses its Thread Pool (for files).
3. The Callback: Once the task is done, libuv pushes the callback into the Event Loop to be executed back in your JavaScript code.

💡 Why should you care?
Understanding libuv is the difference between a developer who just writes code and an engineer who knows how to optimize it.
• Know when your thread pool is a bottleneck.
• Understand why setImmediate and setTimeout behave differently.
• Learn to scale apps by tweaking UV_THREADPOOL_SIZE.

Are you diving into Node.js internals this year? Drop a "Building" in the comments if you want more deep dives into the Node.js architecture!
👇 #NodeJS #Backend #SoftwareEngineering #libuv #JavaScript #WebPerf #ProgrammingTips