Most JavaScript developers use asynchronous code every day. Far fewer can tell you why it works. Because I’ve been interested in this recently, here’s a short post about what’s actually happening, entirely written by a human (me)!

JavaScript is single-threaded: one call stack, one thing happening at a time. So how does it handle thousands of concurrent network requests, timers, and user events without grinding to a halt? The answer isn’t multithreading; it’s the event loop.

When you call a function, it gets pushed onto the call stack. When it returns, it gets popped off. Synchronous code is straightforward: each frame executes, resolves, and clears. The problem is that some operations (fetching data, reading files, waiting on a timer) take time. If JavaScript blocked the call stack waiting for them, your entire UI would freeze.

So instead, it delegates. When you call setTimeout or fetch, the browser hands that work off to a Web API running outside the JavaScript engine entirely. Your code keeps executing. When the Web API finishes (the timer fires, the response arrives), it doesn’t interrupt whatever is currently running. Instead, it pushes a callback onto the task queue and waits.

The event loop has one job: check whether the call stack is empty. If it is, it picks the next callback off the task queue and pushes it onto the stack. That’s it. That’s the whole mechanism.

Promises don’t use the task queue. They use the microtask queue, which has higher priority. After every task completes, the event loop drains the entire microtask queue before picking up the next task. Every resolved Promise, every awaited expression, every .then() callback goes to the microtask queue.
This is why the following output might surprise you:

```javascript
console.log('start');

setTimeout(() => console.log('timeout'), 0);

Promise.resolve().then(() => console.log('promise'));

console.log('end');

// Output:
// start
// end
// promise
// timeout
```

The setTimeout fires with a 0ms delay, but its callback still goes through the task queue. The Promise resolves synchronously, but its callback goes to the microtask queue. The microtask queue always wins.

What does this mean in practice? Understanding this model changes how you write async code. Unintentionally flooding the microtask queue with chained Promises can starve the task queue and delay rendering. Long synchronous operations on the call stack block everything, regardless of how much async code surrounds them. And if you’ve ever wondered why two awaited calls run sequentially when they could run in parallel: each await suspends the function until its promise settles, so the second call doesn’t even start until the first finishes. Promise.all solves that cleanly.

If this was useful, I write about TypeScript, system design, and software engineering here. I’m always down to connect!
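The Promise.all point can be sketched concretely; a minimal example where 50 ms timers stand in for real requests:

```typescript
const delay = (ms: number, value: string): Promise<string> =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

// Sequential: the second timer does not even start until the first
// resolves, so the total time is roughly the sum (~100ms here).
async function sequential(): Promise<string[]> {
  const a = await delay(50, "a");
  const b = await delay(50, "b");
  return [a, b];
}

// Parallel: both timers start immediately; the total time is roughly
// the slower of the two (~50ms here).
function parallel(): Promise<string[]> {
  return Promise.all([delay(50, "a"), delay(50, "b")]);
}
```

Both return the same values; the difference is purely when each underlying operation is started.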
How JavaScript's Event Loop Works and Async Code Best Practices
More Relevant Posts
🚀 Day 25 – Async JavaScript (Real-Time + Coding)

Today I focused on both theory and coding for async JavaScript concepts.

🔹 async/await
👉 Used async/await for cleaner and more readable asynchronous code instead of chaining .then().

🔹 Promise.all (Parallel API Calls)
Used to handle multiple API calls in parallel.

```javascript
const [user, orders] = await Promise.all([
  fetch('/api/user').then(res => res.json()),
  fetch('/api/orders').then(res => res.json())
]);
```

👉 Real-time use: fetching multiple APIs together (user data + orders) to reduce loading time, e.g. fetching user details, orders, and notifications together instead of waiting one by one.

🔹 Parallel vs Sequential Calls

✅ Sequential (slower): executes one after another; used when tasks depend on previous results, but increases total execution time.

```javascript
const user = await fetch('/api/user').then(res => res.json());
const orders = await fetch('/api/orders').then(res => res.json());
```

👉 Real-time: each API waits for the previous one → increases load time.

✅ Parallel (faster): executes multiple tasks simultaneously; used when tasks are independent, reducing overall loading time and improving performance.

```javascript
const [user, orders] = await Promise.all([
  fetch('/api/user').then(res => res.json()),
  fetch('/api/orders').then(res => res.json())
]);
```

👉 Real-time: both APIs run together → faster response → better user experience.

🔹 Retry API Call
Implemented retry logic for failed requests.

```javascript
async function fetchWithRetry(url, retries = 3) {
  try {
    const res = await fetch(url);
    if (!res.ok) throw new Error("Failed");
    return await res.json();
  } catch (err) {
    if (retries > 0) {
      return fetchWithRetry(url, retries - 1);
    } else {
      throw err;
    }
  }
}
```

👉 Real-time use: handles temporary failures like network issues or short-lived API outages without breaking the app.
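One refinement worth noting: the retry above fires again immediately, which can hammer an already struggling server. A generic retry-with-backoff sketch (the helper name `withBackoff` and the delay values are my own, not from the post):

```typescript
const wait = (ms: number): Promise<void> =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

// Generic retry with exponential backoff; `task` would be your fetch call.
async function withBackoff<T>(
  task: () => Promise<T>,
  retries = 3,
  delayMs = 200,
): Promise<T> {
  try {
    return await task();
  } catch (err) {
    if (retries <= 0) throw err;
    await wait(delayMs);                                 // pause before the next attempt
    return withBackoff(task, retries - 1, delayMs * 2);  // double the delay each time
  }
}
```

Usage would look like `withBackoff(() => fetch('/api/user').then(res => res.json()))`.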
🔹 Event Loop (Execution Order)
Understood how JavaScript handles async tasks (microtask vs macrotask).

```javascript
console.log("Start");

setTimeout(() => console.log("Timeout"), 0);

Promise.resolve().then(() => console.log("Promise"));

console.log("End");
```

👉 Output: Start → End → Promise → Timeout
👉 Real-time use: helps debug async execution issues like delayed UI updates or unexpected execution order.

#JavaScript #AsyncJS #FrontendDevelopment #Angular #CodingJourney
Hello Connections!

Days 25–30 of My 40-Day JavaScript & React Relearning Journey

This week was more about going back to core JavaScript concepts and connecting them with how React actually works under the hood. Instead of just writing code, I focused on understanding how things execute, handle events, and impact performance.

What I revisited:

1. Event Loop (JavaScript Core)
Understanding how JavaScript handles asynchronous operations: the call stack, Web APIs, the callback queue, and the event loop. This helped me clearly see why async code behaves the way it does.

2. Event Delegation
Instead of attaching multiple event listeners, we can use a single listener on a parent:

```javascript
document.getElementById("parent").addEventListener("click", (e) => {
  if (e.target.tagName === "BUTTON") {
    console.log("Button clicked");
  }
});
```

This improves performance and is widely used in real applications.

3. Event Handling in React
React handles events differently using synthetic events.

```javascript
<button onClick={handleClick}>Click</button>
```

Also revisited: onChange, onSubmit, and preventing default behavior.

4. Missing but Important Topics I Covered: Debouncing & Throttling (again, but deeper use cases)

```javascript
const debounce = (fn, delay) => {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  };
};
```

Used in: search inputs, API optimization.

5. Controlled Components (React Forms)
Managing form inputs using state:

```javascript
const [value, setValue] = useState("");

<input value={value} onChange={(e) => setValue(e.target.value)} />
```

6. Lifting State Up
Sharing state between components by moving it to a common parent. This made component communication much clearer.

7. Basic Error Handling
Understanding how to handle errors in async operations:

```javascript
try {
  const res = await fetch(url);
} catch (err) {
  console.error(err);
}
```

What changed for me: earlier, I was focused on React features. This week reminded me that strong JavaScript fundamentals are what actually power React.
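The post shows debounce; throttle is its usual companion, so here is a matching minimal sketch (simplified: calls arriving inside the interval are dropped rather than queued for a trailing run):

```typescript
// Throttle: run `fn` at most once per `limit` ms, ignoring calls in between.
const throttle = <A extends unknown[]>(fn: (...args: A) => void, limit: number) => {
  let last = 0;
  return (...args: A): void => {
    const now = Date.now();
    if (now - last >= limit) {
      last = now;     // remember when we last ran
      fn(...args);
    }
  };
};
```

Debounce waits for the activity to stop (good for search inputs); throttle guarantees a steady maximum rate (good for scroll and resize handlers).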
Now I’m thinking more about: how events flow, how async code executes, how to optimize interactions, and how to structure better components.

Realization: the more I revisit, the more I realize you don’t need more frameworks. You need deeper understanding of the basics.

Still learning, still refining, one step at a time.

#JavaScript #ReactJS #FrontendDevelopment #LearningInPublic #WebDevelopment #DeveloperJourney
TypeScript: The "Second Pair of Eyes" that catches mistakes before they become bugs. Writing test automation in pure JavaScript can sometimes feel like a high-speed guessing game. It’s flexible and fast, until you hit that one undefined is not a function error deep in your execution. Moving your framework to TypeScript is like finally getting a roadmap for a territory where you previously had to rely on memory and luck. TypeScript acts as a vigilant sidekick, pointing out potential issues while you’re still typing, so you don't have to wait for a failing CI/CD pipeline to tell you something is wrong. Why adding types to your tests is a massive productivity boost: Autocomplete that actually works: Say goodbye to the "documentation ping-pong" where you constantly switch files just to check if an object property was named userID, userId, or u_id. With TS, your IDE knows exactly what’s inside your Page Objects and API responses, offering you perfect suggestions and saving you from those annoying failures driven by simple naming mismatches. Contracts that keep everyone honest: When testing APIs, you can define an Interface that acts as a blueprint for your data. If the backend team changes a field from a string to a number, your code will highlight the discrepancy immediately. It’s like having an automatic gatekeeper for your business logic. Refactoring without the "Scavenger Hunt": Need to rename a core method in your framework? In JS, it’s often a risky game of "Find and Replace" followed by hoping you didn't break a test in another folder. In TS, you rename it once, and the compiler instantly shows you exactly where updates are needed. It’s a clean, surgical way to evolve your code. Self-Documenting Code for the Win: Types serve as documentation that never goes out of date. When a new engineer joins the team, they don’t have to guess what a function expects—the code explains itself. 
This makes the onboarding process much smoother and reduces the "what does this variable actually do?" questions. Sustainable automation thrives on a balance between catching application bugs and keeping the test code itself reliable and maintainable. Adopting TypeScript allows your team to focus on the actual quality of the product, instead of spending time debugging the "identity crisis" of your JavaScript variables. Do you enjoy the total freedom of JavaScript, or do you prefer the organized structure of TypeScript? Let’s be honest: what’s the most time you’ve ever spent chasing a bug that turned out to be a classic type mismatch, like trying to map over an undefined that was supposed to be an array? #TypeScript #JavaScript #TestAutomation #SDET #CleanCode #SoftwareEngineering #TestGeeks
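A small sketch of the "contract" idea from the post; the `User` shape, its field names, and the values below are invented for illustration:

```typescript
// The interface is the agreed contract for, say, a /users API response.
interface User {
  id: number;
  name: string;
  isActive: boolean;
}

// This literal stands in for a parsed API response. If the backend renamed
// `name` or changed `id` to a string, this assignment would fail to compile,
// long before any test hits CI.
const response: User = { id: 42, name: "Ada", isActive: true };

// Every consumer of User gets the same guarantees and autocomplete.
function formatUser(user: User): string {
  return `${user.name} (#${user.id})`;
}
```

The compile-time failure is the whole point: the mismatch surfaces while typing, not as an `undefined` deep inside a test run.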
🚀 TypeScript Best Practices: The Comma (,) vs. The Semicolon (;)

Whether you're a seasoned engineer or just starting your TypeScript journey, small syntax choices can make a huge difference in code readability and maintainability. One of the most common questions for developers transitioning from JavaScript is: "When do I use a comma versus a semicolon?" Here is a quick breakdown to keep your enterprise-grade codebase clean and consistent.

🏗️ The Semicolon (;): Defining the Blueprint

When you are defining interfaces or type aliases, you are creating a "set of instructions", a contract for the compiler, not actual data.

Best practice: use semicolons to terminate members in a structural definition. Why? Interfaces are conceptually similar to class definitions. The semicolon tells the TypeScript compiler that the definition of that specific property or method is complete, acting as a clear boundary.

```typescript
// ✅ Good: clear separation of structural definitions
interface User {
  id: number;
  name: string;
  email: string;
}
```

💎 The Comma (,): Handling the Data

When you move from defining a type to creating an object literal, you are working with live data in the JavaScript runtime.

Best practice: use commas to separate key-value pairs in an object. Why? In JavaScript, an object is essentially a list of data. Commas are the standard delimiter for items in a list, just like in an array. Using a semicolon inside an object literal is a syntax error that will break your build!

```typescript
// ✅ Good: standard JavaScript object notation
const activeUser: User = {
  id: 1,
  name: "John Doe",
  email: "dev@example.com", // Trailing commas are great for cleaner Git diffs!
};
```

💡 Senior Dev Tips for Your Workflow

- Visual distinction: while TS technically allows commas in interfaces, sticking to semicolons helps you distinguish "types" from "objects" at a glance during rapid code reviews.
- Watch the typos: ensure your implementation strictly follows your interface; watch out for common spelling slips like balance vs balence, which can lead to runtime headaches.
- Accessibility first: clean code is accessible code; maintaining strict typing and clear syntax supports better documentation for everyone on the team.

What's your preference? Do you stick to semicolons for types to keep things "classy," or do you prefer the comma-everywhere approach? Let's discuss in the comments! 👇

#TypeScript #WebDevelopment #CodingBestPractices #FrontendEngineering #CleanCode #JavaScript #SeniorDeveloper
Understanding JavaScript Runtime: From Call Stack to Event Loop (Deep Dive)

Most developers use JavaScript daily, but very few truly understand what happens under the hood when code executes. If you want to think like an engineer, not just a coder, you need clarity on the JavaScript runtime model. Let’s break it down.

🔹 JavaScript Engine vs Host Environment

JavaScript itself is just a language specification. It does not include things like the DOM, timers, or network APIs.

The JavaScript engine (e.g., V8) handles: execution of JS code, memory allocation (heap), execution contexts (call stack), and garbage collection.

The host environment (browser / Node.js) provides: Web APIs (setTimeout, fetch, DOM, events), the event loop, and the callback queue.

👉 Key insight: JavaScript alone is synchronous. Asynchronous behavior comes from the host environment.

🔹 Memory Model (Heap vs Call Stack)

1. Heap (memory allocation): stores objects, functions, and arrays; dynamically allocated; managed by the garbage collector.
2. Call stack (execution context): executes functions in LIFO (last in, first out) order; each function call creates a stack frame.

Example:

```javascript
function a() {
  b();
}

function b() {
  console.log("Hello");
}

a();
// Call stack: a() → b() → console.log()
```

🔹 Web APIs & Async Behavior

When you use:

```javascript
setTimeout(() => console.log("Done"), 1000);
```

what actually happens is: setTimeout is handed over to a Web API, the timer runs outside the JS engine, and after 1 second the callback goes to the callback queue.

🔹 Event Loop (The Heart of Async JS)

The event loop continuously checks: if the call stack is empty, then move the next task from the callback queue onto the call stack. This is why JavaScript can handle async tasks despite being single-threaded.

🔹 Callback Queue

Holds tasks like setTimeout callbacks, DOM events (click, load), and other async operations. Example queue: [onClick, onLoad, setTimeout callback]

🔹 Full Flow (Putting It All Together)

1. Code enters the call stack.
2. Async operations move to Web APIs.
3. Results go to the callback queue.
4. The event loop pushes them back onto the call stack when it is empty.

🔹 Why This Matters (Real Engineering Insight)

Understanding this helps you: avoid blocking the main thread, debug async bugs (race conditions, delays), optimize performance (debouncing, throttling), and master promises, async/await, and concurrency patterns.

🔹 Final Thought

JavaScript is not “asynchronous by nature.” It’s the combination of the JS engine, the host environment, and the event loop that creates the illusion of concurrency. Once you truly understand this model, you stop guessing and start engineering. If you're serious about becoming a high-level developer, don’t just write code. Understand how it runs.

#JavaScript #WebDevelopment #Frontend #Programming #SoftwareEngineering #V8 #EventLoop
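A quick way to see the "avoid blocking the main thread" point in action: even a 0 ms timer cannot run while synchronous code holds the call stack. A minimal sketch (the 50 ms busy-wait is just a stand-in for any heavy synchronous work):

```typescript
const order: string[] = [];
const started = Date.now();

setTimeout(() => {
  // Scheduled with a 0 ms delay, but the event loop can only deliver it
  // once the call stack is empty, i.e. after the busy loop below.
  order.push(`timeout after ~${Date.now() - started}ms`);
}, 0);

// Synchronous busy-wait: while this occupies the call stack, the event
// loop cannot move anything out of the callback queue.
while (Date.now() - started < 50) { /* blocking the call stack */ }
order.push("sync done");
```

The timer's measured delay ends up being roughly the length of the synchronous work, not the 0 ms it asked for.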
JavaScript has two completely different systems for sharing code between files. Some developers use both without knowing which one they're using, or why it matters. Every JavaScript file you've ever written that shares code with another file uses a module system, yet many developers use them daily without truly understanding what separates one from the other. The syntactic gap is small. The consequences of it are not.

Modules are a way to break your code into separate, reusable pieces rather than writing everything in one enormous file. They make code clean, organised, and maintainable. There are two main systems for doing this in JavaScript, and they are not the same.

CommonJS:
- Uses require() to bring in modules and module.exports to send them out
- Loads modules synchronously (blocks execution)
- Mostly used in older Node.js environments

```javascript
// Exporting
module.exports = { greet: () => console.log("Hello") };

// Importing
const myModule = require('./myModule.js');
myModule.greet(); // "Hello"
```

How CommonJS loads: synchronously, meaning JavaScript stops everything and waits for the module to fully load before moving on. This is problematic for browsers.

ES Modules (ESM): the current standard for browsers and modern Node.js. It is cleaner, more powerful, and built for performance.

```javascript
// Named export
export const greet = () => console.log("Hello");

// Default export
export default function greet() { console.log("Hello"); }

// Importing named
import { greet } from './myModule.js';

// Importing default
import greet from './myModule.js';
```

Dynamic imports: ES Modules can load only what you need, only when you need it. When you hear "you're not code-splitting," this is what it means. Instead of loading every module upfront, dynamic imports let you load a module on demand, only when it's actually needed. This results in faster initial load times. import() returns a Promise, so it works with .then() or async/await, and it is supported natively in ESM and available in CJS environments too.

ES Modules aren't just a syntax choice. They're an architectural decision. The right system, used the right way, is the difference between an app that loads instantly and one that makes users wait. And users don't like to wait. So if you need your code base to be performant, you have your answer.
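A runnable sketch of the on-demand loading described above; here the built-in `node:path` module stands in for a heavy feature module (in a browser this would be a specifier like `'./chart.js'`):

```typescript
async function loadOnDemand(): Promise<string> {
  // Nothing is loaded until this line actually executes: import() kicks off
  // the load and resolves with the module's namespace object.
  const path = await import("node:path");
  return path.join("reports", "q3.csv");
}
```

Because `import()` returns a Promise, it composes with `.then()`, `async/await`, and error handling via `try/catch`, which is what makes lazy-loading routes and features so ergonomic.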
🚀 Today we are going to analyse the JavaScript microtask queue, macrotask queue, and event loop.

A junior developer once asked me during a code review: "Why does Node.js behave differently even when the code looks simple?" So I gave him a small JavaScript snippet and asked him to predict the output.

```javascript
console.log("Start");

setTimeout(() => {
  console.log("Timeout");
}, 0);

Promise.resolve().then(() => {
  console.log("Promise");
});

console.log("End");
```

He answered confidently: Start, Timeout, Promise, End. But when we ran the code, the output was: Start, End, Promise, Timeout. He looked confused. That’s when we started analysing how JavaScript actually works internally.

🧠 Step 1: JavaScript is single-threaded
JavaScript runs on a single thread. It executes code line by line inside the call stack. So first it runs console.log("Start") → Start, then console.log("End") → End. Now the stack becomes empty.

⚙️ Step 2: Macrotask queue
setTimeout goes to the macrotask queue. Even though the timeout is 0ms, it does not execute immediately; it waits in the macrotask queue. Examples of macrotasks: setTimeout, setInterval, setImmediate, I/O operations, HTTP requests.

⚡ Step 3: Microtask queue
The Promise callback goes to the microtask queue. Examples of microtasks: Promise.then(), Promise.catch(), Promise.finally(), queueMicrotask(), and (in its own, even higher-priority Node.js queue) process.nextTick. Microtasks always get higher priority: they execute before macrotasks.

🔁 Step 4: Event loop
Now the event loop starts working. It checks: is the call stack empty? Yes. Then it executes all microtasks, and only then the macrotasks. So the execution order becomes: Start, End, Promise, Timeout. Now everything makes sense.

🏗️ Real production example
Imagine a Node.js API:

```javascript
app.get("/users", async (req, res) => {
  console.log("Request received");
  setTimeout(() => console.log("Logging"), 0);
  await Promise.resolve();
  console.log("Processing");
  res.send("Done");
});
```

Execution order: Request received, Processing, Logging. Why? Because the Promise (microtask) runs before setTimeout (macrotask). This directly affects API response time, logging, background jobs, queue processing, and performance optimization.

🎯 Why every Node.js / NestJS / Next.js developer should know this
Because internally: async/await uses Promises, API calls use the event loop, background jobs use macrotasks, middleware chains ride on microtasks, and performance depends on queue execution order. Without understanding this, debugging production issues becomes very difficult.

💡 Final thought
JavaScript is not just a language. It is an event-driven execution engine. If you understand the microtask queue, macrotask queue, and event loop, you don’t just write code; you understand how the runtime thinks. And once you understand the runtime, you start building faster and more scalable systems.

#JavaScript #NodeJS #EventLoop #Microtasks #Macrotasks #NextJS #NestJS #SystemDesign #SoftwareEngineering
𝗧𝗼𝗽𝗶𝗰 𝟭𝟮: 𝗘𝘃𝗲𝗻𝘁 𝗟𝗼𝗼𝗽 (𝗠𝗮𝗰𝗿𝗼 𝘃𝘀 𝗠𝗶𝗰𝗿𝗼𝘁𝗮𝘀𝗸𝘀)

JavaScript is single-threaded, yet it handles thousands of concurrent operations without breaking a sweat. The secret isn't raw speed; it's an incredibly strict, ruthless prioritization engine. If you don't understand the difference between a macrotask and a microtask, your "asynchronous" code is a ticking time bomb for UI freezes.

Summary: the event loop is the conductor of JavaScript's concurrency model. It continuously monitors the call stack and the task queues. To manage asynchronous work, it uses two distinct queues with vastly different priorities: macrotasks (like setTimeout, setInterval, and network events) and microtasks (like Promises and MutationObserver).

Crux:
- The VIP queue: microtasks have absolute priority. The engine will not move to the next macrotask, nor will it allow the browser to re-render the screen, until the entire microtask queue is completely empty.
- The normal queue: macrotasks execute one by one. After a single macrotask finishes, the event loop checks the microtask queue again.
- The starvation risk: because microtasks can spawn other microtasks, a recursive Promise chain can hold the main thread hostage, permanently blocking the UI from updating.

The deep insight (architect's perspective): visualize the event loop as a traffic control system where microtasks are emergency vehicles. When you resolve a massive chain of Promises or heavy async/await logic, you are flooding the intersection with ambulances. The browser's rendering engine, the normal traffic, is forced to sit at a red light until every single emergency vehicle has cleared. We don't just use async patterns for code readability; we must actively manage thread occupancy. For heavy client-side computations, we must intentionally interleave macrotasks (like setTimeout(..., 0)) to force our code to yield control back to the event loop, allowing the browser to paint frames and keeping the UI responsive.

Tip: if you have to process a massive dataset on the frontend (e.g., parsing a huge JSON file or formatting thousands of rows), don't churn through the entire set in one synchronous pass or one unbroken chain of microtasks; that monopolizes the main thread and locks the UI. Instead, "chunk" the array and process each chunk using setTimeout or requestAnimationFrame. This gives the event loop room to breathe and the browser a chance to render 60 frames per second between your computation chunks.

Tags: #SoftwareArchitecture #JavaScript #WebPerformance #EventLoop #FrontendEngineering #CleanCode
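The chunking tip can be sketched in a few lines (the chunk size and callback shape are arbitrary choices, not a prescribed API):

```typescript
// Process a large array in slices, yielding to the event loop between
// slices so the browser can paint (or Node can service I/O) in the gaps.
function processInChunks<T>(
  items: T[],
  chunkSize: number,
  handle: (item: T) => void,
  done: () => void,
): void {
  let index = 0;
  const runChunk = (): void => {
    const end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) handle(items[index]);
    if (index < items.length) {
      setTimeout(runChunk, 0); // macrotask boundary: rendering gets a turn
    } else {
      done();
    }
  };
  runChunk();
}
```

In a browser, swapping `setTimeout(runChunk, 0)` for `requestAnimationFrame(runChunk)` aligns each chunk with the frame cadence instead.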
Building projects is the best way to solidify JavaScript skills. Here are 10 project ideas that start simple and build up, covering DOM manipulation, APIs, events, and more. Each includes key features to implement—grab a code editor and start coding! 1. To-Do List App – Add, delete, and mark tasks as complete with checkboxes. – Use localStorage to persist data across browser sessions. – Bonus: Add categories or due dates for organization. 2. Weather App – Fetch real-time weather data using the OpenWeatherMap API (free key needed). – Display temperature, humidity, city search, and weather icons. – Bonus: Show forecasts for the next few days. 3. Quiz App – Create multiple-choice questions from a JavaScript array or JSON. – Track score, add a timer, and include a restart button. – Bonus: Randomize questions and save high scores. 4. Calculator – Implement basic operations: addition, subtraction, multiplication, division. – Handle edge cases like division by zero or invalid input. – Bonus: Add advanced functions like square root or memory. 5. Image Slider – Build a carousel with next/prev buttons and auto-slide functionality. – Include dot indicators for navigation and optional fade transitions. – Bonus: Make it responsive for mobile swipe gestures. 6. Form Validator – Validate fields like email, password strength, and required inputs in real-time. – Display dynamic error/success messages with CSS classes. – Bonus: Submit valid forms to a mock API or email service. 7. Typing Speed Test – Display a paragraph or sentence for users to type. – Calculate words per minute (WPM), accuracy, and error count. – Bonus: Add multiple test lengths and a leaderboard. 8. Random Quote Generator – Pull quotes from an array or API like Quotable.io. – Refresh with a button and add share options
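For idea 7, the only non-obvious part is the WPM math; a sketch using the common convention that one "word" equals five typed characters (the function shape is my own):

```typescript
// Compute words-per-minute, error count, and accuracy for a typing test.
function typingStats(target: string, typed: string, seconds: number) {
  const wpm = (typed.length / 5) / (seconds / 60); // 5 chars = 1 "word"
  let errors = 0;
  for (let i = 0; i < typed.length; i++) {
    if (typed[i] !== target[i]) errors++; // character-by-character comparison
  }
  const accuracy =
    typed.length === 0 ? 100 : ((typed.length - errors) / typed.length) * 100;
  return { wpm: Math.round(wpm), errors, accuracy: Math.round(accuracy) };
}
```

For example, typing "hellp world" against "hello world" in 6 seconds yields 22 WPM with one error.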
Golang vs JavaScript: A Systems-Level Perspective

The comparison between Go and JavaScript is often oversimplified as performance vs flexibility. In practice, the differences are rooted in runtime models, concurrency, and operational behavior.

JavaScript (Node.js) uses a single-threaded event loop with non-blocking I/O. This makes it highly effective for I/O-bound workloads such as APIs, real-time apps, and streaming services. However, CPU-bound tasks can become bottlenecks. While worker threads exist, they are not the default model and add complexity in communication and design.

Go is built with concurrency as a first-class concept. Goroutines are lightweight and managed by the Go scheduler, allowing thousands of concurrent tasks with minimal overhead. Channels provide a structured way to communicate between them, making concurrent systems easier to design compared to callback- or promise-based patterns.

In terms of performance, Go generally provides better throughput and lower latency for CPU-intensive and highly concurrent workloads. Its compiled nature and efficient runtime contribute to more predictable performance. JavaScript, powered by V8, is highly optimized but still constrained by the event loop under heavy load.

Memory management is another differentiator. Go offers more predictable memory usage, which is important in systems where resource control matters. JavaScript abstracts memory handling, which improves development speed but can introduce unpredictability at scale.

The ecosystem is where JavaScript dominates. With npm and widespread adoption, it enables full-stack development using a single language. Go’s ecosystem is smaller but more opinionated, with a strong standard library and fewer abstractions.

From an operational standpoint, Go produces self-contained binaries that simplify deployment and reduce environment-related issues. Node.js applications depend on runtime environments and package management, which can add complexity in production.

The practical takeaway: JavaScript is optimized for developer productivity and rapid iteration, especially in frontend and I/O-heavy systems. Go is optimized for performance, concurrency, and operational simplicity in backend services. The decision should be driven by system requirements, not language preference.