𝗝𝗮𝘃𝗮𝗦𝗰𝗿𝗶𝗽𝘁 𝗶𝘀 𝘀𝗶𝗻𝗴𝗹𝗲-𝘁𝗵𝗿𝗲𝗮𝗱𝗲𝗱. But your async code doesn't run in the order you think. Most developers get this wrong — including seniors.

What does this print?

console.log('1')
setTimeout(() => console.log('2'), 0)
Promise.resolve().then(() => console.log('3'))
console.log('4')

Take 10 seconds. Write your answer. Then keep reading.

━━━━━━━━━━━━━━━━━━━━━━━

𝗧𝗵𝗲 𝗮𝗻𝘀𝘄𝗲𝗿: 1 → 4 → 3 → 2

Most people predict: 1 → 4 → 2 → 3. They're wrong. Here's exactly why.

━━━━━━━━━━━━━━━━━━━━━━━

𝗧𝗵𝗲 𝗲𝘃𝗲𝗻𝘁 𝗹𝗼𝗼𝗽 𝗵𝗮𝘀 𝘁𝘄𝗼 𝗾𝘂𝗲𝘂𝗲𝘀, 𝗻𝗼𝘁 𝗼𝗻𝗲.

𝗠𝗶𝗰𝗿𝗼𝘁𝗮𝘀𝗸 𝗾𝘂𝗲𝘂𝗲 → Promises, queueMicrotask(), MutationObserver
𝗠𝗮𝗰𝗿𝗼𝘁𝗮𝘀𝗸 𝗾𝘂𝗲𝘂𝗲 → setTimeout, setInterval, I/O, UI events

The rule nobody tells you: after every macrotask, the engine drains the ENTIRE microtask queue before picking the next macrotask. Not one microtask. ALL of them.

━━━━━━━━━━━━━━━━━━━━━━━

𝗧𝗵𝗲 𝗲𝘅𝗮𝗰𝘁 𝗲𝘅𝗲𝗰𝘂𝘁𝗶𝗼𝗻 𝗼𝗿𝗱𝗲𝗿:

Step 1 — Run the call stack (synchronous code first)
→ prints '1', queues the setTimeout callback, queues the Promise callback, prints '4'

Step 2 — Call stack is empty. Drain the microtask queue.
→ the Promise.then callback runs → prints '3' → microtask queue is now empty

Step 3 — Pick the next macrotask.
→ the setTimeout callback runs → prints '2'

setTimeout(fn, 0) does NOT mean "run immediately." It means "run on a future macrotask turn, after all pending microtasks are done."

━━━━━━━━━━━━━━━━━━━━━━━

𝗧𝗵𝗲 𝗽𝗿𝗮𝗰𝘁𝗶𝗰𝗮𝗹 𝗶𝗺𝗽𝗮𝗰𝘁:

This is why code in a Promise callback runs before a setTimeout that was queued at the same time — including React state updates triggered from a .then().

This is why async/await in Node.js doesn't block I/O — I/O callbacks are macrotasks, but .then() chains are microtasks that run between them.

And this is the trap: if you keep creating microtasks inside microtasks, you can starve the macrotask queue permanently. setTimeout never fires. The UI never updates.

━━━━━━━━━━━━━━━━━━━━━━━

𝗡𝗼𝘄 𝘁𝗵𝗲 𝗵𝗮𝗿𝗱𝗲𝗿 𝗼𝗻𝗲:

console.log('start')
setTimeout(() => console.log('timeout'), 0)
Promise.resolve()
  .then(() => {
    console.log('promise 1')
    return Promise.resolve()
  })
  .then(() => console.log('promise 2'))
console.log('end')

What's the output?
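The drain-the-whole-queue rule (and the starvation trap described above) can be seen in a small bounded sketch: a macrotask queued *first* still runs after every pending microtask. Make the chain unbounded and setTimeout would never fire.

```typescript
// A macrotask queued first still loses to every pending microtask.
const order: string[] = [];

// Queued immediately with a 0 ms delay, yet it runs last.
const done = new Promise<void>((resolve) => {
  setTimeout(() => { order.push('macrotask'); resolve(); }, 0);
});

// Three chained microtasks; each .then callback is queued from the previous one.
Promise.resolve()
  .then(() => { order.push('micro 1'); })
  .then(() => { order.push('micro 2'); })
  .then(() => { order.push('micro 3'); });

// After the sync code finishes, the engine drains micro 1-3, THEN runs the timer:
// order ends up ['micro 1', 'micro 2', 'micro 3', 'macrotask'].
```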
Drop your answer below — I'll reply with the explanation. #JavaScript #FrontendDevelopment #ReactJS #NodeJS #SoftwareEngineering #ImmediateJoiner #OpenToWork #FrontendDeveloper #React #ReactDeveloper
Why Async Code Doesn't Run in Order in JavaScript
More Relevant Posts
In Node.js, writing scalable backend code is not just about APIs… it's about how you manage logic, state, and performance. That's where Closures and Higher-Order Functions (HOF) play a key role.

🔥 Core Concepts (Node.js Perspective)

✔ Closures
• Preserve data across function calls
• Help avoid global variables
• Useful for managing request-level state

✔ Higher-Order Functions (HOF)
• Enable reusable and flexible logic
• Power middleware and async handling
• Make code modular and clean

⚡ Real-Time Node.js Use Cases

✅ 1. Middleware Design (Express.js)
HOFs wrap request handlers; closures store request-specific data.
👉 Example: auth middleware, logging middleware

✅ 2. API Rate Limiting
A closure maintains the request count per user, preventing server overload in real-time systems.

✅ 3. Caching Layer (Performance Optimization)
Closures store previous API responses, reducing DB calls → faster response times.

✅ 4. Booking / Payment Flow (Real Projects)
Closures maintain state across multiple API calls.
Example: travel booking → availability → payment → confirmation

✅ 5. Error Handling Wrapper (HOF)
A reusable async error handler avoids repeating try-catch in every API.

✅ 6. Custom Logger & Monitoring
An HOF wraps APIs for logging; closures store metadata like request time and user info.

💡 Why It Matters in Node.js
• Improves server performance
• Helps handle high concurrent requests
• Keeps code clean & scalable
• Essential for event-driven architecture

🧠 Final Thought
In Node.js, Closures + HOF are not optional… they are behind the scenes of every efficient backend system — from middleware to API handling.

#NodeJS #JavaScript #BackendDevelopment #MERNStack #ExpressJS #FullStackDeveloper #SoftwareEngineering #APIDevelopment #Tech #Developers #CodingLife
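Use case 2 above (rate limiting via a closure) can be sketched in a few lines. `createRateLimiter` is a hypothetical helper, not a library API; the point is that the returned function closes over `counts`, so per-user state survives across calls without any global variable:

```typescript
// Hypothetical helper: the returned function closes over `counts`,
// so per-user state persists across calls without globals.
function createRateLimiter(maxRequests: number) {
  const counts = new Map<string, number>(); // captured by the closure below

  return function isAllowed(userId: string): boolean {
    const used = counts.get(userId) ?? 0;
    if (used >= maxRequests) return false; // over the limit: reject
    counts.set(userId, used + 1);
    return true;
  };
}

const isAllowed = createRateLimiter(2); // at most 2 requests per user
```

In a real Express app the same shape becomes a middleware factory: the HOF takes the config, returns the `(req, res, next)` handler, and the closure keeps the counters.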
Understanding Async vs Sync API Handling in Node.js (A Practical Perspective)

When building scalable backend systems, one concept that truly changes how you think is synchronous vs asynchronous API handling. Let's break it down in a simple, real-world way.

Synchronous (Blocking) Execution
In a synchronous flow, tasks are executed one after another:
- Request comes in
- Server processes it
- Only after completion → next request is handled

Problem: if one operation takes time (like a database query or external API call), everything waits. This leads to:
- Poor performance
- Low scalability
- Bad user experience under load

Asynchronous (Non-Blocking) Execution
Node.js shines because it handles operations asynchronously:
- Request comes in
- Task is sent to the background (I/O operation)
- Server immediately moves on to the next request
- Response is returned when the task completes

Result:
- High performance
- Handles thousands of concurrent users
- Efficient resource utilization

How Node.js Makes This Possible:
- Event Loop
- Callbacks / Promises / Async-Await
- Non-blocking I/O

Instead of waiting, Node.js keeps moving.

Real-World Insight — when working with APIs:
- Use async/await for clean and readable code
- Avoid blocking operations (like heavy computations on the main thread)
- Handle errors properly in async flows

Final Thought: the real power of Node.js is not just JavaScript on the server — it's how efficiently it handles concurrency without spawning a thread per request. Mastering async patterns is what separates a beginner from a solid backend engineer.

Curious to know: what challenges have you faced while handling async operations?

#NodeJS #BackendDevelopment #JavaScript #AsyncProgramming #WebDevelopment
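The non-blocking point above can be shown in a minimal sketch, with `delay` standing in for any I/O call (the function names are illustrative): awaiting two independent operations together takes roughly one delay, not two.

```typescript
// `delay` stands in for any async I/O operation (DB query, external API call).
const delay = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

async function fetchUser(): Promise<string> {
  await delay(50); // simulated 50 ms I/O
  return 'user';
}

async function fetchOrders(): Promise<string> {
  await delay(50); // simulated 50 ms I/O
  return 'orders';
}

// Both delays run concurrently, so the total wait is ~50 ms, not ~100 ms.
// Awaiting them one after another would serialize the I/O.
async function loadDashboard(): Promise<string[]> {
  return Promise.all([fetchUser(), fetchOrders()]);
}
```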
🚀 Just published a new article on REST APIs in Backend Development!

If you're learning backend engineering or want to strengthen your fundamentals, this guide breaks down everything you need to know — from basics to real-world usage.

🔍 In this article, I cover:
✔ What REST APIs are and why they matter
✔ HTTP methods (GET, POST, PUT, DELETE) explained simply
✔ Request & response structure
✔ Status codes and JSON handling
✔ Real-world examples and best practices

Whether you're a beginner or brushing up your concepts, this will help you build a strong foundation in backend development.

📖 Read the full article here:
👉 https://lnkd.in/g_EKnJFJ

💬 I'd love to hear your thoughts and feedback!

#BackendDevelopment #RESTAPI #WebDevelopment #SoftwareEngineering #APIs #NodeJS #LearningJourney #Tech
🚀 PUT vs PATCH — The REST API detail many developers misunderstand

While reviewing backend code recently, I noticed something interesting: many APIs expose PUT and PATCH endpoints… but treat them exactly the same. That's a problem.

Here's the difference every backend developer should know:

🎯 Interview Definition
PUT: an HTTP method used to replace the entire resource on the server with the data provided in the request payload.
PATCH: an HTTP method used to partially update a resource — only the specified fields are modified, without affecting the rest of the resource.

🔵 PUT = Replace the entire resource

Current user:
{ "name": "Bob", "email": "bob@gmail.com", "age": 25 }

PUT request:
PUT /users/1
{ "name": "Bob Updated", "email": "bobupdated@gmail.com" }

Result:
{ "name": "Bob Updated", "email": "bobupdated@gmail.com" }

⚠️ Notice something? The age field disappeared, because PUT replaces the whole resource. PUT assumes the payload represents the complete new state.

🟡 PATCH = Partial update

Current user:
{ "name": "Bob", "email": "bob@gmail.com", "age": 25 }

PATCH request:
PATCH /users/1
{ "age": 26 }

Result:
{ "name": "Bob", "email": "bob@gmail.com", "age": 26 }

Only the age changed. Everything else stayed the same.

📌 Simple rule
• PUT → Replace the resource
• PATCH → Update part of the resource

💡 Why most modern APIs prefer PATCH
✔ Smaller payloads
✔ Lower risk of overwriting fields
✔ Better for frequent updates
✔ Works well with frontend forms

🔥 Backend tip — when designing REST APIs:
• Use PUT when the client sends the entire object
• Use PATCH when updating specific fields

Small API design decisions like this make systems cleaner, safer, and easier to maintain.

#BackendDevelopment #REST #APIs #WebDevelopment #NodeJS #SoftwareEngineering #ProgrammingTips #CleanCode #W3Schools
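The two semantics in the example above, stripped down to plain functions (no framework assumed — a real handler would also load and persist the resource):

```typescript
type User = Record<string, unknown>;

// PUT: the payload IS the new resource. Fields the client omits are dropped.
const putUser = (_current: User, payload: User): User => ({ ...payload });

// PATCH: merge the payload into the current resource. Omitted fields survive.
const patchUser = (current: User, payload: Partial<User>): User =>
  ({ ...current, ...payload });
```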
Just shipped a Node.js AI backend from scratch!

Built a production-style LLM server with:
- Custom system prompt + personality design
- Free LLM API integration (no keys hardcoded, .env based)
- Conversation memory (context-aware replies, long-term-ready)
- Clean REST endpoints, tested via Postman

This project forced me to think like a backend engineer and a prompt engineer at the same time — not just "call the model", but design how it thinks, remembers, and responds.

Repo is live on GitHub — open to feedback, suggestions, and collaboration on smarter AI agents 🤝

#NodeJS #BackendDevelopment #LLM #GenerativeAI #APIDevelopment #JavaScript #OpenSource #StudentDeveloper #AIProjects
Most developers think TypeScript slows them down. It's actually saving them hours they don't even realize they're losing.

Here's the mistake I see constantly: developers use TypeScript like it's just "JavaScript with types added." They define a type once, then scatter `any` everywhere when things get complicated. That's not TypeScript. That's TypeScript with the safety turned off.

The real value shows up when you treat your types as the source of truth for your entire system.

Here's a concrete example from a project I built recently. I was integrating an LLM API into a NestJS backend. The response shape from the AI model could vary — sometimes a field existed, sometimes it didn't. Instead of handling this with runtime checks scattered across 8 different functions, I defined a strict discriminated union type upfront.

The result:
– TypeScript caught 3 bugs at compile time before a single API call was made
– Every function that touched that data knew exactly what shape to expect
– When the AI provider changed their response format, I got one clear error in one place — not silent failures across the app

The key takeaway: your types should reflect reality, including the messy parts. Model optional fields, union types, and error states explicitly. Don't pretend data is clean when it isn't.

This is one of those habits that separates developers who "know TypeScript" from developers who actually use it well. If you're building anything with external APIs or AI integrations, this approach will save you real debugging time.

What's your relationship with TypeScript — friend or frustrating necessity?

#TypeScript #FullStackDevelopment #SoftwareEngineering #APIDevelopment
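A sketch of what such a discriminated union might look like — the variant names and fields here are illustrative, not the actual project's types:

```typescript
// Illustrative shapes for a variable LLM response. The `kind` tag is the
// discriminant: TypeScript narrows the remaining fields from it automatically.
type LlmResponse =
  | { kind: 'text'; content: string }
  | { kind: 'tool_call'; tool: string; args: Record<string, unknown> }
  | { kind: 'error'; message: string };

function summarize(r: LlmResponse): string {
  switch (r.kind) {
    case 'text':
      return r.content; // narrowed: only the 'text' variant has .content
    case 'tool_call':
      return `calling ${r.tool}`;
    case 'error':
      return `failed: ${r.message}`;
  }
}
```

Add a fourth variant and every `switch` that doesn't handle it becomes a compile error in one place — the "one clear error" behaviour the post describes.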
I rebuilt CareerQ from scratch. Not a refactor. A complete restart.

The original was a tightly coupled Next.js full-stack app built while I was still learning. It worked, but the backend had no identity of its own. Everything was tangled with the frontend. So I separated them.

CareerQ v2 is a backend-only Node.js service. No framework opinions. No frontend pulling the architecture in two directions.

Here is what the new architecture looks like, and why I made each decision:

Express.js + TypeScript strict mode
No `any` types. No shortcuts. Every input validated at the boundary before it reaches the service layer.

MongoDB for LLM response storage
LLM outputs are unpredictable JSON. A flexible document store handles this without fighting schema migrations every time the AI response shape changes.

Zod for runtime validation
TypeScript catches compile-time errors. Zod catches what TypeScript cannot — malformed requests at runtime. Both layers active on every endpoint.

Layered architecture: Routes → Controllers → Services → Models
Each layer has one job. Controllers do not touch the database. Services do not know about HTTP. Clean separation makes the codebase testable and replaceable.

JWT auth with bcrypt password hashing
Stateless authentication. Tokens expire. Passwords are never stored in plain text.

Centralized error handling
One error middleware handles everything — Zod failures, JWT errors, DB errors, LLM parsing failures. No scattered try-catch blocks returning inconsistent response shapes.

Paginated interview history
Sortable, configurable page size, returns total count and page metadata. Clients never have to guess what is on the next page.

The image attached is the actual API response — real AI-generated interview questions coming back from the live service.

What is coming next: SSE streaming so LLM tokens arrive in real time instead of a single blocking response, and Redis-backed rate limiting per user on LLM endpoints.

Live API: https://lnkd.in/gNcx32et
GitHub: https://lnkd.in/gStKFJ3C

Building in public. Backend only from here.

#NodeJS #TypeScript #BackendDevelopment #LLM #SystemDesign #AI #GenerativeAI #Learning #Growth
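The centralized-error-handling idea can be sketched framework-free. The error classes below are hypothetical stand-ins for Zod/JWT/DB failures; in an Express app this mapping would live inside the single error middleware:

```typescript
// Hypothetical error kinds standing in for validation / auth failures.
class ValidationError extends Error {}
class AuthError extends Error {}

// One place that maps any thrown value to a consistent { status, body } shape,
// so no route hand-rolls its own error response.
function toResponse(err: unknown): { status: number; body: { error: string } } {
  if (err instanceof ValidationError) {
    return { status: 400, body: { error: err.message } };
  }
  if (err instanceof AuthError) {
    return { status: 401, body: { error: err.message } };
  }
  // Unknown failures: never leak internals to the client.
  return { status: 500, body: { error: 'Internal server error' } };
}
```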
Building a Full-Stack Agentic Research Engine with .NET 9, React, and Ollama 🚀

I've officially kicked off Project 1 — an AI Research Engine that bridges the gap between local data and LLMs. As a Full-Stack Developer, my goal is to build a system that is not just smart, but also scalable and user-friendly. Today's focus was the Backend Architecture. Here's a look at my setup (see screenshot):

🔹 The .NET 9 Powerhouse: I'm using the .NET Dependency Injection (DI) container to manage my AI services. By registering SemanticKernelService and ChatService properly, I'm ensuring the application remains loosely coupled. Today it's Ollama; tomorrow I can switch to Azure OpenAI with zero friction.

🔹 The React Frontend: The UI is being built with React & Tailwind CSS. Why? Because handling real-time AI streaming responses and managing complex file upload states (PDFs/YouTube URLs) requires a robust frontend framework.

🔹 Privacy-First with Ollama: By running Llama 3.2 locally, I'm ensuring that no sensitive research data leaves the machine. Privacy is a feature, not an afterthought.

Current Tech Stack:
💻 Frontend: React, Tailwind CSS, Vite
⚙️ Backend: .NET 9, Semantic Kernel
🧠 LLM: Ollama (Llama 3.2 & Nomic-Embed)
🗄️ Vector DB: Pinecone & SQLite

Next up: connecting the React frontend to my .NET Ingestion API to start processing real-world documents. The journey to becoming an AI-Orchestrator continues! 🛠️

#FullStack #ReactJS #DotNet #Ollama #SemanticKernel #AIAgents #WebDev #LocalAI #SoftwareArchitecture
𝗧𝘆𝗽𝗲𝗦𝗰𝗿𝗶𝗽𝘁 𝗣𝗮𝘁𝘁𝗲𝗿𝗻𝘀 𝗜 𝗔𝗰𝘁𝘂𝗮𝗹𝗹𝘆 𝗨𝘀𝗲 𝗗𝗮𝗶𝗹𝘆

There is a moment with TypeScript when you stop fighting it. For me it was 2 AM on a Tuesday: a production bug that would have been impossible with good types. That changed how I think about code.

This is not a beginner guide. If you are still confused about interface vs type, read something else. This is what is in my head when I code today. The patterns I use without thinking. The ones that saved me. The mistakes I made before I understood them.

If I had to keep one pattern it would be this: discriminated unions.

You create a union type. Each variant has a common property like `status` or `kind`. TypeScript uses this to know exactly what you are working with.

type ApiResponse<T> =
  | { status: 'loading' }
  | { status: 'error'; error: string; code: number }
  | { status: 'success'; data: T; timestamp: Date };

function renderUser(response: ApiResponse<User>) {
  switch (response.status) {
    case 'loading':
      return <Spinner />;
    case 'error':
      // TypeScript knows response.error and response.code exist here
      return <ErrorMessage message={response.error} code={response.code} />;
    case 'success':
      // TypeScript knows response.data and response.timestamp exist here
      return <UserCard user={response.data} />;
  }
}

TypeScript forces you to handle every case. Add a new state like `cancelled` and the compiler tells you exactly where you forgot to handle it. It is like a silent pair programmer.

I use this for:
- Domain events
- UI states
- Async operation results

Last year I built an e-commerce system. I modeled every order state this way.

type OrderState =
  | { kind: 'draft'; items: CartItem[] }
  | { kind: 'pending_payment'; orderId: string; total: Money }
  | { kind: 'paid'; orderId: string; paymentId: string; paidAt: Date }
  | { kind: 'shipped'; orderId: string; trackingCode: string }
  | { kind: 'delivered'; orderId: string; deliveredAt: Date }
  | { kind: 'cancelled'; orderId: string; reason: string };

Each state has only the data it needs. No weird optional fields. No `trackingCode: string | null` where you do not know if null means "not shipped" or "an old order". The type is the documentation.

Branded types. This one was harder to learn but I cannot live without it.

The problem is simple: `userId: string` and `productId: string` are the same type to TypeScript. They are not the same to your business. Mixing them is a bug. Without branded types TypeScript does not catch this error:

function getUser(id: string) { /* ... */ }
function getProduct(id: string) { /* ... */ }

const productId = '123';
getUser(productId); // TypeScript says this is fine. It is wrong.

With branded types you create unique types from primitives:

type Brand<T, B> = T & { readonly __brand: B };

type UserId = Brand<string, 'UserId'>;
type ProductId = Brand<string, 'ProductId'>;
🚀 Just built a "Dual-Engine" AI application using Spring Boot & React!

Recently, I've been diving deep into the Spring AI framework. I wanted to build an architecture that could handle the best of both worlds: cloud AI for heavy lifting and local AI for offline/private tasks. Here is what I put together:

1️⃣ The Backend (Java/Spring Boot): integrated both Google Gemini 2.5 Flash and a local Ollama model (Gemma 2) running side-by-side in the same application.

2️⃣ The Frontend (React): built an interactive dashboard to send a single prompt to multiple LLMs simultaneously and "race" their responses in real time.

💡 My biggest technical takeaway: solving the "Two Brains" problem. When you import multiple AI starters into a Spring Boot pom.xml, Spring's AutoConfiguration can get confused about which ChatModel bean to inject into your controllers.

The solution? Spring's @Qualifier annotation. By explicitly naming the beans (@Qualifier("ollamaChatModel") vs @Qualifier("googleGenAiChatModel")), I was able to safely route requests to completely different AI ecosystems from within the same API.

It was a great exercise in managing Maven dependencies (and fighting the occasional Maven cache bug 😅) while building a truly flexible Generative AI wrapper.

What is your preferred local LLM to run right now? Let me know below! 👇

#SpringBoot #Java #ReactJS #GenerativeAI #Ollama #GoogleGemini #SoftwareEngineering #WebDevelopment