The Try-Catch Time Trap: Why Async Errors Escape

Let's look at code that reads and parses an invalid JSON file.

Sync code:

try {
  const data = fs.readFileSync("invalid.json", "utf-8");
  const jsonData = JSON.parse(data);
} catch (err) {
  console.log("error caught:", err.message); // catches the parsing error
}

All good.

Async code:

try {
  fs.readFile("invalid.json", "utf-8", (err, data) => {
    const jsonData = JSON.parse(data);
  });
} catch (err) {
  console.log("error caught:", err.message); // does NOT even run
}

The catch block won't execute. Now the question is…
👉 Why?

Here's how I started thinking about it:

When JS hits an error → it stops execution → looks for a catch block in the current call stack → if none is found, the error bubbles up the stack.

In sync code:
👉 Everything runs in one continuous stack → the error happens → the catch block is right there → so it works.

But async changes things. When this line runs:

fs.readFile(..., callback)

👉 JS does NOT execute the callback immediately.

Instead:
→ it registers the callback
→ hands it off to the event loop
→ and moves on

Now the important part 👇
👉 The current call stack finishes executing → which means the try-catch is gone.

Later…
👉 when the file read completes → the event loop pushes the callback onto a (now empty) call stack → the callback runs.

But now:
👉 this is a new call stack.

And the old try-catch?
👉 already gone.

So when the error happens inside the callback:
❌ there is no catch block anymore.

That's when it clicked for me:
👉 try-catch only works within the same execution stack.

Not across time. Not across async boundaries.

#JavaScript #Node #Programming #ErrorHandling #Interview #Eventloop #Callstack
More Relevant Posts
🎬 𝘖𝘯𝘦 𝘤𝘰𝘯𝘤𝘦𝘱𝘵. 𝘍𝘰𝘶𝘳 𝘧𝘰𝘳𝘮𝘴. 𝘐𝘯𝘧𝘪𝘯𝘪𝘵𝘦 𝘶𝘴𝘦 𝘤𝘢𝘴𝘦𝘴.

You've seen what Streams are. Now let's meet each one properly.

━━━━━━━━━━━━━━━
🧵 Node.js Streams
𝗧𝗵𝗲 𝟰 𝗙𝘂𝗻𝗱𝗮𝗺𝗲𝗻𝘁𝗮𝗹 𝗦𝘁𝗿𝗲𝗮𝗺𝘀 𝗘𝘅𝗽𝗹𝗮𝗶𝗻𝗲𝗱
━━━━━━━━━━━━━━━

1️⃣ 𝗥𝗲𝗮𝗱𝗮𝗯𝗹𝗲 𝗦𝘁𝗿𝗲𝗮𝗺
Data flows IN. You consume it chunk by chunk.
Example → fs.createReadStream()
Key events:
→ 𝗱𝗮𝘁𝗮 — fires every time a chunk arrives
→ 𝗲𝗻𝗱 — fires when there's nothing left to read
→ 𝗲𝗿𝗿𝗼𝗿 — fires if something goes wrong

2️⃣ 𝗪𝗿𝗶𝘁𝗮𝗯𝗹𝗲 𝗦𝘁𝗿𝗲𝗮𝗺
Data flows OUT. You push it somewhere.
Example → fs.createWriteStream()
Key functions:
→ 𝘄𝗿𝗶𝘁𝗲() — sends a chunk to the destination
→ 𝗲𝗻𝗱() — signals no more data will be written
→ 𝗳𝗶𝗻𝗶𝘀𝗵 event — fires when all data has been flushed

3️⃣ 𝗗𝘂𝗽𝗹𝗲𝘅 𝗦𝘁𝗿𝗲𝗮𝗺
Both Readable AND Writable at the same time.
Example → net.Socket (TCP connections)
A socket can receive data AND send data simultaneously.
Two lanes. One stream. ⚡

4️⃣ 𝗧𝗿𝗮𝗻𝘀𝗳𝗼𝗿𝗺 𝗦𝘁𝗿𝗲𝗮𝗺
Data comes in → gets modified → goes out.
Example → zlib.createGzip()
Reads raw data, compresses it on the fly, outputs compressed chunks.
No need to load the full file. No memory spike. 🔥
Key helper (available on every readable stream, not just Transform):
→ 𝗽𝗶𝗽𝗲() — chains streams together like a pipeline

━━━━━━━━━━━━━━━
𝗧𝗵𝗲 𝗯𝗶𝗴 𝗽𝗶𝗰𝘁𝘂𝗿𝗲:
━━━━━━━━━━━━━━━
Readable → Transform → Writable

That's a full streaming pipeline.
Read a file → compress it → write it to disk.
All in one chain. All chunk by chunk. All production grade. 🚀

Which of the 4 have you used in your projects? Drop it below. 👇

#NodeJS #BackendDevelopment #JavaScript #Streams #Developer
😅 I once changed a value in a “copied” object… and somehow the original data changed too 💥

👉 That’s when I realized I didn’t properly understand shallow vs deep copy.

🚀 Let’s break it down (this will save you from real bugs)

🧠 Why this matters
In JavaScript, objects & arrays are reference types.
So copying them incorrectly means you might accidentally modify the original data 😬

📦 1. Shallow Copy
A shallow copy only copies top-level values.
👉 Nested objects are still shared (same reference)
So:
- Changing top-level → ✅ safe
- Changing nested → 💥 affects the original too

⚠️ The common mistake
You think you created a new object… but deep inside, it’s still pointing to the same memory 😵

🔁 How to create a shallow copy
• Spread → {...obj}
• Object.assign
• Array methods → slice, Array.from

🔐 2. Deep Copy
A deep copy creates a fully independent clone.
👉 Every level is copied
👉 No shared references
So:
- Changing nested data → ✅ completely safe

🔁 How to create a deep copy
👉 structuredClone() (recommended)
- Handles most data types
- Modern & reliable
👉 JSON.parse(JSON.stringify())
- Quick but limited: loses functions, Dates, and undefined

💡 Real Dev Insight
Shallow copy is fast ⚡ Deep copy is safe 🛡️
👉 Use shallow → for simple data
👉 Use deep → for nested structures

🚀 Final Thought:
Most bugs don’t come from logic…
👉 They come from unexpected mutations.
Understand copying → write safer code 💪

#JavaScript #FrontendDevelopment #WebDevelopment #CodingTips #ShallowCopy #DeepCopy #LearnJavaScript #BuildInPublic #100DaysOfCode #LearnInPublic
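A short sketch of the exact bug the post describes, then the `structuredClone()` fix (the variable names are mine; `structuredClone` is available in Node 17+ and modern browsers):

```javascript
// Shallow copy: top level is new, but the nested object is SHARED
const original = { name: "Ada", address: { city: "London" } };
const shallow = { ...original };
shallow.address.city = "Paris";
console.log(original.address.city); // "Paris" — the original mutated!

// Deep copy: every level is an independent clone
const original2 = { name: "Ada", address: { city: "London" } };
const deep = structuredClone(original2);
deep.address.city = "Berlin";
console.log(original2.address.city); // "London" — untouched
```

The first half is the "somehow the original changed" moment; the second half is why `structuredClone()` is the safe default for nested data.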
My Redux Journey: From Classic Redux to RTK Query

There was a time when nobody wanted to touch Redux. Around 2018 to 2020, the complaints were everywhere. Too much boilerplate. Action types, action creators, reducers, thunks, loading states, error states. All of that just to fetch some data. People started moving to Context, Zustand (which dropped in 2019), anything that felt lighter. I get it. I was tired of it too.

Then in 2019, Redux Toolkit showed up and changed the conversation. createSlice cleaned up the mess. createAsyncThunk made async logic readable again. By April 2022, the classic createStore API was officially deprecated, and RTK became the official recommended way to write Redux.

But the real shift for me was RTK Query, which shipped in June 2021 as part of Redux Toolkit 1.6. Data fetching, caching, re-fetching, invalidation, all handled. I stopped writing the same loading and error logic in every project and started focusing on the actual product.

Where it pays off in real projects:
- Refactored older Redux code and removed a big chunk of repetitive store logic
- Replaced custom fetch hooks with RTK Query and got caching for free
- Used tags to keep lists and detail views consistent after mutations

The lesson for me is simple. Good tools don't just save time, they free up your brain to think about the actual problem you're solving. If you wrote off Redux a few years ago, it might be worth another look. It's not the same library anymore.

#React #Redux #ReduxToolkit #RTKQuery #TypeScript #FrontEnd #JavaScript #WebDevelopment #SoftwareEngineering #CleanCode
If you have UserTable, ProductTable, and OrderTable that differ only in their data type, you've written the same component three times. TypeScript generics can collapse all three into one:

// Three copies, three maintenance points
const UserTable = ({ users }: { users: User[] }) => ...
const ProductTable = ({ products }: { products: Product[] }) => ...
const OrderTable = ({ orders }: { orders: Order[] }) => ...

// One generic component
function Table<T>({ data, columns }: {
  data: T[];
  columns: ColumnDef<T>[];
}) {
  return data.map((row, i) => (
    <Row key={i} row={row} columns={columns} />
  ));
}

Same component, full type inference. TypeScript infers T from the data prop; you never write the type argument explicitly. If a column accessor references a field that doesn't exist on T, TypeScript catches it at compile time, not at runtime.

This is how TanStack Table v8 is built: the core Table<TData> type carries the row data type through columns, rows, cells, and sorting logic. Every accessor is type-checked against TData automatically.

One syntax note for TSX files: arrow function generics need a trailing comma to avoid JSX parser ambiguity:

const Component = <T,>({ data }: { data: T }) => ...

When this doesn't apply:
• One-off components that won't be reused: generics add cognitive overhead
• Components that differ in behavior, not just type: composition handles that better
• Teams new to TypeScript generics: make the duplication visible first, then extract

Do you have component files that look suspiciously identical except for their prop types?

#TypeScript #ReactDevelopment #JavaScript #FrontendEngineering #Generics
A small JavaScript check turned into a good lesson this week.

I initially had a validation like:

𝙭 !== 𝙣𝙪𝙡𝙡 && 𝙭 !== ''

It looked correct, until edge cases started slipping through. In some scenarios, x was undefined, and the condition still passed.

The fix was simple:

𝙭 != 𝙣𝙪𝙡𝙡 && 𝙭 !== ''

Using != null intentionally to cover both null and undefined.

But the bigger takeaway wasn't the syntax. In backend systems, especially when dealing with external or loosely structured data, assumptions about input shape can quietly break things. You don't just get clean values: you get missing fields, partial payloads, and inconsistent states.

This bug was a reminder to:
• Define stricter input boundaries
• Normalize data early
• Avoid relying on implicit assumptions

Sometimes a small condition reveals a larger gap in how we think about data contracts.

#BackendDevelopment
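The fixed guard from the post, wrapped in a hypothetical helper (`isPresent` is my name, not from the original) so the edge cases are easy to see side by side:

```javascript
// Loose != null is true only for null and undefined (by design),
// so this one check covers both; !== '' still rejects empty strings.
function isPresent(x) {
  return x != null && x !== "";
}

console.log(isPresent("hello"));    // true
console.log(isPresent(""));         // false
console.log(isPresent(null));       // false
console.log(isPresent(undefined));  // false — the case the old check missed
console.log(isPresent(0));          // true  — 0 is a real value, not "missing"
```

Note that falsy-but-valid values like 0 and false still pass, which is exactly why `!x` would have been the wrong fix here.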
🚀 Just shipped Developer Tools Hub v3.0.0 — 30 free browser-based developer tools, zero sign-up required.

After months of work, I'm excited to share the biggest update to Developer Tools Hub, an open-source collection of utilities every developer reaches for daily.

What's new in v3.0.0:
→ Monaco Editor (the engine behind VS Code) now powers 17 tools with syntax highlighting, autocomplete, and theme-aware editing
→ 12 new tools including SQL Formatter, Diff & Merge, Markdown Editor, Code Playground (28+ languages), cURL Client, UUID Generator, Cron Builder, and more
→ Convex backend for real-time feedback and analytics
→ 30 tools total, all running 100% in your browser. No data leaves your machine.

Tech stack: Next.js 16 · React 19 · TypeScript 6 · Tailwind CSS v4 · pnpm

Some tools you might find useful:
🔧 JSON Formatter & Validator
🔧 JWT Decoder
🔧 Base64 Encoder/Decoder
🔧 Regex Tester
🔧 Color Converter & Gradient Generator
🔧 Code Playground (JS, Python, Rust, Go, C++, and 23 more)
🔧 cURL Client with code generation

Everything is open-source, WCAG 2.1 AA accessible, and works offline.

Check it out: https://lnkd.in/dNRa5nQV

What developer tool do you wish existed? Drop it in the comments; I might build it next. 👇

#WebDevelopment #OpenSource #DeveloperTools #JavaScript #TypeScript #NextJS #ReactJS #SoftwareEngineering #FrontendDevelopment #coding
Your FastAPI backend is fast to build. But is it fast to run?

Most developers find out the answer at the worst possible moment: when real users hit it at the same time. Endpoints slow down. Requests pile up. Users drop off. Not because the code is wrong. Because it is blocking.

Here is what blocking actually looks like in production:

Your user hits an endpoint. FastAPI calls the database. That query takes 200ms. During those 200ms your server is frozen. Not slow. Frozen. Every other request sits in a queue waiting for that one query to finish.

100 users hit your API at the same time. User 1 gets served. Users 2 to 100 wait in line. That is sync. That is blocking I/O.

FastAPI was built to never work that way. With async/await, while your database query runs in the background, your server is already picking up the next request. And the next. And the next. 200ms of database wait becomes invisible to every other user.

In real backend terms:

SYNC (blocks):

def get_orders(user_id: int):
    return db.query(user_id)

ASYNC (non-blocking):

async def get_orders(user_id: int):
    return await db.query(user_id)

Same logic. Same database. Same server. But now 100 users get served in the time it used to take to serve 1.

This matters even more when your endpoints call external services:
1. Payment gateway: 300ms wait.
2. AI model response: 2 to 3 seconds wait.
3. Email service: 500ms wait.

With sync, every user feels every millisecond of every one of those waits. With async, none of them do.

FastAPI gives you non-blocking I/O natively. No extra setup. No plugins. No workarounds. Just write async. Add await. Let FastAPI handle the rest.

Your backend was already fast to build. Now make it fast to run.

Are you using async endpoints in your FastAPI projects? 👇

#FastAPI #Python #BackendDevelopment #AsyncProgramming #SoftwareEngineering #APIDesign #PythonDeveloper #WebDevelopment #TechIn2026 #BuildInPublic
hi connections

Day 25 of 30: Efficient Data Merging with LeetCode 2722 🚀

Today’s challenge, Join Two Arrays by ID, perfectly mirrors a common task in full-stack development: combining data from different API endpoints or database tables into a single, unified list.

The Strategy
The key to solving this efficiently is avoiding nested loops (which would lead to slow O(n^2) performance). Instead, I used a Hash Map (Object) approach:

1. Map Creation: Store all items from the first array in an object using their id as the key.
2. Smart Overriding: Iterate through the second array. If an id already exists, use the Spread Operator (...) to merge the objects, ensuring the second array's data takes priority.
3. Final Sorting: Convert the object back into an array and sort it by id to meet the output requirements.

Why It Matters
This logic is the foundation of modern state management. Whether you're merging updated user profiles in a React frontend or joining related documents in MongoDB, understanding how to handle property overrides and unique identifiers is essential. Using a map keeps the overall performance at O(n log n), dominated by the final sort, making it capable of handling large-scale data without lagging the application.

Only 5 days left! The momentum is stronger than ever. 💻✨

#JavaScript #LeetCode
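The three steps above can be sketched as a standalone function (my own illustration of the described approach; the function name is arbitrary, and LeetCode's actual signature may differ):

```javascript
// Sketch of the hash-map merge: build an id → object map from arr1,
// spread-merge arr2 on top (arr2 wins conflicts), then sort by id.
function joinById(arr1, arr2) {
  const byId = {};
  for (const item of arr1) {
    byId[item.id] = item; // step 1: map creation
  }
  for (const item of arr2) {
    // step 2: smart overriding — arr2's fields take priority
    byId[item.id] = { ...byId[item.id], ...item };
  }
  // step 3: back to an array, sorted ascending by id
  return Object.values(byId).sort((a, b) => a.id - b.id);
}
```

Each array is walked once, so the merge itself is O(n); only the final sort costs O(n log n).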
hi connections

Day 23 of 30: Advanced Data Aggregation with LeetCode 2631 🚀

Today’s challenge, Group By, is a powerful exercise in data transformation. It involves extending the Array.prototype to categorize elements based on a callback function, a feature so useful it's a staple in libraries like Lodash and was recently added to the official JavaScript specification (Object.groupBy).

The Logic
The goal is to take an array and turn it into an object. The callback function fn acts as a "sorter," deciding which "bucket" (key) each element belongs to.

How it Works:
1. Iterate: Use .reduce() or a loop to go through every item in the array.
2. Key Generation: Apply the function fn(item) to determine the group key.
3. Aggregation: Check if the key already exists in your result object. If not, initialize it with an empty array. Then push the item into that array.

Why This is a Game Changer:
In real-world development, especially when working with APIs, you often need to group data: products by category, users by role, or logs by date. Mastering this logic allows you to handle complex data structures with ease and write much cleaner code.

As I move closer to the final week of this 30-day journey, the pieces of the JavaScript puzzle are truly coming together! 💻✨

#JavaScript #LeetCode
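The iterate / key-generation / aggregation steps above can be sketched as a standalone helper (my own illustration; the LeetCode version extends Array.prototype instead, but the logic is identical):

```javascript
// Sketch of reduce-based grouping: fn(item) picks the bucket,
// ||= initializes the bucket on first sight, then we push.
function groupBy(arr, fn) {
  return arr.reduce((acc, item) => {
    const key = fn(item);          // key generation
    (acc[key] ||= []).push(item);  // aggregation
    return acc;
  }, {});
}

console.log(groupBy([1, 2, 3, 4], (n) => (n % 2 ? "odd" : "even")));
// → { odd: [ 1, 3 ], even: [ 2, 4 ] }
```

This mirrors the standardized Object.groupBy(items, fn) mentioned above, which does the same thing natively in recent runtimes.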
🚀 JSON vs TOON (Token-Oriented Object Notation): What’s the Difference?

With evolving data formats, it’s easy to confuse similar-sounding terms. Here’s a clear breakdown 👇

🔹 JSON (JavaScript Object Notation)
A widely adopted data interchange format used across the web and APIs.
✔️ Human-readable
✔️ Key-value structure
✔️ Easy to parse and debug
✔️ Ideal for web apps and REST APIs

🔹 TOON (Token-Oriented Object Notation)
A newer, performance-focused data representation approach.
✔️ Token-based structure instead of plain text
✔️ More compact and efficient
✔️ Faster parsing in some implementations
✔️ Designed for high-performance systems and data-heavy environments

💡 Key Differences:
▪️ JSON is text-based & human-friendly
▪️ TOON is token-based & optimized for machines
▪️ JSON prioritizes readability & simplicity
▪️ TOON prioritizes performance & efficiency

👉 In short:
JSON is great for communication between systems.
TOON is built for speed and optimized processing.

#TechComparison #JSON #DataFormats #BackendDevelopment #TechTrends