🛑 Stop Letting Uncaught Exceptions Crash Your Node.js Server

Writing clean code is good. Writing resilient, production-ready code is what separates a mid-level developer from a senior one. If you're still wrapping every controller in repetitive try-catch blocks, it might be time to adopt global error handling in your Node.js + TypeScript applications.

In my recent backend projects, switching to a centralized error management system was a complete game-changer. Here's why:

✅ Cleaner Codebase (DRY Principle)
No more repetitive try-catch blocks cluttering your business logic. Controllers stay lean, readable, and focused on what truly matters.

✅ Consistent API Responses
Whether it's a 404, a 400 validation error, or a 500 internal issue, every error follows a standardized JSON structure. Frontend developers will thank you.

✅ Type Safety with Custom Errors
A custom AppError class in TypeScript enforces structured error handling. Debugging becomes faster and more predictable.

✅ Improved Security
Stack traces are hidden in production while still available in development. Clean for users. Detailed for developers.

Here's a simplified implementation:

```typescript
import { Request, Response, NextFunction } from 'express';

// 1️⃣ Custom error class. The `public statusCode` parameter property assigns
// the field automatically, so no manual `this.statusCode = statusCode` is needed.
class AppError extends Error {
  constructor(message: string, public statusCode: number) {
    super(message);
    Error.captureStackTrace(this, this.constructor);
  }
}

// 2️⃣ Centralized global error middleware. Express recognizes error middleware
// by its four-argument signature.
export const globalErrorHandler = (
  err: any,
  req: Request,
  res: Response,
  next: NextFunction
) => {
  const statusCode = err.statusCode || 500;
  res.status(statusCode).json({
    status: 'error',
    message: err.message || 'Internal Server Error',
    // Expose stack traces only outside production.
    stack: process.env.NODE_ENV === 'development' ? err.stack : undefined,
  });
};

// 3️⃣ Register the middleware after all routes
app.use(globalErrorHandler);
```

Architecture matters.
Graceful error handling isn't just about preventing crashes; it's about building reliable, scalable, production-grade systems that users can trust.

#NodeJS #TypeScript #BackendDevelopment #CleanCode #SoftwareEngineering #ExpressJS #WebDevelopment #Programming #TechCommunity
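In practice, the middleware above only sees errors that reach `next()`. Here is a minimal sketch of how a controller can feed it, assuming a hypothetical `catchAsync` wrapper (Express's types are simplified so the snippet stays self-contained):

```typescript
// Sketch: pairing the post's AppError class with an async wrapper so that
// rejected promises in route handlers are forwarded to the global error
// middleware instead of crashing the process. `catchAsync` and `getUser`
// are hypothetical names, not from a specific library.

class AppError extends Error {
  constructor(message: string, public statusCode: number) {
    super(message);
    Error.captureStackTrace(this, this.constructor);
  }
}

// Minimal stand-ins for Express's handler signature.
type Next = (err?: unknown) => void;
type Handler = (req: unknown, res: unknown, next: Next) => Promise<unknown>;

// Wraps an async handler; any rejection is passed to next(), which Express
// would then route to globalErrorHandler.
const catchAsync = (fn: Handler) => (req: unknown, res: unknown, next: Next) => {
  fn(req, res, next).catch(next);
};

// Example controller: throws a typed 404 instead of an untyped Error.
const getUser = catchAsync(async () => {
  throw new AppError('User not found', 404);
});
```

With real Express you would register it as `app.get('/users/:id', getUser)`; the thrown `AppError` then lands in `globalErrorHandler` with a 404 status.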
Implementing Global Error Handling in Node.js with TypeScript
More Relevant Posts
⚡ NODE.JS · ASYNC

Most **Node.js developers are writing async code wrong.** And the performance loss is bigger than most people realize.

Here's the mistake I see *all the time* in backend codebases:

```javascript
for (const user of users) {
  await sendEmail(user); // Sequential execution ❌
}
```

At first glance it looks fine. But this code sends emails **one by one**, which means **every request waits for the previous one to finish**. In production systems, this can **destroy throughput**.

---

### 🚀 The Fix

Run independent async operations **in parallel**:

```javascript
await Promise.all(users.map(user => sendEmail(user))); // Parallel execution ✅
```

Now Node.js processes them **concurrently**, dramatically improving performance.

---

### ⚠️ Other Async Killers I See in Production

❌ **Unhandled promise rejections** → Silent crashes or unstable services
❌ **Missing try/catch in async functions** → Errors escape and break request flows
❌ **Blocking the event loop with sync operations** → CPU-heavy tasks freeze your API

---

### 📈 The Impact

Sometimes a **single async optimization** can:
✅ Increase throughput **10x**
✅ Reduce API latency dramatically
✅ Improve scalability **without adding infrastructure**

---

💡 **Key takeaway**
Before scaling servers… **optimize how your async code runs.** Node.js performance often comes down to **how well you use the event loop.**

---

🔄 **Curious to hear from other developers:** What's the **most surprising Node.js performance issue** you've found in production?

#NodeJS #BackendEngineering #AsyncProgramming #JavaScript #SoftwareEngineering #ScalableSystems
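One caveat worth adding to the fix above: `Promise.all(users.map(...))` fires every call at once, which can overwhelm an email provider on large batches. Here is a hand-rolled bounded-concurrency sketch; `mapWithConcurrency` is a name invented for this illustration (in production you might reach for a package like `p-limit` instead):

```typescript
// Keeps at most `limit` operations in flight while still avoiding
// fully sequential awaits. Results come back in input order.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results = new Array<R>(items.length);
  let next = 0;

  // Each worker repeatedly claims the next unprocessed index. The check and
  // increment happen with no await in between, so workers never collide.
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }

  // Start `limit` workers that drain the list cooperatively.
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
  return results;
}
```

Usage would look like `await mapWithConcurrency(users, 5, sendEmail)`: parallel enough to fix throughput, bounded enough not to trip rate limits.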
Most Node.js Developers Lose Performance Without Realizing It…

When building APIs, we often write code that works but not always code that scales. A common mistake I still see in real projects is making sequential API calls using multiple await statements. It looks clean. It feels logical. But performance silently suffers.

👉 Example: Fetching user data, orders, and payments one by one can take 300ms+ total response time. Now imagine the same logic using Promise.all(): all requests run in parallel, and suddenly your response time drops close to 100ms. That's nearly 3x faster with just one small change.

In high-traffic applications, this simple optimization can lead to:
✅ Better API responsiveness
✅ Higher throughput under load
✅ Improved user experience
✅ More scalable backend architecture

Performance is not only about writing complex code. Sometimes it's about writing smarter async logic.

💡 If you are working with Node.js APIs, start reviewing where you can safely run operations in parallel.

Are you already using Promise.all() in production projects, or still relying on sequential awaits? Let's discuss in the comments 👇 Sharing real-world experiences helps everyone grow.

#NodeJS #JavaScript #BackendDevelopment #WebPerformance #APIDesign #AsyncProgramming #SoftwareEngineering #FullStackDeveloper
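The before/after described above, sketched as runnable code. The `getUser`/`getOrders`/`getPayments` helpers are hypothetical stand-ins, each simulated here with a fixed delay:

```typescript
const delay = (ms: number) => new Promise<void>((res) => setTimeout(res, ms));

// Hypothetical independent API calls, ~100ms each.
const getUser = async () => { await delay(100); return { id: 1 }; };
const getOrders = async () => { await delay(100); return [{ orderId: 7 }]; };
const getPayments = async () => { await delay(100); return [{ paid: true }]; };

// Sequential: ~300ms total. Each await blocks the next call from starting.
async function loadSequential() {
  const user = await getUser();
  const orders = await getOrders();
  const payments = await getPayments();
  return { user, orders, payments };
}

// Parallel: ~100ms total. All three calls start immediately.
async function loadParallel() {
  const [user, orders, payments] = await Promise.all([
    getUser(),
    getOrders(),
    getPayments(),
  ]);
  return { user, orders, payments };
}
```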
🚀 Node.js Performance Optimization Tip

A common inefficiency I still see in many codebases is handling independent API calls sequentially.

❌ Sequential Execution
Each request waits for the previous one to complete, increasing total response time unnecessarily:

```javascript
const user = await getUser();
const orders = await getOrders();
const payments = await getPayments();
```

If each call takes ~100ms, the total latency becomes ~300ms.

✅ Parallel Execution with Promise.all()
When operations are independent, they should be executed concurrently:

```javascript
const [user, orders, payments] = await Promise.all([
  getUser(),
  getOrders(),
  getPayments()
]);
```

This reduces total latency to ~100ms, significantly improving performance.

⚡ Key Takeaway: Small architectural decisions in asynchronous handling can lead to substantial performance gains, especially at scale.

#NodeJS #JavaScript #BackendEngineering #SoftwareEngineering #PerformanceOptimization #AsyncProgramming #Concurrency #ScalableSystems #CleanCode #CodeOptimization #SystemDesign #APIDevelopment #WebDevelopment #ServerSide #EngineeringBestPractices #HighPerformance #TechArchitecture #DeveloperTips #ProgrammingBestPractices #ModernJavaScript
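One hedge worth adding to the pattern above: `Promise.all` is fail-fast, so a single rejection discards the other results. When partial data is acceptable, `Promise.allSettled` reports every outcome instead. A sketch with hypothetical `fetchA`/`fetchB` helpers:

```typescript
// fetchA succeeds, fetchB fails -- hypothetical stand-ins for two
// independent calls where we want whatever data is available.
const fetchA = async () => 'ok';
const fetchB = async (): Promise<string> => { throw new Error('service down'); };

async function loadTolerant() {
  const results = await Promise.allSettled([fetchA(), fetchB()]);
  // Keep the fulfilled values...
  const values = results
    .filter((r): r is PromiseFulfilledResult<string> => r.status === 'fulfilled')
    .map((r) => r.value);
  // ...and collect the failure reasons for logging or a partial-error response.
  const errors = results
    .filter((r): r is PromiseRejectedResult => r.status === 'rejected')
    .map((r) => (r.reason as Error).message);
  return { values, errors };
}
```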
⚛️ Most React codebases don't fail because of bad logic. They fail because of bad structure.

Over time, components grow. State spreads everywhere. And the codebase becomes harder and harder to maintain. Here are 8 React best practices I apply on every project:

1️⃣ One component = one responsibility
If a component fetches data, manages logic, and renders UI… it's doing too much. Split it.

2️⃣ Use custom hooks for business logic
Logic belongs in hooks. Components should stay focused on rendering. Examples: `useAuth()`, `useFetchOrders()`, `useDebounce()`.

3️⃣ Structure your project by feature
Instead of a giant `/components` folder, use feature-based architecture. Each feature owns its components, hooks, services, and tests.

4️⃣ Keep state local first
Not everything needs Redux or Zustand. `useState` and `useContext` solve most cases. Global state should be the exception.

5️⃣ Memoize with intention
`useMemo` and `useCallback` are not magic. Use them when you measure a real performance issue, not "just in case".

6️⃣ TypeScript is non-negotiable
Props without types = bugs waiting to happen. Interfaces, generics, and strict mode prevent hours of debugging.

7️⃣ Prefer early returns over nested ternaries
Readability wins. If your JSX has multiple ternary levels, refactor it.

8️⃣ Tests are part of the product
Unit tests for hooks, integration tests for flows, Cypress / Playwright for critical paths. Shipping without tests is shipping with hope. Hope is not a strategy.

These practices aren't "nice to have". They're what separate a React developer from a React engineer.

💬 What's the React best practice your team never compromises on?

#React #Frontend #JavaScript #TypeScript #WebDevelopment #SoftwareEngineering
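Point 7️⃣ can be illustrated outside JSX too. A sketch with a hypothetical order-status example, written in plain TypeScript so it stays self-contained:

```typescript
type Order = { cancelled: boolean; shipped: boolean; paid: boolean };

// Hard to scan: three ternary levels chained into one expression.
const labelNested = (o: Order): string =>
  o.cancelled ? 'Cancelled' : o.shipped ? 'Shipped' : o.paid ? 'Processing' : 'Awaiting payment';

// Early returns: the same logic, but each condition reads top to bottom,
// and adding a new status is a one-line change.
function labelEarlyReturn(o: Order): string {
  if (o.cancelled) return 'Cancelled';
  if (o.shipped) return 'Shipped';
  if (o.paid) return 'Processing';
  return 'Awaiting payment';
}
```

In a component, the same move usually means returning early for loading/error/empty states before the main JSX.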
🚀 Node.js Performance Tip Most Developers Still Ignore

If your API feels slow, there's a high chance you're making this common mistake 👇

❌ Sequential API Calls
Running async operations one by one increases total response time unnecessarily:

```javascript
const user = await getUser();
const orders = await getOrders();
const payments = await getPayments();
```

⏱️ If each call takes 100ms → Total = 300ms

✅ Optimized Approach: Promise.all()

```javascript
const [user, orders, payments] = await Promise.all([
  getUser(),
  getOrders(),
  getPayments()
]);
```

⚡ Now all requests run in parallel
⏱️ Total time ≈ 100ms

💡 Key Rule: If your API calls are independent, NEVER run them sequentially.

⚠️ Use Promise.all() only when:
✔️ There is no dependency between requests
✔️ You can handle failures properly

🔥 Why this matters:
• Faster APIs = better user experience
• Better performance = higher scalability
• Small optimization = big impact

💬 Want more backend performance tips like this? Comment "MORE" 👇

#NodeJS #JavaScript #BackendDevelopment #WebPerformance #FullStackDeveloper #SoftwareEngineering #APIDevelopment #CodingTips #Developers #TechTips #MERNStack #PerformanceOptimization
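The "no dependency between requests" rule above has a useful middle ground: when only some calls depend on each other, you can still start the independent ones early and await them later. A sketch with hypothetical helpers:

```typescript
// Hypothetical stand-ins: orders depend on the user's id, the banner doesn't.
const getUser = async () => ({ id: 42 });
const getOrders = async (userId: number) => [{ userId, item: 'book' }];
const getSiteBanner = async () => 'Spring sale';

async function loadPage() {
  // The banner doesn't depend on the user, so kick it off immediately...
  const bannerPromise = getSiteBanner();
  // ...while the dependent chain runs: orders need the user's id first.
  const user = await getUser();
  const orders = await getOrders(user.id);
  // By now the banner has been resolving in the background.
  const banner = await bannerPromise;
  return { user, orders, banner };
}
```

The key move is holding the promise (`bannerPromise`) instead of awaiting it at the call site.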
Just shipped CompileX – Cloud Code Execution Platform 💻⚡

A multi-language online compiler that lets users write and execute code in real time, directly from the browser.

🌐 Live Demo: https://lnkd.in/gdTKCSzq
🛩️ GitHub: https://lnkd.in/gBCzPuKX

🛠 Tech Stack
Frontend: React (Vite), Monaco Editor (@monaco-editor/react), Tailwind CSS, Axios
Backend: Node.js (Vercel Serverless Functions), Judge0 REST API (code execution engine)
Deployment: Vercel (free tier), GitHub (version control)

⚙ Features
✔ Multi-language support (C++, Java, Python, JavaScript)
✔ Real-time code execution
✔ VS Code-like editor experience
✔ Input & output console
✔ Error handling & structured output
✔ Fully responsive design
✔ Serverless cloud architecture

🧠 What I Learned
• Designing secure code execution workflows
• Integrating third-party APIs (Judge0)
• Serverless backend architecture
• Optimizing frontend performance with React
• Deploying production-ready apps on Vercel

This project strengthened my understanding of full-stack development and cloud-based architecture. Always building. Always learning. 🚀

#React #NodeJS #FullStack #WebDevelopment #Vercel #JavaScript #Developer #BuildInPublic #Projects
Most backend developers write code that works. Few write code that scales. Here's what separates a good backend from a great one 👇

1. Don't store what you can compute
Every extra column = extra storage + extra sync headaches. Keep your DB lean.

2. Index what you query, not everything
Over-indexing slows down writes. Under-indexing kills reads. Know the difference.

3. Never trust user input
Validate at the API layer. Always. No exceptions. SQL injection doesn't care how smart you are.

4. Async everything you can afford to
Sending an email after signup? Don't make the user wait. Queue it.

5. Log errors, not noise
console.log everywhere = finding nothing when it matters. Use structured logs with levels.

6. Cache aggressively, invalidate carefully
Caching is easy. Knowing when to bust the cache is the real skill.

7. Your API response should tell a story
Status codes, error messages, consistent structure. Your frontend team will thank you.

The best backends are boring in production. That's the goal.

Save this. Your future self, debugging at 2 AM, will thank you.

#Backend #WebDevelopment #NodeJS #SoftwareEngineering #APIDevelopment #RESTAPI #farhanfaqir
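Point 5 ("log errors, not noise") sketched as a minimal leveled, structured logger. In production you would likely reach for a library such as pino or winston; `makeLogger` is invented here just to show the shape:

```typescript
type Level = 'debug' | 'info' | 'warn' | 'error';
const LEVELS: Record<Level, number> = { debug: 10, info: 20, warn: 30, error: 40 };

function makeLogger(minLevel: Level) {
  const log = (level: Level, msg: string, ctx: Record<string, unknown> = {}) => {
    // Anything below the configured threshold is dropped, not printed.
    if (LEVELS[level] < LEVELS[minLevel]) return null;
    // One JSON object per line: easy to grep, parse, and ship to a log store.
    const entry = { level, msg, time: new Date().toISOString(), ...ctx };
    console.log(JSON.stringify(entry));
    return entry;
  };
  return {
    debug: (m: string, c?: Record<string, unknown>) => log('debug', m, c),
    info: (m: string, c?: Record<string, unknown>) => log('info', m, c),
    warn: (m: string, c?: Record<string, unknown>) => log('warn', m, c),
    error: (m: string, c?: Record<string, unknown>) => log('error', m, c),
  };
}
```

Attaching context like a request id to every entry is what makes the 2 AM debugging session searchable.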
Silent performance killers often hide in plain sight. We recently diagnosed a critical issue where nearly 20% of our production bundle was 'ghost' code: unused imports and dead exports bloating our Next.js application. This wasn't just an aesthetic concern; it directly translated to slower page loads, higher data transfer costs, and a larger client-side memory footprint. The root cause was simple developer oversight compounded by a lack of guardrails.

Our fix involved a two-pronged approach. First, we enforced ESLint's `no-unused-vars` rule at `error` level across our MERN stack, catching unused imports at commit time or in CI/CD before they ever reach production. Second, we verified and optimized our build configuration to fully leverage tree-shaking. Modern bundlers like Webpack and Vite, especially within Next.js 15, are powerful, but they need proper setup for dead-code elimination to be effective: the `sideEffects` property in `package.json` must be configured correctly, and modules must be written with ES Modules syntax.

The result? A significant reduction in bundle size, noticeable improvements in application performance, and a streamlined developer workflow. This isn't about chasing micro-optimizations; it's foundational engineering hygiene that directly impacts user experience and operational costs. Proactive code quality, enforced by automation, pays dividends far beyond the initial setup effort.

#SoftwareEngineering #WebDevelopment #PerformanceOptimization #FrontendDevelopment #BackendDevelopment #Nextjs #Nodejs #MERNStack #ESLint #TreeShaking #Webpack #Vite #CodeQuality #DeveloperTools #TechLeadership #CTO #Founders #Scalability #EngineeringCulture #DevOps #Automation #AIAutomation #SoftwareArchitecture #TechStrategy #DigitalTransformation
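A sketch of the first guardrail described above, assuming ESLint 9's flat-config format (the file name and shape are assumptions; TypeScript codebases typically use `@typescript-eslint/no-unused-vars` in place of the core rule):

```javascript
// eslint.config.mjs -- fail builds on unused variables; unused imports are
// reported by this same rule, so they never reach production.
export default [
  {
    rules: {
      'no-unused-vars': 'error',
    },
  },
];
```

For the tree-shaking half, the corresponding `package.json` hint is `"sideEffects": false` (or an array listing the files that do have side effects, such as global CSS imports), which tells Webpack and Vite that unreferenced exports are safe to drop.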
The transition from repetitive try-catch to centralized error handling is one of those shifts that pays off quietly, Abdullah. One thing worth adding - custom error classes become even more powerful when you layer in error categorization (retriable vs fatal), which helps the upstream orchestration decide whether to retry or bail. Have you experimented with that pattern in your middleware?