Most React devs handle async state with a handful of booleans: isLoading, isError, data, error. All separate. All synchronized manually. The problem: you can end up in impossible states. isLoading: true AND data present at the same time? Technically possible, logically wrong.

A better pattern: discriminated unions in TypeScript.

```typescript
type AsyncState<T> =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: T }
  | { status: "error"; error: string };
```

Now your component switches on state.status. TypeScript narrows the type automatically. No impossible states, no guard clauses scattered everywhere, no "why is data undefined when isLoading is false?"

Small shift in thinking, big improvement in code clarity. Once you start modeling state as "what combinations can actually exist," your components get dramatically cleaner.

What pattern do you use for async state in React? Still on separate booleans, or have you moved to something like this?

#TypeScript #React #WebDevelopment
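The narrowing claim above can be shown concretely. A minimal sketch (the `render` helper and the `string` payload are illustrative, not from the post):

```typescript
type AsyncState<T> =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: T }
  | { status: "error"; error: string };

// In each case branch TypeScript narrows `state` to one union member,
// so `state.data` is only reachable (and always defined) under "success".
function render(state: AsyncState<string>): string {
  switch (state.status) {
    case "idle":
      return "Waiting…";
    case "loading":
      return "Loading…";
    case "success":
      return `Got: ${state.data}`;
    case "error":
      return `Failed: ${state.error}`;
  }
}

console.log(render({ status: "success", data: "users" })); // "Got: users"
```

Because the switch covers every `status`, TypeScript also knows `render` always returns a string, with no fallthrough branch needed.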
Radu Catalin-Andrei’s Post
More Relevant Posts
If your TypeScript type has three optional fields that are "never all set at the same time" — that's not a type, that's a verbal agreement.

```typescript
// Optional fields — 8 states, half of them invalid
type AsyncState = {
  data?: User;
  error?: Error;
  loading?: boolean;
};

// loading + data? Valid TypeScript. Runtime bug.
// error + data? Valid TypeScript. Undefined behavior.
```

Three optional fields allow 8 possible combinations. Only 4 are valid: idle, loading, success, or error. TypeScript cannot catch the other 4 because you never described what valid looks like.

A discriminated union cuts this to exactly the states you intend:

```typescript
type AsyncState =
  | { status: 'idle' }
  | { status: 'loading' }
  | { status: 'success'; data: User }
  | { status: 'error'; error: Error };
```

Now TypeScript knows: if status === 'success', data exists. In the loading branch, accessing data is a compile error. Every switch is exhaustive-checked automatically.

This pattern predates TypeScript. Richard Feldman's 2016 Elm talk "Making Impossible States Impossible" named the principle. XState, Redux Toolkit, and React Query all encode state as discriminated unions internally for exactly this reason.

When this doesn't apply:
• Simple on/off boolean flags — a single boolean is not an "impossible state" problem
• React Query's useQuery already returns a discriminated shape — don't rewrap it
• Config objects where fields are genuinely independent of each other

The "60% fewer runtime errors" stat that circulates online is unsourced. The real benefit is compile-time exhaustiveness checking — TypeScript tells you which cases you haven't handled before you ship.

Are you modeling async state with optional fields, or with unions that make invalid states impossible to represent?

#TypeScript #TypeSafety #ReactDevelopment #JavaScript #SoftwareEngineering
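The automatic exhaustiveness check can also be made explicit with a `never`-typed helper. A sketch using the post's union shape (the `assertNever` name is a common convention, not a library function, and the `User` shape is illustrative):

```typescript
type User = { name: string };

type AsyncState =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: User }
  | { status: "error"; error: Error };

// If a new status is ever added to the union and a case below is missing,
// the `assertNever` call stops compiling: the leftover member is not `never`.
function assertNever(x: never): never {
  throw new Error(`Unhandled state: ${JSON.stringify(x)}`);
}

function label(state: AsyncState): string {
  switch (state.status) {
    case "idle":
      return "idle";
    case "loading":
      return "loading";
    case "success":
      return `user: ${state.data.name}`;
    case "error":
      return `error: ${state.error.message}`;
    default:
      return assertNever(state);
  }
}
```

The `default` branch is unreachable today; it exists purely so the compiler complains the moment the union grows.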
Most React devs handle API state like this: isLoading, isError, data: separate fields that can contradict each other. Loading = true AND error = true at the same time? Shouldn't happen, but nothing prevents it.

Discriminated unions fix this cleanly:

```typescript
type AsyncState<T> =
  | { status: 'idle' }
  | { status: 'loading' }
  | { status: 'success'; data: T }
  | { status: 'error'; error: Error };
```

Now TypeScript enforces that you handle each case. Inside the 'success' branch, state.data is always defined. No optional chaining, no null checks.

The real win: when you switch on state.status, TypeScript narrows automatically. No more wondering "is data null because it's loading, or because it failed?"

It's a small change that eliminates a whole class of bugs and makes component logic self-documenting.

What state patterns are you reaching for in your React projects right now?

#TypeScript #React #WebDevelopment
TypeScript used naively adds syntax. Used correctly, it prevents entire bug classes. Here are the patterns that actually matter in production.

── Discriminated Unions ──

Stop using optional fields for state that has clear phases.

Instead of:

```typescript
{ data?: User; error?: string; loading?: boolean }
```

Use:

```typescript
type State =
  | { status: 'loading' }
  | { status: 'success'; data: User }
  | { status: 'error'; error: string };
```

TypeScript now narrows correctly in every branch. No more 'data might be undefined' checks scattered everywhere.

── The satisfies Operator (TS 4.9+) ──

Validates a value against a type without widening it. You keep autocomplete on specific keys. You get the type safety check. Best of both worlds.

── Template Literal Types ──

Generate all valid string combinations at compile time.

```typescript
type ApiCall = `${HTTPMethod} ${Endpoint}`
```

TypeScript tells you when you're calling an endpoint that doesn't exist.

── Branded Types ──

Two strings that are semantically different:

```typescript
type UserId = string & { readonly __brand: 'UserId' }
type PostId = string & { readonly __brand: 'PostId' }
```

Now you can't accidentally pass a PostId where a UserId is expected. Even though both are just strings at runtime.

── unknown over any ──

any disables the type checker entirely. unknown forces you to narrow before using the value. One creates bugs. The other prevents them.

TypeScript's real value: Making impossible states unrepresentable at compile time. Not just adding type annotations.

#TypeScript #Frontend #JavaScript #SoftwareEngineering #WebDevelopment
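To make the branded-types section concrete, here is a small runnable sketch (the `asUserId`/`asPostId` constructors and the `fetchProfile` function are illustrative assumptions, not a library API):

```typescript
type UserId = string & { readonly __brand: "UserId" };
type PostId = string & { readonly __brand: "PostId" };

// One assertion at the boundary; everywhere else the brand is enforced.
const asUserId = (s: string) => s as UserId;
const asPostId = (s: string) => s as PostId;

function fetchProfile(id: UserId): string {
  return `profile:${id}`; // at runtime the brand is erased: it's just a string
}

const uid = asUserId("u_42");
const pid = asPostId("p_7");

console.log(fetchProfile(uid)); // compiles fine
console.log(pid); // a PostId prints like any other string
// fetchProfile(pid); // compile error: 'PostId' is not assignable to 'UserId'
```

The design choice: constructor helpers like `asUserId` keep the unsafe `as` in exactly one place per brand, instead of scattered through the codebase.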
Discriminated unions are the most underrated TypeScript pattern.

If you manage async state in React (loading, success, error), you're probably juggling separate booleans and handling impossible states. I learned this the hard way on a real project. Instead:

```typescript
type AsyncState<T> =
  | { status: 'idle' }
  | { status: 'loading' }
  | { status: 'error'; error: string }
  | { status: 'success'; data: T };
```

Now TypeScript forces you to handle every case. Try accessing .data when status is 'loading'? Compile error. Caught bugs before code review.

One pattern. Fewer runtime crashes. Better DX. Using this in production now and it's a game-changer for reliability.

#TypeScript #React #Engineering
Node.js Worker Threads: True Multi-Threading for Your JavaScript Code

Node.js is recognized for its efficient handling of I/O through the Event Loop and the UV_THREADPOOL for system-level tasks. However, when JavaScript code itself becomes the bottleneck, Worker Threads are essential.

🧠 Core Concepts:
- Isolated Execution: Each worker operates in its own V8 instance with separate memory, functioning like a lightweight parallel Node.js instance.
- True Parallelism: Unlike the Event Loop, which is concurrent, Worker Threads enable parallel execution by utilizing multiple CPU cores.
- Message-Based Communication: Workers communicate via postMessage(), with no shared state by default, which reduces race conditions.

🛠 When to use them?
Avoid using Worker Threads for I/O, as the Event Loop is more efficient for that. Instead, utilize them for CPU-bound tasks that could otherwise "freeze" your app:
- Heavy Math: Complex calculations or data science in JavaScript.
- Data Parsing: Transforming large JSON or CSV files.
- Image/Video: Processing buffers or generating reports.

Key Takeaway: The Thread Pool manages system tasks, while Worker Threads enhance the performance of your JavaScript code.

#NodeJS #Backend #Javascript #WebDev #SoftwareEngineering #Performance
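A minimal sketch of the message-based communication described above (the inline `eval` worker and the naive `fib` function are illustrative; real code would usually point `Worker` at a separate worker script file):

```typescript
import { Worker } from "node:worker_threads";

// CPU-bound work (naive Fibonacci) moved off the main thread.
// The worker body is passed as a string with `eval: true` so the
// sketch fits in one file.
const workerSource = `
  const { parentPort, workerData } = require("node:worker_threads");
  const fib = (n) => (n < 2 ? n : fib(n - 1) + fib(n - 2));
  parentPort.postMessage(fib(workerData));
`;

function fibInWorker(n: number): Promise<number> {
  return new Promise((resolve, reject) => {
    const worker = new Worker(workerSource, { eval: true, workerData: n });
    worker.once("message", resolve); // result arrives via postMessage
    worker.once("error", reject);
  });
}

// The event loop stays free while the worker grinds through fib(30).
fibInWorker(30).then((result) => {
  console.log(`fib(30) = ${result}`); // 832040
});
```

Note that each `Worker` spins up its own V8 instance, so for many small tasks a worker pool (reusing a few long-lived workers) beats spawning one per task.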
Do you ever know exactly what type a dynamic object is, but TypeScript won’t trust you? 🤔 Here is #Day21 of #TypescriptBeforeAngular with SANDEEP KUMAR (#IAM5K): Type Assertions (as) (The "Manual Override")

Like when you are accessing a standard DOM element (ViewChild) in Angular, and TypeScript only sees it as a generic Element, so it won’t let you use the .value or .focus() properties. If you are just using any to "silence" these errors, you are disabling your safety net and creating a massive bug trap. When you are certain about a type that the compiler cannot know yet, you need the authorized manual override: Type Assertions (as).

1️⃣ What is a Type Assertion?

Sometimes, based on the context, you know more about the shape of a value than #TypeScript can ever determine. Think of a Type Assertion as a "Trust Badge" or a "Manual Override Key" that you present to the compiler. You use the as keyword to tell TypeScript: "I promise you, I know exactly what I'm doing. From this line onward, please treat this generic bucket as this specific, detailed type."

```typescript
// --- ☠️ The DANGEROUS way (any) ---
const myElementAny: any = document.getElementById('user-email');
// No help here. If 'myElementAny' is actually null, this crashes later:
console.log(myElementAny.value);

// --- 🟢 The SAFE way (assertion) ---
const myInput = document.getElementById('user-email') as HTMLInputElement;
// Now TypeScript knows this bucket has specific input powers:
myInput.value = 'hello@example.com'; // ✅ Perfect autocomplete!
```

2️⃣ Significance in Angular:

In modern Angular (especially for state management and DOM interaction), Type Assertions are essential when you are dealing with dynamic sources, like:
- DOM Elements: Working with document.getElementById or ViewChild. You are sure it’s an <input>, but TS only knows it is a generic Element.
- API Data: Data arriving from a legacy service or a third-party library that isn’t typed yet.
By using as, you are being proactive and maintaining your type safety rather than just silencing the compiler with any. It is about building components that are not just typed, but truly safe.

💡 Beginner Tip: Think of as as an authorization override, but use it sparingly! If you are asserting types everywhere, it is a big warning sign that your underlying data models are incomplete and need more work. Assert only when you have 100% certainty from external knowledge (like knowing you own the DOM)!

👉 Do you use Type Assertions (as) most often for DOM elements, generic API data, or external library data? Let me know 👇

Connect / Follow me (SANDEEP KUMAR) so you don’t miss the next one

#WebDev #CodingTips #NewDeveloper #Beginner #CleanCode #TypeAssertion #DOMManipulation #PredictableState #TypeSafety #Javascript #BTech
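When that 100% certainty isn't there (API responses, user input), a runtime type guard is a safer companion to `as`, because it checks the shape before the compiler trusts it. A sketch with an illustrative `User` shape:

```typescript
type User = { name: string; age: number };

// A user-defined type guard: the `v is User` return type tells the
// compiler that inside a truthy branch, `v` may be treated as a User.
function isUser(v: unknown): v is User {
  return (
    typeof v === "object" &&
    v !== null &&
    typeof (v as Record<string, unknown>).name === "string" &&
    typeof (v as Record<string, unknown>).age === "number"
  );
}

const raw: unknown = JSON.parse('{"name":"Ada","age":36}');

if (isUser(raw)) {
  console.log(raw.name.toUpperCase()); // narrowed, no bare assertion needed
} else {
  console.log("payload did not match User");
}
```

Unlike `as`, the guard fails loudly at runtime when the data doesn't match, instead of deferring the crash to the first property access.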
⚡ Shipped JavaScript & TypeScript support in AI-MR-Reviewer — a GitHub App that reviews your PRs the moment you open them. Inline comments on the diff. Clear severity levels. Zero setup. Built for teams who want fast, reliable feedback without heavy tooling.

What lands in this release: ~18 focused JS/TS rules.

🔴 HIGH RISK — 7 rules
- == / != instead of === / !==
- Empty catch blocks (silent failures)
- eval() / new Function() usage
- innerHTML with dynamic content / dangerouslySetInnerHTML (XSS risk)
- setTimeout / setInterval with string arguments
- SQL injection patterns in .query() / .exec() (concat / template literals)
- Hardcoded secrets (API keys, tokens, passwords in code)

🟡 MID RISK — 7 rules
- console.log / debug statements in production
- var usage instead of let/const
- TypeScript any type usage
- // @ts-ignore / // @ts-nocheck
- Non-null assertions (!) on property access
- new Date() without timezone awareness
- Promises without .catch() or not awaited

🔵 LOW RISK — 4 rules
- TODO / FIXME comments left behind
- Numbered function names (handler2, doThing3)
- require() used inside ES modules
- Magic numbers outside constants

Every rule is tuned to minimize noise and maximize real signal — so developers focus only on what actually matters.

Under the hood: TypeScript · Node.js · Octokit · Express · Docker. Reviews land within seconds of opening a PR.

Next up: deeper semantic checks on top of this — smarter detection, fewer false positives, and richer context. If your team works with JS/TS daily, this removes friction from every PR. 🚀 Launching publicly by this Sunday, InshaAllah.

#JavaScript #TypeScript #CodeReview #DeveloperTools #AI #StaticAnalysis #DevEx #OpenSource
#Day22

Yesterday, my code learned how to talk to my computer. Today, it learned how to flow. Working with streams in Node.js changed how I think about handling data.

Before this, reading a file meant loading everything into memory at once. Simple… until the file isn’t small anymore. Then I discovered streams.

=> ReadStream: Instead of swallowing the whole file, it reads in chunks. Like sipping, not gulping.
=> WriteStream: Outputs data piece by piece, perfect for logs or large files.
=> Pipe: This one clicked instantly. Connect a read stream to a write stream, and data just flows automatically. No manual handling, no stress.

It feels less like executing code… and more like building a system where data moves. The biggest shift? I’m no longer thinking in terms of “files”, I’m thinking in terms of flow and efficiency.

Small change in concept. Massive difference in scalability. Same language. Smarter systems.

#NodeJS #JavaScript #BackendDevelopment #LearningToCode #M4ACELearningChallenge
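The ReadStream/WriteStream/pipe flow above can be sketched with Node's built-in stream utilities (the `collect` helper and its string chunks are illustrative; in real code the source would typically be `fs.createReadStream`):

```typescript
import { Readable, Writable } from "node:stream";
import { pipeline } from "node:stream/promises";

// Connect a read stream to a write stream: data flows in chunks,
// and pipeline() handles backpressure and error propagation.
function collect(chunks: string[]): Promise<string> {
  const received: string[] = [];
  const sink = new Writable({
    write(chunk, _encoding, callback) {
      received.push(chunk.toString()); // handle one chunk at a time
      callback(); // signal readiness for the next chunk (backpressure)
    },
  });
  return pipeline(Readable.from(chunks), sink).then(() => received.join(""));
}

collect(["sipping, ", "not ", "gulping"]).then((text) => {
  console.log(text); // "sipping, not gulping"
});
```

`pipeline` is generally preferred over a bare `.pipe()` because it tears down both streams and surfaces the error if either side fails.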
𝗛𝗼𝘄 𝗚𝗮𝗿𝗯𝗮𝗴𝗲 𝗖𝗼𝗹𝗹𝗲𝗰𝘁𝗶𝗼𝗻 𝗪𝗼𝗿𝗸𝘀 𝗶𝗻 𝗡𝗼𝗱𝗲.𝗷𝘀

As developers, we often focus on writing efficient code—but what about memory management behind the scenes? In 𝗡𝗼𝗱𝗲.𝗷𝘀, garbage collection (GC) is handled automatically by the 𝗩𝟴 𝗝𝗮𝘃𝗮𝗦𝗰𝗿𝗶𝗽𝘁 𝗲𝗻𝗴𝗶𝗻𝗲, so you don’t need to manually free memory like in languages such as C or C++. But understanding how it works can help you write more optimized and scalable applications.

𝗞𝗲𝘆 𝗖𝗼𝗻𝗰𝗲𝗽𝘁𝘀:

𝟭. 𝗠𝗲𝗺𝗼𝗿𝘆 𝗔𝗹𝗹𝗼𝗰𝗮𝘁𝗶𝗼𝗻
Whenever you create variables, objects, or functions, memory is allocated in two main areas:
- Stack → Stores primitive values and references
- Heap → Stores objects and complex data

𝟮. 𝗚𝗮𝗿𝗯𝗮𝗴𝗲 𝗖𝗼𝗹𝗹𝗲𝗰𝘁𝗶𝗼𝗻 (𝗠𝗮𝗿𝗸-𝗮𝗻𝗱-𝗦𝘄𝗲𝗲𝗽)
V8 uses a technique called Mark-and-Sweep:
* It starts from “root” objects (global scope)
* Marks all reachable objects
* Unreachable objects are considered garbage
* Then, it sweeps (removes) them from memory

𝟯. 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝗼𝗻𝗮𝗹 𝗚𝗮𝗿𝗯𝗮𝗴𝗲 𝗖𝗼𝗹𝗹𝗲𝗰𝘁𝗶𝗼𝗻
Not all objects live the same lifespan:
- Young Generation (New Space) → Short-lived objects
- Old Generation (Old Space) → Long-lived objects
Objects that survive multiple GC cycles get promoted to the Old Generation.

𝟰. 𝗠𝗶𝗻𝗼𝗿 & 𝗠𝗮𝗷𝗼𝗿 𝗚𝗖
- Minor GC (Scavenge) → Fast cleanup of short-lived objects
- Major GC (Mark-Sweep / Mark-Compact) → Handles long-lived objects but is more expensive

𝟱. 𝗦𝘁𝗼𝗽-𝘁𝗵𝗲-𝗪𝗼𝗿𝗹𝗱
During GC, execution pauses briefly. Modern V8 minimizes this with optimizations like incremental and concurrent GC.

𝗖𝗼𝗺𝗺𝗼𝗻 𝗠𝗲𝗺𝗼𝗿𝘆 𝗜𝘀𝘀𝘂𝗲𝘀:
* Memory leaks due to unused references
* Global variables holding data unnecessarily
* Closures retaining large objects

𝗕𝗲𝘀𝘁 𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗲𝘀:
* Avoid global variables
* Clean up event listeners and timers
* Use streams for large data processing
* Monitor memory using tools like Chrome DevTools or `--inspect`

Understanding GC = Writing better, faster, and scalable applications

#NodeJS #JavaScript #BackendDevelopment #V8 #Performance #WebDevelopment
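One way to watch heap allocation from inside a process, as a small sketch (the exact numbers vary run to run and across Node versions, so treat them as indicative only):

```typescript
// process.memoryUsage() exposes V8 heap statistics without external tools.
const before = process.memoryUsage().heapUsed;

// Allocate roughly 8 MB of numbers in the young generation (New Space).
let big: number[] | null = Array.from({ length: 1_000_000 }, (_, i) => i);

const after = process.memoryUsage().heapUsed;
console.log(`heap grew by ~${((after - before) / 1024 / 1024).toFixed(1)} MB`);

// Dropping the only reference makes the array unreachable, so it becomes
// eligible for collection on a future GC cycle (a collection cannot be
// forced without the --expose-gc flag and global.gc()).
big = null;
```

This is the same signal Chrome DevTools reads over `--inspect`; for long-running services, sampling `heapUsed` over time is a cheap first check for the leak patterns listed above.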
"We did a deep dive into TypeScript advanced generics in 30 different projects. The results? A 40% reduction in runtime errors."

Diving headfirst into a complex codebase, I found myself puzzled over a brittle system that suffered from frequent failures and cumbersome maintenance. The culprit was a lack of strong type constraints, hidden inside layers of JavaScript code that attempted to mimic what TypeScript offers natively. The challenge was clear: harness the power of TypeScript's advanced generics and inference to refactor this tangled web.

My first task was to unravel a central piece of the system dealing with API data structures. This involved migrating from basic `any` types to a more robust setup using TypeScript's incredible type-level programming capabilities.

```typescript
type ApiResponse<T> = {
  data: T;
  error?: string;
};

type User = { name: string; age: number };

function fetchUser(id: string): ApiResponse<User> {
  // Implementation
}

// Correct usage leads to compile-time type checks instead of runtime surprises
const userResponse = fetchUser("123");
```

The initial refactor was daunting, but as I delved deeper, vibe coding with TypeScript became intuitive. The compiler caught more potential issues at design time, not just in this module but throughout the entire application as types propagated.

The lesson? Properly leveraging TypeScript's type-level programming can transform your maintenance nightmare into a well-oiled machine. It requires an upfront investment in learning and applying generics, but the returns in stability and developer confidence are unmatched.

How have advanced generics and inference changed your approach to TypeScript projects?

#WebDevelopment #TypeScript #Frontend #JavaScript
Or you can use something like TanStack Query, which handles this and much more.