I just spent 2 hours debugging a useInvoice hook that worked perfectly until it didn't.

The abstraction looked beautiful: it wrapped useQuery with perfect types and a clean interface, reusable across components. Classic developer move: see duplicate code, extract it, feel smart.

Then production happened. Invoice data wasn't updating after mutations. The abstraction hid the queryKey structure, so invalidation broke silently. TypeScript was happy. Users were not.

The "elegant" custom hook was actually fighting TanStack Query's design. Query needs direct access to keys for cache management, but my abstraction buried that control three layers deep.

Stripped it back to raw useQuery calls. More verbose? Sure. But now cache invalidation works, debugging is straightforward, and the query devtools actually make sense.

Sometimes the best abstraction is no abstraction.

Official docs: https://lnkd.in/ghQB9agy

What abstraction have you built that looked genius until it met production?

#react #typescript #webdev #programming #debugging
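A minimal sketch of one middle-ground fix, with hypothetical names (`invoiceKeys`, `fetchInvoice` are mine, not from the post): instead of hiding the key inside a custom hook, keep the queryKey structure in one shared factory that both the query and the invalidation reference.

```typescript
// Hypothetical key factory: one module owns the queryKey structure,
// so the query and the cache invalidation can never drift apart
// behind an abstraction.
const invoiceKeys = {
  all: ["invoices"] as const,
  detail: (id: string) => ["invoices", id] as const,
};

// Assumed TanStack Query usage, shown for shape only:
// useQuery({ queryKey: invoiceKeys.detail(id), queryFn: () => fetchInvoice(id) });
// queryClient.invalidateQueries({ queryKey: invoiceKeys.detail(id) });
```

The keys stay visible in devtools and at every call site; only the spelling of the key lives in one place.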
Debugging a flawed abstraction in useInvoice hook
More Relevant Posts
I spent 4 hours debugging a "ghost UI": components that refused to update. Then I realised I had caused it myself.

1. We had a real-time dashboard in Angular showing live sensor data. Product was happy. QA was happy. Then staging broke.

2. Rows weren't re-rendering after data updates. The array was clearly mutating in the console. The template just… didn't care.

3. I'd switched the component to ChangeDetectionStrategy.OnPush for "performance", without understanding what that actually means.

The mistake:

❌ Mutating the array directly. OnPush never sees this:

this.sensors.push(newSensor);
this.sensors[0].value = 99;

With OnPush, Angular only checks a component when its input reference changes, not when you mutate the object inside it.

The fix:

✅ Return a new reference. Angular picks it up instantly:

this.sensors = [...this.sensors, newSensor];
this.sensors = this.sensors.map((s, i) =>
  i === 0 ? { ...s, value: 99 } : s
);

Or, if you're already using signals in Angular 17+, just use a signal() and skip this mental overhead entirely.

The real lesson: OnPush is not a magic performance button. It's a contract: you promise Angular you'll treat state as immutable. Break the contract, pay in ghost UIs.

Adopt it. Love it. But understand it first.

My question to you: have you ever turned on OnPush and immediately broken something? What was your "wait, THAT was the issue?" moment? Drop it below. 👇

#Angular #TypeScript #WebDevelopment #FrontendDevelopment #SoftwareEngineering #LearnInPublic #CodingMistakes
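The reference-identity rule at the heart of this story can be shown without any Angular at all. A plain TypeScript sketch (function names are mine, for illustration): mutation keeps the same array reference, so an OnPush-style identity check sees nothing; the immutable update produces a new reference.

```typescript
interface Sensor { id: number; value: number; }

// Mutation: the array reference is unchanged, so an OnPush-style
// reference check sees nothing to update.
function mutateFirst(sensors: Sensor[], value: number): Sensor[] {
  sensors[0].value = value;
  return sensors; // same reference as the input
}

// Immutable update: a new array (and a new first element),
// which a reference check detects immediately.
function updateFirst(sensors: Sensor[], value: number): Sensor[] {
  return sensors.map((s, i) => (i === 0 ? { ...s, value } : s));
}
```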
🚀 Why Offset Pagination Fails at Scale (and What to Use Instead)

When I started building backend services with Spring Boot, I used Pageable everywhere. It worked fine… until it didn't.

Once your table hits millions of rows, this becomes dangerous:

LIMIT 20 OFFSET 1000000

👉 The database still scans & skips 1M rows before returning data.
👉 Result: slow queries, high DB load, poor user experience.

💡 The Better Approach: Window-Based Iteration (Cursor Pagination)

Instead of OFFSET, use a cursor (lastId / timestamp):

SELECT *
FROM post_comment
WHERE post_id = :postId
  AND id < :lastId
ORDER BY id DESC
LIMIT 20;

🔥 Why This Works
✅ Constant performance → O(limit), not O(offset)
✅ Uses indexes efficiently
✅ No lag even with 10M+ rows
✅ Perfect for infinite scroll APIs

🧠 Real Insight
Think of it like this:
• OFFSET → flipping every page to reach page 1000 📖
• Cursor → using a bookmark 🔖

⚙️ How I Implement It
• Use batchSize to control the data window
• Maintain lastId as the cursor
• Fetch the next chunk using id < lastId
• Repeat until no data remains

🚀 Where This Is Used
• Social feeds (Instagram, LinkedIn, Twitter)
• Comment systems
• Chat applications
• Large-scale analytics pipelines

🎯 Takeaway
👉 OFFSET pagination is simple but not scalable
👉 Cursor + window-based iteration is production-grade

#Backend #SpringBoot #SystemDesign #Scalability #Java #SoftwareEngineering
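The cursor loop described above, sketched in TypeScript against an in-memory "table" (names like `fetchPage` and `iterateAll` are illustrative). `fetchPage` mimics the `WHERE id < :lastId ORDER BY id DESC LIMIT :size` query; the iterator advances the cursor from the last row of each page until a page comes back empty.

```typescript
interface Comment { id: number; body: string; }

// Simulates `WHERE id < :lastId ORDER BY id DESC LIMIT :size`
// against an in-memory table, to show the cursor mechanics.
function fetchPage(table: Comment[], lastId: number | null, size: number): Comment[] {
  return table
    .filter(c => lastId === null || c.id < lastId)
    .sort((a, b) => b.id - a.id)
    .slice(0, size);
}

// The window-based iteration: fetch, yield, advance the cursor, repeat.
function* iterateAll(table: Comment[], size: number): Generator<Comment[]> {
  let lastId: number | null = null;
  while (true) {
    const page = fetchPage(table, lastId, size);
    if (page.length === 0) return;     // no data left -> stop
    yield page;
    lastId = page[page.length - 1].id; // the cursor is the last id seen
  }
}
```

Each step only ever looks at rows below the cursor, which is exactly why an index on `id` keeps the real SQL version fast.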
Most C# devs use IEnumerable<T> every day, but few truly understand what's happening under the hood. Here's what you need to know 👇

⚡ It's lazy by default
The query doesn't run until you iterate. Chain .Where(), .Select(), .Take(): nothing executes until a foreach or .ToList() forces it.

🔗 It's the foundation of LINQ
Most LINQ operators return an IEnumerable<T> (terminal ones like .Count() and .ToList() are the exceptions). This is why chaining feels so natural: you're just composing pipelines.

📦 IEnumerable vs IList
→ Use IEnumerable for read-only, streaming, or unknown-size data
→ Use IList when you need indexing, .Count, or mutation

⚠️ Watch out: Multiple Enumeration
Iterating an IEnumerable twice runs the query twice. If your source is a DB call or file read, that's a bug. Call .ToList() to cache.

yield return is your superpower for writing custom iterators without loading everything into memory.

Small distinction. Big performance impact.

💡 Which of these tripped you up early in your .NET journey? Drop it below 👇

#CSharp #dotNET #LINQ #SoftwareEngineering #BackendDevelopment #CodeQuality
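The same two ideas, translated to TypeScript generators as a rough analogue of C# deferred execution (the analogy is mine; a JS generator object itself is single-pass, so "re-enumerating" means calling the generator function again):

```typescript
// "Work" happens per element, per enumeration -- just like a
// deferred LINQ query over a DB call or file read.
let sideEffects = 0;

function* expensiveQuery(): Generator<number> {
  for (let i = 1; i <= 3; i++) {
    sideEffects++;
    yield i * 2;
  }
}

const deferred = expensiveQuery(); // sideEffects is still 0 here: lazy
const first = [...deferred];       // iteration forces the work to run

// The multiple-enumeration trap: enumerating again re-runs the work.
const second = [...expensiveQuery()];

// The ".ToList()" fix, in spirit: materialize once, reuse the array.
const cached = [...expensiveQuery()];
```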
Your JSON.parse is silently killing performance in high-frequency code paths. It's time to stop treating it as the default cloning solution.

When you call JSON.parse(JSON.stringify(obj)) in a hot loop, you're paying for full serialization, string allocation, and re-parsing on every tick. That cost compounds fast.

Structured cloning via structuredClone() is a native alternative that skips the string intermediary entirely:

const original = { user: { id: 1, roles: ["admin"] } };
const clone = structuredClone(original);

No JSON overhead. No custom recursive logic. Just a clean deep clone with broader type support, including Date, Map, Set, and ArrayBuffer.

In benchmarks across V8-based environments, structuredClone consistently outperforms the JSON roundtrip approach, especially on nested or typed objects. The gap widens as object complexity increases.

Practical takeaway: audit your hot paths today. If you see JSON.parse(JSON.stringify(...)) in event handlers, render loops, or WebSocket message processors, replace it with structuredClone and measure the difference immediately.

Are you still defaulting to JSON roundtrips for deep cloning, or have you already made the switch?

#JavaScript #WebDevelopment #Performance #V8 #FrontendEngineering #NodeJS
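A quick check of the type-fidelity difference (runnable in Node 17+ or any modern browser): the JSON roundtrip flattens Dates to strings and silently drops Map/Set contents, while structuredClone preserves them.

```typescript
const original = {
  created: new Date("2024-01-01T00:00:00Z"),
  tags: new Set(["a", "b"]),
  meta: new Map([["k", 1]]),
};

// Native structured clone: Date, Set, and Map survive intact.
const viaClone = structuredClone(original);

// JSON roundtrip: Date becomes an ISO string, Set and Map become {}.
const viaJson = JSON.parse(JSON.stringify(original));
```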
How Garbage Collection Works in Node.js

As developers, we often focus on writing efficient code, but what about memory management behind the scenes?

In Node.js, garbage collection (GC) is handled automatically by the V8 JavaScript engine, so you don't need to manually free memory like in languages such as C or C++. But understanding how it works can help you write more optimized and scalable applications.

Key Concepts:

1. Memory Allocation
Whenever you create variables, objects, or functions, memory is allocated in two main areas:
• Stack → stores primitive values and references
• Heap → stores objects and complex data

2. Garbage Collection (Mark-and-Sweep)
V8 uses a technique called Mark-and-Sweep:
• It starts from "root" objects (global scope)
• Marks all reachable objects
• Unreachable objects are considered garbage
• Then it sweeps (removes) them from memory

3. Generational Garbage Collection
Not all objects live the same lifespan:
• Young Generation (New Space) → short-lived objects
• Old Generation (Old Space) → long-lived objects
Objects that survive multiple GC cycles get promoted to the Old Generation.

4. Minor & Major GC
• Minor GC (Scavenge) → fast cleanup of short-lived objects
• Major GC (Mark-Sweep / Mark-Compact) → handles long-lived objects but is more expensive

5. Stop-the-World
During GC, execution pauses briefly. Modern V8 minimizes this with optimizations like incremental and concurrent GC.

Common Memory Issues:
• Memory leaks due to unused references
• Global variables holding data unnecessarily
• Closures retaining large objects

Best Practices:
• Avoid global variables
• Clean up event listeners and timers
• Use streams for large data processing
• Monitor memory with tools like Chrome DevTools or `--inspect`

Understanding GC = writing better, faster, and scalable applications

#NodeJS #JavaScript #BackendDevelopment #V8 #Performance #WebDevelopment
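The "closures retaining large objects" issue from the list above, as a minimal sketch (function names are mine): reachability is what the mark phase follows, so a closure that captures a big object keeps it alive for as long as the closure exists.

```typescript
// Leak pattern: the returned function closes over `big`, so the whole
// array stays reachable (and uncollectable) for the closure's lifetime.
function makeLeakyLogger(): () => number {
  const big = new Array(1_000_000).fill(0);
  return () => big.length;
}

// Fix: capture only the primitive you need; once this function returns,
// nothing references `big` anymore, so the GC can reclaim it.
function makeLeanLogger(): () => number {
  const big = new Array(1_000_000).fill(0);
  const size = big.length;
  return () => size;
}
```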
Differences between the fundamental data structures used in JavaScript:

- If you need to access items by index, you should probably be using an Array.
- If you need to access items by string key, you should probably be using an Object.
- If you need keys of any type (objects, functions, anything), plus a reliable .size and insertion-order iteration, you should probably be using a Map.
- If you need to store unique items and perform membership operations on that collection, you should probably be using a Set.

#javascript #concepts #developer #coding #engineer
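Each structure's sweet spot in one line apiece (variable names are illustrative):

```typescript
const letters = ["a", "b", "c"];            // Array: access by index
const user = { name: "Ada", role: "eng" };  // Object: access by string key
const k = { id: 1 };
const meta = new Map([[k, "first"]]);       // Map: any value as a key, even an object
const unique = new Set([1, 2, 2, 3]);       // Set: duplicates collapse automatically
```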
🚀 From Boilerplate Hell to Vibe Coding in .NET

I removed 60% of my code… and the application actually got better. Sounds crazy? It's not.

Most .NET projects slowly turn into this:
✔ Controllers calling services
✔ Services calling repositories
✔ DTOs everywhere
✔ AutoMapper doing magic
✔ Layers on layers… on layers

And suddenly:
❌ Hard to read
❌ Hard to debug
❌ Hard to change

⚠️ The real problem? We confuse structure with clarity. More code doesn't mean better architecture.

So what changed? 👇 I started removing things.
✔ Removed unnecessary abstractions
✔ Stopped creating interfaces "just in case"
✔ Reduced mapping layers
✔ Focused on feature-level flow

💡 What I use now:
• Minimal APIs → clean endpoints
• MediatR → clear request/handler logic
• FluentValidation → simple validation
• EF Core / Dapper → direct data access

The flow becomes: Request → Validation → Handler → Data → Response. That's it.

🔥 The result?
• Less code to maintain
• Faster development
• Easier debugging
• Clearer business logic
• Better developer experience

💣 Hard truth: if your architecture needs a diagram to explain, it's already too complicated.

👉 Write less, but write better
👉 Build for clarity, not for impressing others

Because… good code runs. Great code flows.

Have you escaped boilerplate hell yet? 👇 Let's discuss

#dotnet #softwarearchitecture #backenddevelopment #cleanarchitecture #developerexperience #webapi #programming #coding
TypeScript's utility types are a powerful feature that can enhance the robustness and maintainability of your code. One common challenge developers face is managing complex object types, particularly when dealing with APIs or intricate data structures. Utility types like Partial, Pick, and Record can simplify these issues immensely.

For instance, Partial creates a type with all properties of a given type set to optional, which is particularly useful for form submissions where not all fields may be required. Consider this scenario: you have a User type with several properties. If you want an update function that lets users change their data without providing every field, Partial<User> solves this cleanly. Here's a quick example:

```typescript
type User = {
  name: string;
  email: string;
  age: number;
};

function updateUser(id: number, userData: Partial<User>) {
  // Update user logic here
}
```

While the benefits of utility types are clear, they come with considerations. Overuse can make type definitions ambiguous and the code harder to read. Balancing clarity with the flexibility utility types offer is crucial; keeping your type definitions intuitive helps maintain code quality over time.

Pros: enhances code reusability and reduces boilerplate.
Cons: possible confusion in type definitions if used excessively.

#TypeScript #UtilityTypes #CodeQuality #AdvancedTypescript
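The post also mentions Pick and Record; a brief sketch of both, building on the same User shape (UserPreview and usersById are my names, for illustration):

```typescript
type User = {
  name: string;
  email: string;
  age: number;
};

// Pick: a narrowed view of User for, say, a list/preview component.
type UserPreview = Pick<User, "name" | "email">;

// Record: a keyed collection with a uniform value type.
const usersById: Record<number, UserPreview> = {
  1: { name: "Ada", email: "ada@example.com" },
};
```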
Anthropic shipped a source map in the Claude Code npm package again. 60MB. 1,906 TypeScript files. The full CLI source code. This already happened in February, they pulled it, and here we are again…

I downloaded it and I've been reading through it.

The query loop lives in query.ts. It's a while(true) that calls the API, receives streaming blocks, and if a tool_use comes back it executes the tool and calls again. The thing is, stop_reason === 'tool_use' isn't reliable (there's a comment on line 554: "unreliable -- it's not always set correctly"), so they use the blocks themselves as the loop exit signal. That's the whole agent. Everything else is layers on top.

The opusplan thing I could never fully figure out is in model.ts. Literally: if the setting is 'opusplan' and you're in plan mode, use Opus. Otherwise, Sonnet. If you set 'haiku' in plan mode, it bumps you to Sonnet automatically. And if the conversation goes past 200K tokens, it drops the Opus override and falls back to Sonnet. Mystery solved.

But the thing that really got me is prompts.ts. Internal prompts are different depending on whether you're an employee or a user. There are entire blocks wrapped in process.env.USER_TYPE === 'ant'. Employees get instructions to avoid over-commenting code, to verify things actually work before reporting them as done, and to push back if the user has a misconception. External users get "Be extra concise. Go straight to the point."

The auto-compact system has a circuit breaker after 3 consecutive failures (MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3 in autoCompact.ts). They reserve output tokens based on the p99.99 of summary length: 17,387 tokens. Bash commands cut off at 30K characters of output and auto-background after 15 seconds. bashSecurity.ts alone is over 100K lines.

There's a feature flag called KAIROS that turns Claude Code into an autonomous agent. It receives <tick> prompts as a heartbeat, adjusts its autonomy based on whether your terminal is focused or not, and commits without asking. The actual prompt says: "Act on your best judgment rather than asking for confirmation."

The next model is codenamed Numbat. There's a comment that says "Remove this section when we launch numbat", and the undercover mode protects opus 4.7 and sonnet 4.8 from leaking into commits.

I'm currently building an AI automation, and reading how Anthropic structures their own agents internally beats any official documentation. The repo is on GitHub (sanbuphy/claude-code-source-code) if you want to dig in yourself.
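A conceptual re-sketch of the loop the post describes. This is NOT Anthropic's code: the types, function names, and block shapes here are invented for illustration. Only the control flow mirrors the description: keep calling the model, and use the presence of tool_use blocks in the response (rather than stop_reason) as the loop's exit signal.

```typescript
// Invented block shape, loosely modeled on streaming content blocks.
type Block =
  | { type: "text"; text: string }
  | { type: "tool_use"; name: string; input: unknown };

// Invented agent loop: call the model, run any requested tools, feed
// results back, and stop only when a response contains no tool_use.
async function agentLoop(
  callModel: (history: Block[]) => Promise<Block[]>,
  runTool: (name: string, input: unknown) => Promise<string>,
): Promise<Block[]> {
  const history: Block[] = [];
  while (true) {
    const blocks = await callModel(history);
    history.push(...blocks);
    const toolUses = blocks.filter(
      (b): b is Extract<Block, { type: "tool_use" }> => b.type === "tool_use",
    );
    if (toolUses.length === 0) return history; // no tool calls -> done
    for (const t of toolUses) {
      const result = await runTool(t.name, t.input);
      history.push({ type: "text", text: result }); // feed results back
    }
  }
}
```

Using the blocks themselves as the termination condition makes the loop robust to an unreliable stop_reason: the model's observable behavior, not its metadata, decides whether another turn is needed.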
Angular Signals with HttpClient – Fetch API Data Using Signals

Angular Signals introduce a powerful reactive way to manage state in Angular applications. When combined with HttpClient, they provide a clean and efficient way to fetch and update API data.

In this tutorial, you will learn:
• How Angular Signals work with HttpClient
• Fetching API data using Signals
• Building reactive UI updates
• A practical example for real-world Angular applications

If you're learning modern Angular patterns, this article will help you understand how Signals simplify data flow and state management.

📘 Read the full article: https://lnkd.in/gaYsvwTN

#Angular #AngularSignals #FrontendDevelopment #WebDevelopment #SoftwareDevelopment #Programming