A React component re-renders when state or props change. It also re-renders when its parent re-renders. That second one is where most performance bugs hide.

On a trading dashboard updating 10 times per second, every wasted render is a dropped frame. Users notice. The app feels sluggish. And the fix isn't "rewrite it" — it's understanding three hooks and when to actually reach for them.

The memoization toolkit in React is useMemo, useCallback, and React.memo. Most devs know what they do. Fewer know when not to use them.

USEMEMO — CACHE AN EXPENSIVE COMPUTATION

```tsx
const processedData = useMemo(
  () => rawTrades.filter(t => t.volume > 1000).sort(...),
  [rawTrades] // only recomputes when rawTrades changes
)
```

Use this when the derivation is genuinely expensive — sorting, filtering, or aggregating large datasets. Don't use it to memoize a string concatenation. The cache itself has a cost.

USECALLBACK — STABLE FUNCTION REFERENCE FOR CHILD PROPS

```tsx
const handleSelect = useCallback((id: string) => {
  setSelected(id)
}, []) // same reference across renders = child won't re-render
```

Without this, a new function is created every render. If that function is passed as a prop to a memoized child, it breaks the memo — the child sees a "changed" prop and re-renders anyway.

REACT.MEMO — SKIP RENDER IF PROPS HAVEN'T CHANGED

```tsx
const TradeRow = memo(({ trade }: { trade: Trade }) => {
  return <tr>{trade.symbol}</tr>
})
```

Only works when the props are stable. If you're passing a new object or function reference on every render, memo does nothing — you need useMemo and useCallback upstream first.

Here's the answer pattern that lands well in interviews:

"I'd profile first with React DevTools to confirm which components are actually re-rendering unnecessarily. Then I'd stabilise callback references with useCallback, memoize expensive derivations with useMemo, and wrap pure display components with React.memo. Memoization is a last resort — not a default — because it adds overhead. The profiler tells you where that overhead is justified."

The word "profile" is doing a lot of work in that answer. It signals that you don't guess at performance problems — you measure them. That's what separates the senior answer from the junior one.

The trap interviewers set: "So should I just wrap everything in memo?" The correct answer is no — and knowing why not is the whole point.

What's the worst unnecessary re-render bug you've debugged? Drop it below 👇

#React #JavaScript #FrontendDevelopment #WebPerformance #SoftwareEngineering
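All three tools hinge on the same JavaScript fact: a fresh object or function literal is a new reference every time, so a shallow prop comparison treats it as a change. A quick plain-JavaScript illustration (no React required; `makeProps` stands in for a component body that rebuilds its props each render):

```javascript
// Two structurally identical literals are different references,
// which is exactly what a memoized child's shallow prop check sees.
const makeProps = () => ({ onSelect: () => {}, style: { margin: 10 } });

const a = makeProps(); // "first render"
const b = makeProps(); // "second render"

console.log(a.style === b.style);       // false: looks like a changed prop
console.log(a.onSelect === b.onSelect); // false: breaks React.memo

// Reusing one reference (what useMemo/useCallback do) compares equal.
const stableStyle = { margin: 10 };
console.log(stableStyle === stableStyle); // true
```

That reference check is the whole mechanism: useMemo and useCallback exist to hand the same reference back across renders so the comparison can succeed.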
Emmanuel Allan’s Post
More Relevant Posts
A few months back, I shared shimmer-from-structure, a library that auto-generates loading skeletons by measuring your rendered components at runtime. It has gotten real usage, plus 1000+ stars on GitHub. Here's how it works under the hood.

🔍 THE PROBLEM
Every component gets written twice: once for real content, once for the skeleton. Change the layout? Update both. They inevitably drift apart. But the structure you're recreating already exists in the DOM. The browser has already calculated every dimension and position.

✨ THE SOLUTION
shimmer-from-structure renders your component once with mock data, measures it using getBoundingClientRect(), and generates pixel-perfect shimmer overlays automatically:

```jsx
<Shimmer loading={isLoading} templateProps={{ user: mockUser }}>
  <UserCard user={user ?? mockUser} />
</Shimmer>
```

When loading={true}: renders with mock data, walks the DOM measuring each element, creates positioned overlays, makes content transparent. When loading={false}: the shimmer disappears and the real content shows. No layout shift, no drift, no maintenance.

⚡ THE PERFORMANCE CHALLENGE
At 60fps, we have ~16.67ms per frame. The killer is reflows (layout recalculations). Reading getBoundingClientRect() forces the browser to recalculate layout. Interleaving DOM writes and reads causes layout thrashing.

🎯 THREE-PHASE BATCHING
We batch all DOM operations:
1. Write Phase: apply CSS changes without reading layout
2. Read Phase: measure all elements in one pass (one reflow)
3. Render Phase: generate overlays

Only one reflow per cycle, regardless of complexity. Even with hundreds of elements, measurement completes in 2-5ms.

🏗️ FRAMEWORK-AGNOSTIC
Supports React, Vue, Svelte, Angular, and SolidJS through a monorepo:
• @shimmer-from-structure/core: framework-agnostic utilities
• Framework adapters: thin lifecycle wrappers

All optimization logic lives in core. Bug fixes benefit all frameworks automatically.
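The three-phase batching described above can be sketched framework-free. The callback names here are illustrative, not the library's actual API, and `measure` is stubbed where a browser would call getBoundingClientRect(); the point is simply that ordering all writes before all reads means layout is recalculated once per cycle:

```javascript
// Sketch of write → read → render batching (illustrative names only).
function runBatch(elements, { write, measure, render }) {
  // Phase 1: all DOM writes, no layout reads (no forced reflow yet)
  for (const el of elements) write(el);
  // Phase 2: all layout reads in one pass (a single reflow in a browser)
  const rects = elements.map(el => measure(el));
  // Phase 3: generate overlays from the measurements
  return rects.map(rect => render(rect));
}

// Stubbed usage with fake "elements":
const overlays = runBatch([{ w: 100 }, { w: 200 }], {
  write: el => { el.transparent = true; },          // e.g. hide real content
  measure: el => ({ width: el.w, height: 20 }),     // stand-in for getBoundingClientRect
  render: rect => `overlay ${rect.width}x${rect.height}`,
});
console.log(overlays); // ["overlay 100x20", "overlay 200x20"]
```

Interleaving `write` and `measure` per element would force one reflow each; the batched ordering keeps it to one total.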
📱 RESPONSIVE
Auto-updates on resize using ResizeObserver, throttled with requestAnimationFrame to 60fps.

You can read the full deep dive here: https://lnkd.in/dwuGtebE
🚀 TRY IT: npm install @shimmer-from-structure/react
📚 https://lnkd.in/dnr3VTN8
🔧 https://lnkd.in/dXedjTa3

What loading state challenges are you facing? I'd love to hear your thoughts!

#WebDevelopment #JavaScript #React #Vue #Svelte #Performance #OpenSource #FrontendDevelopment
I used to think React was just "smart about re-renders." Then I actually dug into how reconciliation works. And I realized I had been writing components that quietly fought the algorithm for years. Here's what's actually happening under the hood — and why it matters for your app's performance.

When state changes, React doesn't touch the real DOM immediately. It builds a new Virtual DOM tree, compares it to the previous one, and figures out the minimum set of changes needed. That comparison process is reconciliation.

The naive version of this problem is O(n³). For 1000 elements, that's a billion comparisons. Completely unusable. So React cheats. In a smart way. It uses a heuristic O(n) algorithm built on two assumptions:
→ Elements of different types always produce different trees
→ Keys tell React which elements are stable across renders

Simple rules. Massive performance gain.

Here's where it gets interesting — and where most bugs actually come from.

Rule 1 in practice: if you swap a <div> for a <section>, React tears down the entire subtree and rebuilds from scratch. Every child component unmounts. All local state is lost. This isn't a bug. It's the algorithm doing exactly what you told it to do. I've seen this cause subtle bugs in forms — a wrapper element changes during a conditional render, and suddenly input state resets mid-interaction.

Rule 2 in practice — the key trap: using the array index as a key is one of the most common mistakes I've reviewed in code. If you have a list and insert an item in the middle, every index below it shifts. React sees completely new keys, throws away the existing nodes, and rebuilds them. What looked like a simple insert becomes a full list re-render. Use stable, unique IDs. Always.

Fiber changed everything in React 16. Before Fiber, reconciliation was synchronous — once started, it couldn't be interrupted. A heavy render could block the main thread and freeze the UI. Fiber broke rendering into small units of work. React can now pause, prioritize, and resume rendering. That's what powers Concurrent Mode, Suspense, and transitions. The algorithm didn't change. The scheduler around it did.

Practical things I now do differently because of this:
→ Never create component definitions inside render — new reference = React thinks it's a different type = full unmount every render
→ Keys on lists always come from data, never from index
→ Wrap stable subtrees in React.memo when the parent re-renders frequently
→ Use the Profiler in DevTools to actually see which reconciliation decisions are expensive

Reconciliation is one of those things that's easy to ignore until performance starts hurting. But once you understand the two rules React operates on, a lot of "React behaves weirdly" moments suddenly make complete sense.

What's the most unexpected reconciliation bug you've run into?

#react #frontend #javascript #webdev #reactjs #frontenddevelopment #softwaredevelopment
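The index-key trap from Rule 2 is easy to simulate. This toy matcher pairs old and new children by key and counts how many need work after an insert at the front; it's a deliberate simplification of React's child reconciliation, not its actual code:

```javascript
// Count children React must patch: a child is free only when the same
// key maps to the same content as before.
function countPatches(oldList, newList, keyFn) {
  const oldByKey = new Map(oldList.map((item, i) => [keyFn(item, i), item]));
  let patches = 0;
  newList.forEach((item, i) => {
    const prev = oldByKey.get(keyFn(item, i));
    if (prev !== item) patches++; // missing or changed: work for React
  });
  return patches;
}

const items = ["a", "b", "c"];
const inserted = ["x", "a", "b", "c"]; // insert "x" at the front

// Stable IDs as keys: only the genuinely new item needs work.
const byId = countPatches(items, inserted, item => item);
console.log(byId); // 1

// Index as keys: every position now holds different content,
// so all four rows get patched.
const byIndex = countPatches(items, inserted, (_, i) => i);
console.log(byIndex); // 4
```

One logical insert, four patched rows under index keys: that's the "full list re-render" described above.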
Blazor doesn't use a virtual DOM. This is a distinction that matters more than most people realize.

React and Vue maintain a tree of JavaScript objects that mirrors the real DOM — a "virtual DOM" — and then reconcile differences between the virtual copy and the actual browser DOM. Blazor takes a fundamentally different approach.

It builds a flat array of RenderTreeFrame structs. Not classes. Structs. Value types sitting in a contiguous block of memory. RenderTreeFrame[] is the core data structure of Blazor's entire rendering pipeline. Each frame is a lightweight instruction: open this element, set this attribute, render this text, mount this child component. Your complex C# component hierarchy gets reduced to this linear sequence during BuildRenderTree(). The RenderTreeDiffBuilder then walks two of these arrays — previous and current — to compute the minimal set of changes.

Why structs instead of an object graph? Because this was designed from day one with WebAssembly constraints in mind. Heap allocations are expensive in WASM. GC pressure kills performance in a browser sandbox. A flat array of value types avoids both. The array even manages its own capacity, typically doubling when it runs out of room, so resizing doesn't thrash during a render cycle.

The diff output gets packaged into a RenderBatch by the RenderBatchBuilder, which is then shipped across whatever boundary applies — JS Interop for WASM, SignalR for Server. The Renderer doesn't care which. It just produces batches.

This is also why sequence numbers in BuildRenderTree matter so much. The diffing algorithm relies on them to align frames between the old and new arrays. Get them wrong (which can happen if you generate them dynamically instead of letting the Razor compiler assign them) and the diff produces nonsensical patches.

Honestly, this part of the codebase in Microsoft.AspNetCore.Components.RenderTree is some of the most thoughtful systems-level engineering in the entire .NET ecosystem. It's the kind of design decision that's invisible when it works — which is always — but reveals real architectural intention when you look under the hood.

Building Blazor Developer Tools (https://lnkd.in/gSnuMhnj) gave me a front-row seat to this. Being able to inspect the render tree frames directly shows you exactly how your components decompose into these flat instruction arrays before diffing even begins.

#blazor #dotnet #microsoft
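The sequence-number alignment can be illustrated outside .NET. Here is a toy diff over two flat frame arrays, sketched in JavaScript for brevity; it is a heavy simplification of what RenderTreeDiffBuilder does, with made-up frame shapes, but it shows why stable, compiler-assigned sequence numbers let a linear walk find the change:

```javascript
// Each "frame" is { seq, text }. Same seq on both sides means the same
// logical instruction; a seq present on one side only was added/removed.
function diffFrames(prev, next) {
  const ops = [];
  let i = 0, j = 0;
  while (i < prev.length && j < next.length) {
    if (prev[i].seq === next[j].seq) {
      if (prev[i].text !== next[j].text) ops.push({ op: "update", seq: next[j].seq });
      i++; j++;
    } else if (prev[i].seq < next[j].seq) {
      ops.push({ op: "remove", seq: prev[i].seq }); i++; // gone from new output
    } else {
      ops.push({ op: "insert", seq: next[j].seq }); j++; // new in this render
    }
  }
  while (i < prev.length) ops.push({ op: "remove", seq: prev[i++].seq });
  while (j < next.length) ops.push({ op: "insert", seq: next[j++].seq });
  return ops;
}

// A conditional branch that emitted nothing last render now emits seq 1:
const prevFrames = [{ seq: 0, text: "Hello" }, { seq: 2, text: "World" }];
const nextFrames = [{ seq: 0, text: "Hello" }, { seq: 1, text: "!" }, { seq: 2, text: "World" }];
console.log(diffFrames(prevFrames, nextFrames)); // [{ op: "insert", seq: 1 }]
```

Note how the walk leans on `prev[i].seq < next[j].seq` to decide insert vs remove: that ordering assumption is exactly what breaks if sequence numbers are generated dynamically instead of being fixed at compile time.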
🔍 React 19.2: 3 anti-patterns now visible like never before

You asked for a follow-up. Here it is. In the last post, I explained how DevTools in React 19.2 show the real reason behind every re-render. Today, I'm showing you 3 anti-patterns you used to ignore. But not anymore. Let's go.

1️⃣ Anonymous objects in props

❌ What many write:

```jsx
<div style={{ margin: 10, padding: 20 }}>
```

✅ The fix:

```jsx
const boxStyle = { margin: 10, padding: 20 };
<div style={boxStyle} />
```

Why? A new object is created every time. React thinks: "Oh, props changed! Need to re-render!" In DevTools you'll see: "Style prop changed: new object created"

2️⃣ useEffect with no dependencies

❌ What many write:

```jsx
useEffect(() => {
  setCount(count + 1);
});
```

✅ The fix:

```jsx
useEffect(() => {
  setCount(prev => prev + 1);
}, []);
```

Why? Without the dependency array, the effect runs after EVERY render. Hello, infinite loop. In DevTools you'll see: "Infinite render loop detected"

3️⃣ Custom hook returning a new function

❌ What many write:

```jsx
function useToggle() {
  const [value, setValue] = useState(false);
  return { value, toggle: () => setValue(v => !v) };
}
```

✅ The fix:

```jsx
function useToggle() {
  const [value, setValue] = useState(false);
  const toggle = useCallback(() => setValue(v => !v), []);
  return { value, toggle };
}
```

Why? Without useCallback, a new function is created on every call. The consumer component re-renders for no reason. In DevTools you'll see: "Toggle function changed: new reference"

📊 Summary
🔹 Anti-pattern #1 — Anonymous objects in props → Solution: extract the object to a constant
🔹 Anti-pattern #2 — useEffect with no dependency array → Solution: add an empty array []
🔹 Anti-pattern #3 — New function on every render → Solution: wrap it with useCallback

💬 Question for you: which of these three anti-patterns do you see most often in your codebase? Mine was #1. What about you? Drop your anti-patterns in the comments.

#React19 #ReactJS #CleanCode #CodeReview #JavaScript #WebDev #ReactHooks #DevTools
React.memo — The Most Misunderstood Performance Optimization in React

Most developers think `React.memo` makes a component faster. It does not. `React.memo` does not make your component render faster. It stops React from rendering a component when nothing has changed. And that is a huge difference.

Imagine this:

```jsx
function Parent() {
  const [count, setCount] = useState(0);
  return (
    <>
      <button onClick={() => setCount(count + 1)}>
        Increment
      </button>
      <Child />
    </>
  );
}

function Child() {
  console.log("Child rendered");
  return <h1>I never change</h1>;
}
```

Every time you click the button:
* Parent re-renders
* Child re-renders
* Console logs again

Even though `Child` never changed. That is wasted work.

Now use `React.memo`:

```jsx
const Child = React.memo(function Child() {
  console.log("Child rendered");
  return <h1>I never change</h1>;
});
```

Now when the parent re-renders, React compares the previous props and new props of `Child`. If the props are the same, React skips rendering the component. That means:
* Less rendering
* Less work for React
* Better performance in large apps

What actually happens internally? `React.memo` does a shallow comparison: each prop is compared by identity, roughly `Object.is(prevProps.x, nextProps.x)`, never a deep equality check.

For primitive values like:

```jsx
name="Durgesh"
age={22}
```

it usually works perfectly. But for objects and functions:

```jsx
<Child user={{ name: "Durgesh" }} />
```

This will STILL re-render. Because every render creates a new object reference. Same problem with functions:

```jsx
<Child onClick={() => console.log("Hello")} />
```

That function is recreated every render. So `React.memo` thinks the props changed. Which means your optimization fails.

This will NOT work:

```jsx
const Child = React.memo(({ user }) => {
  console.log("Rendered");
  return <h1>{user.name}</h1>;
});

<Child user={{ name: "Durgesh" }} />
```

Because `user` is a new object every render.

This WILL work:

```jsx
const user = useMemo(() => ({ name: "Durgesh" }), []);

<Child user={user} />
```

Now the object reference stays the same. So `React.memo` can skip the render.

When should you use React.memo? Use it when:
* The component renders often
* The component is expensive to render
* The props rarely change
* You have large lists or complex UI

When should you NOT use it? Do NOT wrap every component with `React.memo`. Because `React.memo` also has a cost. React now has to compare props every render. For tiny components, that comparison may cost more than simply rendering.

The biggest mistake developers make is this: they use `React.memo` everywhere without checking if the component was actually re-rendering unnecessarily. First use the React DevTools Profiler. Measure. Then optimize.

Because performance optimization is not about adding more code. It is about stopping useless work. And that is exactly what `React.memo` does.

#react #reactjs #javascript #frontend #webdevelopment #performance
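The shallow comparison behind `React.memo` can be sketched in a few lines. This is a simplification for illustration (React's internal `shallowEqual` also handles some edge cases), but the key behavior is the same: each prop is checked with `Object.is`, never deep equality:

```javascript
// Simplified sketch of the shallow prop comparison behind React.memo.
function shallowEqual(prevProps, nextProps) {
  const prevKeys = Object.keys(prevProps);
  const nextKeys = Object.keys(nextProps);
  if (prevKeys.length !== nextKeys.length) return false;
  // Each prop is compared by identity, not by structure.
  return prevKeys.every(k => Object.is(prevProps[k], nextProps[k]));
}

// Primitives compare by value, so this "render skip" succeeds:
console.log(shallowEqual({ name: "Durgesh", age: 22 },
                         { name: "Durgesh", age: 22 })); // true

// A fresh object literal per render fails the identity check:
console.log(shallowEqual({ user: { name: "Durgesh" } },
                         { user: { name: "Durgesh" } })); // false
```

That `false` on structurally identical objects is exactly why `React.memo` needs `useMemo`/`useCallback` upstream to stabilize object and function props.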
Angular performance is not just about code. Network + assets + rendering all matter. Here's what actually makes a difference:

→ Image format matters
Photos / complex images:
✔ WebP → ~25–35% smaller than JPEG
✔ AVIF → even smaller (best compression)

Icons / UI graphics:
✔ SVG (vector) → resolution independent, very small, scales perfectly
❌ Don't use SVG for photos

→ Compression (images)
❌ Raw uploads
✔ Use Squoosh or TinyPNG
Even modern formats need compression.

→ Angular image optimization
<img ngSrc="image.webp" width="400" height="300">
✔ Lazy loading
✔ Optimized loading strategy
✔ Better Core Web Vitals

→ Priority for critical images
<img ngSrc="hero.webp" priority>
Improves LCP.

→ Responsive images
<img srcset="small.webp 480w, large.webp 1200w">
Serve only what's needed.

→ API / network compression
❌ Sending raw JSON responses
✔ Enable compression: Gzip (standard) or Brotli (better compression)
Example: a 500KB JSON response drops to ~100–150KB after gzip. Huge difference in load time.

→ Enable it on the server
Node/Express: app.use(compression());
Nginx: gzip on; gzip_types application/json;

→ Caching + CDN
✔ Cache static assets
✔ Use a CDN for faster delivery

→ Resize assets
❌ A 2000px image for a 300px UI
✔ Match the actual display size

Performance impact:
→ Faster API response time
→ Faster image load
→ Better LCP
→ Reduced bandwidth
→ Improved UX

Quick rule:
→ Images → WebP / AVIF
→ Icons → SVG
→ APIs → gzip / Brotli
→ Angular → NgOptimizedImage

Key takeaway: most performance gains are outside your components. Optimize what travels over the network.

Are you using Brotli or still on gzip?
**Stop guessing where your files go. Here's the React folder structure every developer needs to know. 🗂️**

After years of messy codebases, late-night debugging sessions, and onboarding nightmares — the secret to a scalable frontend isn't just your code. It's **how you organize it.**

Here's what each folder does and why it matters:

📡 **API** — All your backend connections in one place. No more hunting for fetch calls.
🎨 **Assets** — Static files, images, fonts. Clean and centralized.
🧩 **Components** — Reusable UI building blocks. Write once, use everywhere.
🌐 **Context** — Global state without prop drilling chaos.
📦 **Data** — Static JSON content and constants.
🪝 **Hooks** — Custom logic extracted and reusable across the entire app.
📄 **Pages** — One folder per route. Clean, readable, scalable.
🔄 **Redux** — Advanced state management for complex apps.
⚙️ **Services** — Business logic and frontend services, separated from UI.
🛠️ **Utils** — Helper functions that every file in your app will thank you for.

A well-structured project isn't a luxury — **it's what separates junior from senior developers.**

Save this. Share it with your team. Your future self will thank you. 💾

💬 What does YOUR folder structure look like? Drop it in the comments 👇

#ReactJS #FrontendDevelopment #WebDevelopment #JavaScript #CleanCode #SoftwareEngineering #Programming #React #CodeNewbie #100DaysOfCode #FolderStructure #TechTips #DeveloperLife #SoftwareDeveloper #LearnToCode #OpenSource #CodingTips #FullStackDeveloper #FrontendEngineer #UIUXDevelopment
Who really owns your form data?

In a standard HTML input, the DOM is the boss. It holds the value in its own internal memory, and you only "ask" for it when the user hits submit. But in React, we don't like hidden state. We want every piece of data to be explicit and predictable. This is where Controlled Components come in.

In this pattern, the React state is the single source of truth. The input doesn't maintain its own value. Instead, you tell the input exactly what to display using the 'value' prop, and you update that value through an 'onChange' handler that modifies the state. The input is "controlled" because its behavior is entirely driven by the React component.

Why go through this extra boilerplate? It gives you total coordination over the UI. Since the data lives in your state, you can perform instant field validation, disable the submit button based on specific criteria, or even format the user's input in real time. There is no "syncing" issue between the DOM and your logic because they are never out of alignment.

Of course, controlling every single character stroke in a massive form can feel like overkill. For simple, high-performance scenarios where you just need the data at the end, Uncontrolled Components using 'refs' might be faster. But for most applications, the predictability of a controlled flow far outweighs the cost of a few extra lines of code. It ensures that what the user sees is exactly what your application "knows".

#ReactJS #SoftwareEngineering #WebDevelopment #FrontendArchitecture #CodingTips #Javascript
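The controlled flow can be modeled without React at all: state holds the value, every change goes through a handler that can format and validate, and the UI is derived from state. A plain-JavaScript sketch of that loop, with illustrative names rather than any React API:

```javascript
// Minimal model of a controlled input: state is the single source of truth.
function createControlledField({ format = v => v, validate = () => true } = {}) {
  let value = "";
  return {
    // The onChange path: state only ever changes through the handler.
    change(raw) { value = format(raw); },
    // The value prop: what the input is told to display.
    get value() { return value; },
    // Derived UI state, e.g. enabling/disabling the submit button.
    get canSubmit() { return validate(value); },
  };
}

// A phone field that strips non-digits in real time and
// enables submit once 10 digits are present:
const phone = createControlledField({
  format: raw => raw.replace(/\D/g, ""),
  validate: v => v.length === 10,
});

phone.change("(555) 123-4567");
console.log(phone.value);     // "5551234567"
console.log(phone.canSubmit); // true
```

Because the "input" can only display what the state holds, there is nothing to sync: the formatting and validation happen on the single write path.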
Built a Finance Dashboard using React 💸
It helps track income and expenses in a simple, clean way while visualizing financial insights.

🔧 Tech Stack: React (Hooks), Tailwind CSS, Recharts, React Icons, LocalStorage

⚙️ Key Features:
• Add, edit, delete transactions
• Search & filter by category/type
• Admin & Viewer role system
• Dark mode 🌙
• Data persistence using localStorage (no data loss on refresh)
• Summary cards (balance, income, expenses, top category)
• Pie chart for spending insights
• Line chart for monthly balance trend
• Fully mobile responsive

💡 React Concepts Used:
• useState for managing UI & transaction state
• useEffect for syncing data with localStorage
• Conditional rendering (modals, roles, dark mode)
• Derived state (filtered data, totals, charts)
• Component-driven UI thinking

📊 Libraries Used:
• Recharts → for Pie & Line charts
• React Icons → for UI icons
• Tailwind CSS → for fast and responsive styling

🚧 Challenges I Faced:
Managing and transforming raw transaction data into meaningful insights (monthly trends & category breakdown) was tricky. 👉 Solved it by creating custom data maps (like monthlyDataMap & categoryMap) and then converting them into chart-friendly formats. Also handled edge cases like editing transactions, maintaining unique IDs, and keeping UI state consistent.

🎯 Tried to keep the UI minimal, fast, and user-friendly.

Here's the live demo 👇 https://lnkd.in/gpHQay68

Would love your feedback 🙌

#reactjs #frontend #webdevelopment #javascript #tailwindcss #projects
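The transformation described above, raw transactions into chart-friendly maps, might look roughly like this. The shapes and names (`monthlyDataMap`, `categoryMap`, the transaction fields) are assumptions for illustration; the actual project's data model may differ:

```javascript
// Turn raw transactions into per-category totals (for a pie chart)
// and a signed monthly series (for a line chart).
function summarize(transactions) {
  const categoryMap = {};
  const monthlyDataMap = {};
  for (const t of transactions) {
    const signed = t.type === "income" ? t.amount : -t.amount;
    categoryMap[t.category] = (categoryMap[t.category] ?? 0) + t.amount;
    const month = t.date.slice(0, 7); // "YYYY-MM"
    monthlyDataMap[month] = (monthlyDataMap[month] ?? 0) + signed;
  }
  // Convert maps to the array-of-objects shape charting libraries expect.
  const pieData = Object.entries(categoryMap).map(([name, value]) => ({ name, value }));
  const lineData = Object.entries(monthlyDataMap).map(([month, balance]) => ({ month, balance }));
  return { pieData, lineData };
}

const { pieData, lineData } = summarize([
  { type: "income",  amount: 3000, category: "Salary", date: "2024-01-05" },
  { type: "expense", amount: 120,  category: "Food",   date: "2024-01-09" },
  { type: "expense", amount: 80,   category: "Food",   date: "2024-02-02" },
]);
console.log(pieData);  // [{ name: "Salary", value: 3000 }, { name: "Food", value: 200 }]
console.log(lineData); // [{ month: "2024-01", balance: 2880 }, { month: "2024-02", balance: -80 }]
```

Keeping this as a pure function of the transaction list makes it easy to recompute as derived state whenever filters or edits change the underlying data.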
Most dependency graphs are unreadable.

You've probably seen them. A giant web of nodes and edges that looks impressive… but tells you almost nothing.

The problem is not the data. It is the interface. Developers don't think in graphs. They think in structure. Folders. Files. Hierarchies.

So I tried something different. I added a browser view to Depsly that lets you explore dependencies like a file tree. You can:
• Traverse parent → child relationships
• See transitive dependencies clearly
• Understand structure without visual overload

Same data. Completely different experience.

This is now part of Depsly v0.1.8. If you want to try it:

pip install depsly
depsly analyze

Would love feedback from people who've struggled with dependency graphs.

#opensource #devtools #javascript #nodejs #softwareengineering #ux #programming #webdev
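The tree-style exploration of transitive dependencies boils down to a depth-first walk over the dependency map. A generic sketch of that traversal (not Depsly's implementation; the graph shape is a made-up adjacency map):

```javascript
// Collect all transitive dependencies of a package via DFS,
// with a `seen` set guarding against cycles.
function transitiveDeps(graph, pkg, seen = new Set()) {
  for (const dep of graph[pkg] ?? []) {
    if (!seen.has(dep)) {
      seen.add(dep);
      transitiveDeps(graph, dep, seen); // recurse into children
    }
  }
  return [...seen];
}

const graph = {
  app: ["react", "utils"],
  utils: ["lodash"],
  react: [],
  lodash: [],
};

console.log(transitiveDeps(graph, "app").sort()); // ["lodash", "react", "utils"]
```

The same walk renders naturally as an expandable tree: each recursion level is one level of indentation, which is what makes the hierarchy readable where a node-and-edge plot is not.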
Nice post! It’s really tempting to use techniques we’ve learned to improve our code, especially if we haven’t taken the time to fully understand their pros and cons.