🛡️ Next.js 16: Security, Logic & Multi-Tenancy

1. Route Handlers (route.ts) vs. proxy.ts
Question: When should logic reside in proxy.ts versus a dedicated Route Handler?
Answer: Think of proxy.ts as the Receptionist and route.ts as the Specialist. Use proxy.ts for cross-cutting concerns that happen before the request hits your app: auth redirects, A/B testing rewrites, and geolocation-based routing. Use Route Handlers for specific, publicly reachable API endpoints: webhook listeners (Stripe), file generation (PDFs/CSVs), or acting as a "Backend for Frontend" (BFF) that aggregates data from multiple microservices.

2. Server Actions vs. Routing
Question: How can a Server Action trigger a redirect while maintaining client-side state?
Answer: By calling the redirect() function within the action. The mechanism: in Next.js 16, redirect() triggers a client-side navigation (soft nav). The browser doesn't do a full page reload; the React state in your persistent layouts (like a navigation bar or sidebar) stays intact while the main content area switches.
Pro tip: Call revalidatePath or revalidateTag before the redirect to ensure the user lands on a page with fresh data.

3. Edge Runtime Constraints
Question: Which Node.js libraries will fail in proxy.ts, and how do you architect around it?
Answer: Any library relying on native C++ modules or filesystem access (fs, path, os, Node's crypto) will fail in the Edge Runtime used by proxy.ts.
The fix: Use Web-standard alternatives (e.g., jose instead of jsonwebtoken for JWTs) or move the heavy computation to a Route Handler configured with the nodejs runtime.

4. Dynamic Metadata & SEO Performance
Question: How do you generate dynamic OG tags in an async route without blocking the initial paint?
Answer: Use generateMetadata. The internal logic: in Next.js 16, metadata is streamed. For users, the UI renders immediately while metadata is injected asynchronously into the <body>. Next.js 16 detects bots and waits for generateMetadata to resolve, ensuring a fully SEO-optimized <head>.

5. Multi-Tenant Subdomain Strategy
Question: What is the most performant way to implement a multi-tenant strategy (e.g., tenant.app.com)?
Answer: Middleware rewriting.
Step 1: In proxy.ts, extract the hostname and check it against a fast-access store (like Vercel KV or Redis).
Step 2: Use NextResponse.rewrite() to map the subdomain to an internal dynamic route (e.g., acme.app.com rewritten to /tenants/acme).
The result: Users see clean subdomains while you use a single [tenant] directory to share logic and isolate data via slugs. Simple and organized.

#NextJS16 #WebSecurity #MultiTenancy #ServerActions #FullStack #SoftwareArchitecture #Vercel #TechLead #TechInterview
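Step 1 of the multi-tenant recipe (mapping the hostname to a tenant path) is a pure function you can unit-test outside the middleware. A minimal sketch with illustrative names; ROOT_DOMAIN and KNOWN_TENANTS stand in for your real domain and your Redis/KV lookup:

```javascript
// Map an incoming Host header to an internal tenant path.
// KNOWN_TENANTS is a stand-in for a fast-access store (Vercel KV / Redis);
// all names here are illustrative, not from the original post.
const ROOT_DOMAIN = 'app.com';
const KNOWN_TENANTS = new Set(['acme', 'globex']);

function tenantRewritePath(hostname, pathname) {
  const suffix = '.' + ROOT_DOMAIN;
  if (!hostname.endsWith(suffix)) return null; // apex or foreign domain: no rewrite
  const tenant = hostname.slice(0, -suffix.length); // acme.app.com -> acme
  if (!tenant || tenant.includes('.') || !KNOWN_TENANTS.has(tenant)) return null;
  // acme.app.com/dashboard -> /tenants/acme/dashboard
  return `/tenants/${tenant}${pathname}`;
}
```

In proxy.ts you would feed the result to NextResponse.rewrite(new URL(path, request.url)); keeping the mapping pure makes that middleware trivial.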
Frontend API Caching — LLD Strategies for Scalable Apps ⚡

One thing I’ve realized while learning frontend system design: fetching data is easy. Fetching data efficiently is engineering.

If every user action triggers a new API request, your app will eventually suffer from:
• Slow UI
• Server overload
• Poor user experience

That’s where API caching strategies come in. Here are some key concepts I’m diving into 👇

🧠 1️⃣ React Query Architecture
Instead of manually managing:
• loading state
• error state
• caching
• refetching
Libraries like React Query create a server state layer.
Mental model:
Server → source of truth
React Query cache → fast local layer
UI → consumer
This architecture removes a lot of manual state management.

---

⚡ 2️⃣ Stale-While-Revalidate
One of the most powerful caching patterns.
Flow:
1️⃣ Show cached data instantly
2️⃣ Fetch fresh data in the background
3️⃣ Update the UI when new data arrives
User gets:
✔ Instant UI
✔ Fresh data
✔ No loading flicker
Fast and accurate.

---

🔮 3️⃣ Prefetching Data
Instead of waiting for users to navigate, fetch data before they need it.
Example: a user hovers over a product link → prefetch the product API. When they click, the data is already cached.
Result: ⚡ instant navigation. Small optimization, big UX improvement.

---

🔄 4️⃣ Cache Invalidation (Hardest Problem)
There’s a famous quote in engineering:
> “There are only two hard things in Computer Science: cache invalidation and naming things.”
When data changes (a new product added, the cart updated, a profile edited), the cache must refresh.
Strategies:
• Time-based invalidation
• Mutation triggers
• Query key refetching
Good cache design keeps the UI consistent with backend truth.

---

🧠 Mental Model
The frontend data layer should behave like a mini database:
• Queries = reads
• Mutations = writes
• Cache = read optimization
When designed well, your app feels instant.

---

I’m starting to see frontend architecture less like UI development and more like data systems design in the browser.
And caching plays a huge role in that.

#SystemDesign #Frontend #Backend #MERNStack #WebDev #FullStack #Developer #Web #Performance #Rendering #Express #JavaScript #BackendDev #Node #Mongo #Database #ReactQuery #Caching #Cache
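The stale-while-revalidate flow above fits in a few lines of framework-free JavaScript. This is an illustrative sketch (createSwrCache is a made-up name, not a real library API): serve the cached value instantly, and if it is stale, refresh it in the background.

```javascript
// A miniature stale-while-revalidate cache. Real libraries (React Query, SWR)
// do this with far more care; everything here is an illustrative sketch.
function createSwrCache(fetcher, staleAfterMs) {
  const cache = new Map(); // key -> { data, storedAt }
  return {
    async get(key, now = Date.now()) {
      const entry = cache.get(key);
      if (entry) {
        if (now - entry.storedAt > staleAfterMs) {
          // Stale: revalidate in the background; caller still gets the cached value instantly.
          fetcher(key).then((data) => cache.set(key, { data, storedAt: Date.now() }));
        }
        return { data: entry.data, fromCache: true };
      }
      const data = await fetcher(key); // cache miss: the only time we must wait
      cache.set(key, { data, storedAt: now });
      return { data, fromCache: false };
    },
  };
}
```

The second call for the same key returns instantly from cache, which is exactly the "no loading flicker" property described above.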
React.js Part 4 (2026): Advanced Hooks Deep Dive 🔥

Hi everyone! 👋 In Part 3, we covered the core hooks: useState, useEffect, and useRef. Today, let’s master the Advanced Hooks that separate good React devs from great ones.

1) useReducer (Complex State Logic)
What it does: Manages state with a reducer function, like useState on steroids. Think of it like a traffic controller: actions go in, state updates are predictable.
Key points:
• Best for multiple related state values
• Logic lives outside the component (easier testing)
• `dispatch(action)` instead of `setState(value)`
• Pairs well with useContext for global state
📌 Examples: shopping cart, large forms, multi-step flows, undo/redo.
useState vs useReducer:
• useState → simple values
• useReducer → complex transitions

2) useMemo (Expensive Computation Caching)
What it does: Memoizes a computed value and recalculates only when dependencies change. Like a calculator with memory: don’t redo work unnecessarily.
Use it for:
1. Expensive computations (filtering/sorting big lists)
2. Stable derived values
3. Preventing unnecessary recalculations
Don’t overuse it:
• It has overhead
• Profile first, optimize second
📌 Examples: filtering 10k items, computing totals, chart data prep.

3) useCallback (Stable Function References)
What it does: Returns a memoized function reference between renders.
Why it matters: a new function on every render means a memoized child re-renders anyway. Use it when passing callbacks to memoized components.
useMemo vs useCallback:
• useMemo → caches a value
• useCallback → caches a function
📌 Examples: onClick handlers, API functions passed as props, debounced handlers.

4) useContext (Global State Without Prop Drilling)
Lets components access shared data without passing props through every level. Think Wi-Fi: it connects without cables.
3-step pattern:
1. createContext()
2. Provider
3. useContext()
Common uses:
• Theme
• Auth state
• Language
• Feature flags
Tip: Combine with useReducer for a lightweight global store.

Performance Tips
• React.memo — skip re-renders if props don’t change
• useCallback + React.memo — stable props
• useMemo — skip heavy recalculations
• React.lazy() + Suspense — code splitting
• Use stable, unique keys in lists
• Golden rule: measure before optimizing.
• React DevTools Profiler is your best friend.

Next post: Custom Hooks — building reusable logic (useFetch, useDebounce, useLocalStorage, useForm).

#ReactJS #JavaScript #ReactHooks #FrontendDevelopment #LearnReact
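The "logic lives outside the component" point about useReducer is worth seeing concretely: a reducer is a plain function, so it can be unit-tested without rendering anything. A minimal sketch (cartReducer and the action shapes are illustrative names, not from the post):

```javascript
// useReducer's core idea, framework-free: a pure reducer function that maps
// (state, action) -> new state. No React import needed to test it.
function cartReducer(state, action) {
  switch (action.type) {
    case 'add':
      return { ...state, items: [...state.items, action.item] };
    case 'remove':
      return { ...state, items: state.items.filter((i) => i.id !== action.id) };
    case 'clear':
      return { ...state, items: [] };
    default:
      return state; // unknown actions leave state untouched
  }
}
```

Inside a component you would wire it up as `const [state, dispatch] = useReducer(cartReducer, { items: [] })` and call `dispatch({ type: 'add', item })` from handlers.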
Node.js FAQ

Q. Why is Node.js single-threaded?
Node.js runs your JavaScript on a single thread and handles I/O asynchronously. For typical web workloads, this non-blocking, single-threaded model achieves better performance and scalability than spawning a thread per request.

Q. What is the Node.js event loop?
The event loop is the core mechanism that enables Node.js to handle multiple tasks efficiently on a single thread. When you perform an operation like reading a file, Node.js doesn’t wait for the task to complete. Instead, it delegates the task to the operating system and moves on to handle other tasks in the queue. Once the task finishes, the event loop picks up the result and executes the associated callback function. This asynchronous, non-blocking approach is what makes Node.js highly scalable and efficient, especially for I/O-intensive tasks like serving multiple users or processing API requests.

Q. What are modules in Node.js, and how do you use them?
Modules in Node.js are reusable blocks of code that help organize functionality into smaller, manageable pieces. There are three types of modules:
⦿ Core modules: built into Node.js (e.g., fs, http, path)
⦿ Local modules: custom modules you create within your project
⦿ Third-party modules: installed via npm (e.g., Express)

Here’s an example of a local module.

math.js:
function add(a, b) {
  return a + b;
}
module.exports = add;

In app.js:
const add = require('./math');
console.log(add(2, 3)); // Output: 5

Q. What is REPL in the context of Node.js?
In Node.js, REPL stands for Read, Eval, Print, Loop. It is an interactive environment, similar to a console or Linux terminal, where you enter commands and see the system's responses. REPL performs four tasks:
• Read: the user's input is read, parsed into a JavaScript data structure, and stored in memory.
• Eval: the data structure is evaluated.
• Print: the outcome of the evaluation is printed.
• Loop: the cycle repeats until the user presses CTRL+C twice.

Q. What are the features of Node.js?
• Asynchronous and non-blocking I/O operations
• Event-driven architecture
• Single-threaded event loop
• Scalability and high concurrency
• Efficient module system with npm (Node Package Manager)
• Cross-platform compatibility

#javascript #nodejs #backend #fullstack #development #interview #readytowork #opentowork #immediateJoiner
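The non-blocking behavior behind the event-loop answer can be seen in a few lines of plain Node.js: synchronous code always runs to completion before any timer or promise callback fires.

```javascript
// Non-blocking in one snippet: callbacks are deferred until the current
// synchronous code has finished and the call stack is empty.
const order = [];
order.push('sync start');
setTimeout(() => order.push('timer callback'), 0);     // macrotask: a later loop turn
Promise.resolve().then(() => order.push('microtask')); // microtask: after sync code
order.push('sync end');
// Immediately after this line only the synchronous pushes have happened;
// the microtask and timer callbacks run once the stack unwinds.
```

This is the same mechanism that lets one thread juggle thousands of I/O operations: the expensive waiting happens off the JavaScript thread, and only the callbacks queue up.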
React Query changed how I think about frontend architecture.

Before learning it deeply, my React apps looked like this:
• `useEffect` everywhere
• Manual loading states
• Duplicate API calls
• Complex global state

The real problem? I was mixing server state with client state. Once I understood React Query architecture, everything became simpler.

---

## 🧠 Mental Model

Think of React Query as a server state manager.

Architecture:
Server (API)
↓
React Query Cache
↓
React Components

Your components never talk directly to the server. They talk to the cache layer. This small shift solves many problems automatically.

---

## ⚡ Query Flow

When a component requests data:
1️⃣ React Query checks the cache
2️⃣ If data exists → return instantly
3️⃣ If stale → re-fetch in the background
4️⃣ Cache updates → UI re-renders

This pattern is called Stale-While-Revalidate.

Result:
• Fast UI
• Fresh data
• Minimal API calls

---

## 🔄 Mutations (Writes)

For updates like:
• Add to cart
• Update profile
• Create order

React Query uses mutations.

Flow:
User action
↓
Mutation request
↓
Server update
↓
Invalidate related queries
↓
Refetch fresh data

This keeps the UI and backend in sync.

---

## 🚀 Prefetching Strategy

One underrated feature is prefetching.

Example: a user opens the product list. When they hover on a product → prefetch the product details API. By the time they click → the data already exists in the cache. Navigation becomes instant.

---

## 🔥 Why This Matters

When apps scale, manual fetching leads to:
❌ API duplication
❌ Inconsistent UI state
❌ Difficult debugging

React Query solves this by introducing a data architecture layer. The frontend starts behaving like a **distributed system client** instead of just UI.

---

Now I’m designing frontend apps with:
• Server state layer
• Cache strategy
• Query invalidation rules

Instead of just writing fetch calls.

---

👉 Curious to know: do you prefer **React Query** or **SWR** for server state management?
#SystemDesign #Frontend #Backend #MERNStack #WebDev #FullStack #Developer #Web #Developer #Performance #Rendering #Express #JavaScript #BackendDev #Node #Mongo #Database #TanStack #Query #React
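The prefetching strategy above can be sketched without any library: a tiny cache that stores promises, so a hover can start the request and a later click reuses the same in-flight (or finished) promise. The API shape here is illustrative, not React Query's actual interface.

```javascript
// A tiny promise cache illustrating prefetch-on-hover. createQueryCache is an
// illustrative name; React Query's real equivalent is queryClient.prefetchQuery.
function createQueryCache(fetcher) {
  const cache = new Map(); // key -> Promise of data
  return {
    prefetch(key) {
      // Start fetching but don't await; a no-op if already started.
      if (!cache.has(key)) cache.set(key, fetcher(key));
    },
    get(key) {
      this.prefetch(key); // reuses the prefetched promise if present
      return cache.get(key);
    },
  };
}
```

On hover you call `prefetch('product:42')`; on click, `get('product:42')` resolves from the request that is already underway, which is why navigation feels instant.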
💡 Frontend Insight

Many React projects start simple. But as the application grows, API calls often become scattered across multiple components. Over time this usually leads to:
• duplicated request logic
• inconsistent data fetching patterns
• components that mix UI and networking logic

One pattern that helps a lot in larger projects is introducing a centralized API layer. Tools like RTK Query make this much easier by handling caching, deduplication, and API state management.

I wrote a short article explaining how this pattern works in practice.
🔗 https://lnkd.in/drDkh6xb

Curious how other developers structure API layers in React applications.

#reactjs #frontend #redux
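One concrete thing a centralized API layer buys you is in-flight deduplication: two components requesting the same endpoint at the same moment share one network request. A hedged sketch of that mechanism (createApi and fetchJson are illustrative names, not RTK Query's API):

```javascript
// Centralized API layer with in-flight request deduplication.
// The real fetch client is injected, which also makes this testable.
function createApi(fetchJson) {
  const inFlight = new Map(); // path -> pending Promise
  return {
    get(path) {
      if (!inFlight.has(path)) {
        const p = fetchJson(path).finally(() => inFlight.delete(path));
        inFlight.set(path, p);
      }
      return inFlight.get(path); // concurrent callers share one request
    },
  };
}
```

Once every component goes through `api.get(...)`, duplicated request logic disappears by construction, which is the scattering problem the post describes.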
Day 4: Under the Hood — OpenClaw Is an Orchestration Layer, Not a Chatbot

Hook: If you think OpenClaw is just a wrapper around the Claude API, you are fundamentally misunderstanding the tech stack of 2026.

The paradigm shift: Modern web architecture (like your typical MERN stack) is highly capable, but it is ultimately reactive—it waits for an event or HTTP request to change state. OpenClaw (currently on stable v2026.3.2) flips this model. It is a persistent, autonomous orchestration layer, operating as a centralized control plane running on a single Node.js (≥22) process.

Here is the actual 4-layer architecture powering the "Jarvis" hype:

1. The Gateway (Control Plane)
This is the nervous system. Running locally on port 18789, it handles WebSocket connections, enforces your openclaw.json security policies, and manages authentication. It doesn't "think"—it routes.

2. The Agent Runtime (Reasoning Loop)
Built on the Pi agent framework, this is the core autonomous loop. It assembles context from session history and local Markdown files, invokes the LLM, and dispatches tool calls.
Note on state: if your Gateway restarts, your agent doesn't get "amnesia." Transcripts are persisted to disk as .jsonl files. You only lose the active, in-flight context.

3. Tools & Skills (Execution)
This is where the supply chain risk from Day 3 lives. Skills act as MCP (Model Context Protocol) servers to expose tools to the agent. Crucially, the Sandbox (Docker-based isolation) is completely separate from MCP and is strictly opt-in. The Agent Runtime dispatches the tool; the Sandbox contains it.

4. Channels & Nodes (Interface)
This layer connects the Gateway to the outside world, supporting 20+ messaging adapters (Slack, WhatsApp) and native mobile apps.

🛡️ The IronPlate architecture tip: Because the Agent Runtime is model-agnostic, you can route complex logic to Claude 4.6 (using the new adaptive thinking default) and route data-sensitive tasks to a local Ollama model.

The catch: local/smaller models are significantly weaker at resisting prompt injection. If you use Ollama, you must pair it with strict Docker sandboxing and minimal tool profiles.

We want to hear from you 👇
When building agentic workflows, what is your current engine strategy?
Drop a ☁️ if you rely purely on cloud APIs (Anthropic/OpenAI) for the reasoning loop.
Drop a 💻 if you are hybrid-routing sensitive tasks to local models (Ollama/vLLM).
Let's debate in the comments.

Follow IronPlate.ai — Day 5 is the big showdown: OpenClaw vs. Manus. Which one actually owns the autonomous desktop?

#OpenClaw #SystemArchitecture #AgenticAI #IronPlateAI #SoftwareEngineering #NodeJS #DevOps2026
👻 The "Ghost in the Machine": Angular Memory Leaks

You closed the tab. You navigated to another page. But your Angular component might still be alive. Running. Listening. Executing code. In the background. 👀

The Hidden Logic
In Angular, when you call observable.subscribe(), you open a live data stream. Think of it like opening a water pipe. If you don't close it using unsubscribe(), the pipe never stops flowing. Even after the component that opened it is destroyed.

The Real Problem (Most Developers Miss This)
The subscription still holds a reference to the destroyed component. Which means:
• Component destroyed ✔
• Memory released ❌
• Observable stopped emitting ❌
Because the reference still exists, the JavaScript garbage collector cannot remove the component from memory. Your application slowly accumulates dead components. Like ghosts haunting your app. 👻

It Gets Worse...
Angular uses Zone.js to track async operations. Every time the Observable emits:
Observable → Zone.js → Angular Change Detection
Even if the component is no longer on screen. Which means Angular keeps doing unnecessary work. CPU usage rises. Memory grows. Your laptop fan starts sounding like a jet engine. ✈️

The Silent Performance Killer
In large apps this leads to:
• Gradually increasing RAM usage
• Slower navigation between pages
• Too many event listeners
• Chrome DevTools showing growing heap memory
All caused by one forgotten unsubscribe().

The Pro Developer Fix
Modern Angular solutions:
✔ takeUntil()
✔ Async pipe
✔ DestroyRef (Angular 16+)
✔ takeUntilDestroyed()
✔ Signals (Angular 16+ reactive model)

The Viral Twist
Most Angular developers think performance problems come from large components or heavy DOM rendering. But sometimes the real problem is... a component that died long ago, but is still running code. 👻

#Angular #AngularDeveloper #RxJS #JavaScript #TypeScript #FrontendDevelopment #WebDevelopment #SoftwareEngineering #MemoryLeak #PerformanceOptimization #CleanCode #DeveloperCommunity
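The leak mechanics are framework-independent: the stream keeps a reference to every subscriber until unsubscribe removes it. A minimal sketch (not RxJS, just the core idea that takeUntil / DestroyRef automate):

```javascript
// Why a forgotten unsubscribe pins a "dead" component: the stream's Set holds
// the handler closure, and the closure holds the component. Illustrative only.
function createStream() {
  const subscribers = new Set();
  return {
    subscribe(fn) {
      subscribers.add(fn); // the stream now references the handler (and whatever it closes over)
      return { unsubscribe: () => subscribers.delete(fn) };
    },
    emit(value) { subscribers.forEach((fn) => fn(value)); },
    get subscriberCount() { return subscribers.size; },
  };
}
```

If a component subscribes with a handler that closes over `this` and never unsubscribes, the Set above keeps that closure (and so the component) reachable, which is exactly why the garbage collector cannot reclaim it.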
🚀 **Next.js Frontend Data Fetching: Mistakes vs Best Practices (Big-Scale Projects)**

If data fetching isn't handled properly on the frontend, a project quickly becomes messy 😓 Especially in a large-scale app, it's impossible to maintain without structure.

Here's a simple breakdown from real-world experience 👇

❌ **Common Mistakes (Avoid These)**
• Calling APIs directly inside components (with useEffect)
• No caching → the same request fired over and over 🔁
• Mixing UI and data logic
• Ignoring loading and error states
• Over-fetching unnecessary data
• Duplicating API logic across components

✅ **Best Practices (Follow These)**

🔹 **1. Use a Service Layer for Data Fetching**
Keep API calls in a separate layer 👇
✔ Clean code
✔ Reusable
✔ Easy maintenance

🔹 **2. Use TanStack React Query**
A must for managing async state 👇
✔ Smart caching
✔ Auto refetch
✔ Loading and error handling
✔ Better performance 🚀

🔹 **3. Keep Components Clean (UI Only)**
Components should contain only UI, not logic.

🏗️ **Recommended Frontend Flow**
UI Component
↓
Custom Hook (useProducts)
↓
Service Layer (API functions)
↓
Backend API

💡 **Why This Matters**
👉 Without structure: code gets messy, slow, and hard to scale
👉 With the proper flow:
✔ Clean architecture
✔ Better performance
✔ Easy to scale
✔ Developer-friendly

⚠️ **What NOT to Do**
🚫 Don't fetch directly in components
🚫 Don't mix business logic with UI
🚫 Don't ignore caching
🚫 Don't write the same API call in multiple places

🔥 **Pro tip:** "Shortcuts work in a small project, but in a big project, structure is everything."

#NextJS #ReactJS #FrontendDevelopment #WebDevelopment #CleanCode #SoftwareArchitecture #TanStackQuery #Performance #ScalableApps
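The service-layer step of the flow above can be sketched in a few lines. Function and endpoint names are illustrative; the fetch client is injected so the layer stays testable without a real backend:

```javascript
// Service layer sketch: all API calls for one resource live in one module.
// Components never build URLs; they call these functions via a custom hook.
function createProductService(fetchJson, baseUrl = '/api') {
  return {
    getProducts: () => fetchJson(`${baseUrl}/products`),
    getProduct: (id) => fetchJson(`${baseUrl}/products/${id}`),
  };
}
```

A custom hook like `useProducts` would then wrap `service.getProducts` with React Query's `useQuery`, leaving the component with rendering only, which is exactly the UI Component → Hook → Service → API flow above.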
Choosing the Right State Management in React Shouldn’t Be a Headache 🤔

One question I often see developers struggle with: “Should we use Redux, Zustand, Context API, or TanStack Query?”

The reality is that there is no one-size-fits-all solution. The secret is matching the tool to the type of state you are managing. After working with React applications, I like to think of state in three categories 👇

1️⃣ Local UI State → useState
If the state belongs to a single component or a small component tree, keep it simple.
Examples:
• Modals
• Form inputs
• Toggle switches
• UI visibility
Best tool:
✔ useState
✔ useReducer (when logic becomes complex)
💡 Pro tip: avoid over-engineering. If it doesn’t need to be global, don’t make it global.

2️⃣ Shared UI State → Context / Zustand / Redux
When multiple distant components need the same data, a shared store makes sense.
Examples:
• Authentication data
• Theme settings
• Shopping cart
• Global UI preferences
Options:
• Context API – great for small apps, built into React
• Zustand / Jotai – lightweight, minimal boilerplate, perfect for medium complexity
• Redux Toolkit – still the gold standard for very large apps with complex state logic and debugging needs

3️⃣ Server State → TanStack Query (React Query)
One of the most common mistakes I still see:
🚫 Storing API responses in Redux
Server state is a different beast. It needs caching, background updates, and synchronization.
Examples:
• API responses
• Paginated lists
• User profiles
• Infinite scrolling feeds
Best tool:
✔ TanStack Query (React Query)
✔ SWR
Benefits:
• Automatic caching
• Background refetching
• Built-in loading and error states
• Easy pagination and infinite scrolling

💡 The “Golden Rule” for Most React Projects
• Component state → useState
• Complex local logic → useReducer
• Small shared state → Context API
• Medium shared state → Zustand / Jotai
• Large-scale apps → Redux Toolkit
• API / server data → TanStack Query / SWR

Final Thought
Modern React architecture isn’t about choosing one library and using it everywhere. It’s about using the right tool for the right job. In many healthy React applications today, you’ll often see a combination of useState, Zustand / Context, and TanStack Query. And that’s perfectly fine.

💬 Curious to hear from the community: what’s your go-to stack for state management in 2026? Are you Team Zustand, or still loyal to Redux? Let’s discuss below 👇

#React #Frontend #StateManagement #WebDevelopment #SoftwareArchitecture #Zustand #Redux #CodingTips
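The "medium shared state" option can be demystified with a Zustand-style store built from scratch; this is the core subscribe/setState idea, not Zustand's real API:

```javascript
// A Zustand-style shared store in a few lines: one state object, shallow
// merges on update, and listeners notified on every change. Illustrative only.
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();
  return {
    getState: () => state,
    setState(partial) {
      state = { ...state, ...partial }; // shallow merge, like set() in Zustand
      listeners.forEach((l) => l(state));
    },
    subscribe(listener) {
      listeners.add(listener);
      return () => listeners.delete(listener); // call to unsubscribe
    },
  };
}
```

In React, a hook would subscribe the component to this store and re-render on notification; the point is that "shared UI state" is just a plain object plus a listener set, which is why these libraries stay so small.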