Next.js 16: Are you ready for the Async Request API and proxy.ts? 🛠️

Next.js 16 (2026) isn't just a performance bump: it's a syntax shift. If you are moving from v14 or v15, your routing logic needs an update to stay compatible with the React Compiler and Turbopack. Here are the three most critical code patterns you need to know:

1️⃣ The New Async Request API

In Next.js 16, params and searchParams are Promises. You can no longer access them synchronously. This allows the framework to prioritize rendering the static parts of your page while the dynamic data resolves.

// app/blog/[slug]/page.tsx
export default async function Page({
  params,
}: {
  params: Promise<{ slug: string }>;
}) {
  // ✅ Correct: you MUST await params
  const { slug } = await params;
  return <h1>Reading: {slug}</h1>;
}

2️⃣ Goodbye middleware.ts, Hello proxy.ts

Middleware has evolved into proxy.ts. It lives at the edge and handles "traffic control". It is strictly for routing logic, not for heavy data processing.

// src/proxy.ts
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export async function proxy(req: NextRequest) {
  const token = req.cookies.get('session');

  // Simple redirect logic
  if (!token && req.nextUrl.pathname.startsWith('/admin')) {
    return NextResponse.redirect(new URL('/login', req.url));
  }
  return NextResponse.next();
}

export const config = {
  matcher: ['/admin/:path*'],
};

3️⃣ The "use cache" Directive

Next.js 16 stabilizes Partial Prerendering. You can now mark specific functions or components to be cached independently of the rest of the route.

// components/PriceDisplay.tsx
export default async function PriceDisplay() {
  "use cache"; // 🚀 This component's output is now cached independently
  const price = await getLatestPrice();
  return <span>{price}</span>;
}

💡 Strategy Cheat Sheet:
• Use proxy.ts for auth guards, A/B testing, and geolocation rewrites.
• Use Parallel Routes (@slot) when you need independent loading states for a dashboard (e.g., a Sidebar and a Main Feed).
• Use Intercepting Routes ((..)) for "modals as routes", letting users share a URL that opens a specific item in a modal. (See the sketch below for the file layout.)

The Bottom Line: Next.js 16 pushes us toward a highly granular, asynchronous architecture. By awaiting your params and using proxy.ts correctly, you're making sure your app is ready for the React Compiler's aggressive optimizations.

Are you finding the transition to Async APIs smooth, or is it breaking your existing utility functions? Let's debug in the comments! 👇

#NextJS16 #WebDev #ReactJS #CodingTips #FullStack #Vercel #JavaScript #SoftwareArchitecture
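For the last two cheat-sheet items, here's a minimal sketch of how a parallel @modal slot and an intercepting route typically sit together on disk in the App Router. The folder names (@modal, photo/[id]) are hypothetical:

// Hypothetical app/ layout: a @modal slot plus an intercepting
// route for shareable photo modals.
//
// app/
//   layout.tsx          <- renders both {children} and {modal}
//   page.tsx            <- the feed
//   @modal/
//     default.tsx       <- returns null when no modal is active
//     (.)photo/[id]/
//       page.tsx        <- intercepts /photo/[id] on client nav -> modal
//   photo/[id]/
//     page.tsx          <- full page on hard navigation / shared URL

// app/layout.tsx (sketch): slot folders become props on the layout
export default function RootLayout({
  children,
  modal,
}: {
  children: React.ReactNode;
  modal: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body>
        {children}
        {modal}
      </body>
    </html>
  );
}

Soft navigation to /photo/123 renders the intercepted route inside the modal slot; a hard refresh or a shared link renders the full photo/[id] page.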
More Relevant Posts
𝗥𝗲𝗮𝗰𝘁 𝟭𝟵 𝗰𝗵𝗮𝗻𝗴𝗲𝗱 𝗲𝘃𝗲𝗿𝘆𝘁𝗵𝗶𝗻𝗴. ⚛️

If you haven't upgraded yet, here's what you're missing. The biggest React update since React 18.

━━━━━━━━━━
𝗥𝗲𝗮𝗰𝘁 𝗖𝗼𝗺𝗽𝗶𝗹𝗲𝗿 — 𝗡𝗼 𝗺𝗼𝗿𝗲 𝘂𝘀𝗲𝗠𝗲𝗺𝗼 🎯

The compiler is production-ready. It automatically optimizes your code.

Before:
const handleClick = useCallback(() => {
  console.log(user.name);
}, [user]);

After:
function handleClick() {
  console.log(user.name);
}

The compiler stabilizes functions automatically. You write normal code. React optimizes it.

━━━━━━━━━━
𝘂𝘀𝗲() — 𝗔𝘄𝗮𝗶𝘁 𝗣𝗿𝗼𝗺𝗶𝘀𝗲𝘀 𝗶𝗻 𝗰𝗼𝗺𝗽𝗼𝗻𝗲𝗻𝘁𝘀 🔄

import { use } from "react";

function UserProfile({ userPromise }) {
  const user = use(userPromise);
  return <h1>{user.name}</h1>;
}

No useState. No useEffect. Just read the Promise. React suspends until it resolves. Wrap in <Suspense> for the loading state. One caveat: don't create the Promise during render (use(fetchUser()) would kick off a new request on every render); create it outside the component or cache it. See the sketch at the end of this post.

━━━━━━━━━━
𝗦𝗲𝗿𝘃𝗲𝗿 𝗔𝗰𝘁𝗶𝗼𝗻𝘀 — 𝗡𝗼 𝗔𝗣𝗜 𝗿𝗼𝘂𝘁𝗲𝘀 🚀

"use server";

async function savePost(formData) {
  await db.post.create({ title: formData.get("title") });
}

<form action={savePost}>
  <input name="title" />
  <button>Save</button>
</form>

Backend code runs directly from the UI. No fetch. No API route. It just works.

━━━━━━━━━━
𝗡𝗲𝘄 𝗙𝗼𝗿𝗺 𝗛𝗼𝗼𝗸𝘀 📝

𝘂𝘀𝗲𝗙𝗼𝗿𝗺𝗦𝘁𝗮𝘁𝘂𝘀 — check if a form is submitting:

function SubmitButton() {
  const { pending } = useFormStatus();
  return (
    <button disabled={pending}>
      {pending ? "Saving..." : "Save"}
    </button>
  );
}

𝘂𝘀𝗲𝗢𝗽𝘁𝗶𝗺𝗶𝘀𝘁𝗶𝗰 — instant UI updates:

const [todos, addTodo] = useOptimistic(
  serverTodos,
  (current, newTodo) => [...current, newTodo]
);

The UI updates immediately. The real server action runs in the background.

━━━━━━━━━━
𝗦𝗲𝗿𝘃𝗲𝗿 𝗖𝗼𝗺𝗽𝗼𝗻𝗲𝗻𝘁𝘀 — 𝗦𝘁𝗮𝗯𝗹𝗲 🏗️

Default: Server Component
Opt-in: "use client" for interactivity

Why?
→ Less JavaScript shipped to the client
→ 38% faster initial load
→ Better SEO
→ Direct database access

Server Components handle data. Client Components handle UI.

━━━━━━━━━━
𝗗𝗼𝗰𝘂𝗺𝗲𝗻𝘁 𝗠𝗲𝘁𝗮𝗱𝗮𝘁𝗮 — 𝗕𝘂𝗶𝗹𝘁-𝗶𝗻 📄

function BlogPost() {
  return (
    <>
      <title>My Post</title>
      <meta name="description" content="..." />
      <link rel="canonical" href="..." />
      <h1>Content</h1>
    </>
  );
}

No more react-helmet. Works with Server Components.

━━━━━━━━━━
𝗖𝗼𝗻𝗰𝘂𝗿𝗿𝗲𝗻𝘁 𝗥𝗲𝗻𝗱𝗲𝗿𝗶𝗻𝗴 — 𝗗𝗲𝗳𝗮𝘂𝗹𝘁 ⚡

React pauses long renders to handle input. The UI stays responsive under heavy load.

Automatic batching now covers:
→ Promises
→ setTimeout
→ Native events

32% fewer re-renders in complex apps.

━━━━━━━━━━
𝗪𝗵𝗮𝘁 𝘁𝗵𝗶𝘀 𝗺𝗲𝗮𝗻𝘀 💡

React is now async-first. Server-side by default. Performance is automatic. The framework does the optimization. You focus on features.

📌 This is post [1/6] in my Frontend 2026 series. Next: Next.js 15 — what changed.

Have you upgraded to React 19 yet? 👇

#react #javascript #webdev #frontend #react19 #programming #webdevelopment #coding #reactjs #typescript
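To make that use() caveat concrete, here's a minimal, self-contained sketch. fetchUser and the /api/user endpoint are made up for illustration:

import { Suspense, use } from "react";

// Hypothetical fetcher; the endpoint is made up.
type User = { name: string };
async function fetchUser(): Promise<User> {
  const res = await fetch("/api/user");
  return res.json();
}

// Create the Promise once, outside render, so every render
// of UserProfile reuses the same in-flight request.
const userPromise = fetchUser();

function UserProfile() {
  // use() suspends this component until userPromise resolves.
  const user = use(userPromise);
  return <h1>{user.name}</h1>;
}

export default function App() {
  return (
    <Suspense fallback={<p>Loading…</p>}>
      <UserProfile />
    </Suspense>
  );
}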
Mastering useActionState and useOptimistic for instant React UI updates

useActionState gives us reliable async state handling, and useOptimistic gives us instant UI updates. In this post, we will build a LIKE button and prevent the backward-jump issue with batched updates using useActionState and useOptimistic. For the full implementation, watch the video.

🟦 React Form Action

We use a React form and the action prop to trigger the function.

<form className='mt-6' action={queueLikeAction}>
  <button type="submit" .../>
</form>

With a form action, we usually don't need startTransition for useActionState. In our case, though, we also trigger the action flow from useEffect, so we still use startTransition there.

🟦 queueLikeAction()

This function applies the optimistic update and queues the LIKE for batched server sync. In it, we use addOptimisticLike, the updater function returned by useOptimistic:

const [optimisticState, addOptimisticLike] = useOptimistic(
  likeState,
  (currentState, likeIncrement: number) => ({ ... }),
);

It also calls flushBatch(), which sends queued LIKEs in controlled batches. We'll look at that function later.

🟦 useEffect

This useEffect runs when isPending changes, which means a server request's status has changed.

useEffect(() => {
  if (!isPending && processingRef.current) {
    ...
    flushBatch();
  }
}, [flushBatch, isPending]);

Its core responsibility is to finalise the finished batch, reset state, and trigger flushBatch() for any remaining queued likes.

🟦 flushBatch() and useOptimistic

This is the core batching function: it sends queued LIKEs in one request while keeping the optimistic UI instant. Here we call runLikeAction, the dispatcher from useActionState:

const flushBatch = useCallback(() => {
  ...
  startTransition(() => {
    runLikeAction({ ... });
  });
}, [runLikeAction]);

We use useCallback to keep the function identity stable, but with the React Compiler, manual memoisation is often unnecessary, so useCallback could be removed.

🟦 useActionState

useActionState gives us confirmed state, an action dispatcher, and a pending status.

const [likeState, runLikeAction, isPending] = useActionState(
  likeAction,
  initialState,
);

It uses the likeAction() function to process each batch update and return the next confirmed state.

🟦 likeAction()

This function sends the batched LIKE update to the server, handles abort/error safely, and returns the next confirmed state.

async function likeAction(previousState: LikeState, payload: LikePayload) {
  if (payload.type === 'LIKE_BATCH') {
    ...
  }
  return previousState;
}

If you want your UI to feel instant without weird count jumps while syncing with the server, this pattern could give you a great starting point.

#frontend #react #optimistic
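As a companion to the fragments above, here's a stripped-down sketch of the same useActionState + useOptimistic pattern with the batching layer removed; sendLike is a hypothetical stand-in for the real server call:

"use client";

import { useActionState, useOptimistic } from "react";

// Hypothetical server call; replace with your real API / server action.
async function sendLike(current: number): Promise<number> {
  await new Promise((r) => setTimeout(r, 500)); // simulate latency
  return current + 1;
}

type LikeState = { count: number };

async function likeAction(prev: LikeState): Promise<LikeState> {
  const confirmed = await sendLike(prev.count);
  return { count: confirmed };
}

export default function LikeButton() {
  // Confirmed state + dispatcher + pending flag.
  const [likeState, runLikeAction, isPending] = useActionState(likeAction, {
    count: 0,
  });

  // Optimistic overlay: bumps the count instantly, then reconciles
  // with likeState once the action settles.
  const [optimistic, addOptimisticLike] = useOptimistic(
    likeState,
    (current, increment: number) => ({ count: current.count + increment }),
  );

  return (
    <form
      action={() => {
        addOptimisticLike(1); // instant UI update
        runLikeAction(); // confirmed update in the background
      }}
    >
      <button type="submit">
        👍 {optimistic.count} {isPending ? "(syncing…)" : ""}
      </button>
    </form>
  );
}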
🚀 New Tool for Developers: JSON Compare

Ever struggled to find differences between two JSON files? Try this: https://lnkd.in/ggfgzern

✔ Side-by-side comparison
✔ Highlights changes instantly
✔ Works with large JSON payloads
✔ 100% free & browser-based

Perfect for debugging APIs and validating responses.

#json #developers #webdev #api #programming
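And if you ever need a quick structural diff in code (say, inside a test) rather than in the browser, a minimal recursive sketch could look like this; it's illustrative only, not how the tool above works:

// Minimal recursive JSON diff: returns dot-paths where a and b differ.
type Json = null | boolean | number | string | Json[] | { [k: string]: Json };

function diffPaths(a: Json, b: Json, path = "$"): string[] {
  if (a === b) return [];
  if (
    typeof a !== "object" || a === null ||
    typeof b !== "object" || b === null
  ) {
    return [path]; // leaf-level difference (or type mismatch)
  }
  const ao = a as Record<string, Json>;
  const bo = b as Record<string, Json>;
  const keys = new Set([...Object.keys(ao), ...Object.keys(bo)]);
  const out: string[] = [];
  for (const key of keys) {
    // Treats a missing key and an explicit null as equal; fine for a sketch.
    out.push(...diffPaths(ao[key] ?? null, bo[key] ?? null, `${path}.${key}`));
  }
  return out;
}

console.log(
  diffPaths(
    { id: 1, user: { name: "Ana" } },
    { id: 1, user: { name: "Bo", admin: true } },
  ),
); // -> ["$.user.name", "$.user.admin"]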
it's one thing to read the docs, but it's another to watch your p99 latencies explode in production because a frontend dev decided to "explore the graph"

from my experience at the raw engineering level, graphql usually becomes a liability the second you scale past a simple todo app. here is the low-level reality of why:

the hidden "alloc" nightmare

1. in high-performance systems (think 144fps rendering or high-throughput backends), allocations are the enemy. graphql is an allocation machine. every single field in your query requires a resolver execution. in runtimes like node or python, this means creating thousands of tiny promise objects and function contexts.

2. i've seen profiles where 30-40% of the cpu time isn't even spent fetching data. it's just the engine walking the ast and managing the overhead of the execution tree. it's the definition of "slop".

dataloader feels like a band-aid on a broken leg

people say "just use dataloader for n+1", but the reality is different: dataloader waits for a tick of the event loop to batch requests, which adds artificial latency to every request. it's a high-level fix for a problem that wouldn't exist if you just used a disciplined sql join (sketch of the mechanism below).

i'm a react fanboy like many people here, but i'm not react-biased; most of my takes are at ( https://lnkd.in/gsqT4_Q3 ). let me briefly show why rsc wins, from first principles:

graphql pipeline: server parses the query string -> server resolves fields -> server serializes to json -> client parses json -> client normalizes the cache -> client reconciles the ui

rsc flight protocol (the protocol that made rsc possible): server executes the component -> server streams already-serialized ui chunks -> client feeds the stream straight into the fiber reconciler

the client doesn't need a heavy apollo/urql cache; it just takes the stream. you bypass the entire "json parsing" and "cache normalization" bottleneck that makes graphql apps feel heavy and slow.

to be clear, i don't hate graphql because it's bad. facebook built it for facebook's products, and for more than a decade engineers have been pushed to treat it as universal. it isn't. it's an organizational tool marketed as a universally better, faster one. and yes, react server components also come from meta, but that doesn't make graphql the universal solution.

if you want peak performance, peak engineering, and less bloat, go back to the roots: tightly defined rpcs, binary serialization (protobuf, flatbuffers), and direct mechanical sympathy with your data layer.
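for anyone who hasn't looked inside it, here's roughly the mechanism being criticized: a hand-rolled sketch of tick-based batching, not the actual dataloader source. loadUsersByIds is a made-up stand-in for a real query:

// A minimal batch loader: collects keys for one microtask tick,
// then fires a single batched fetch. Batching only happens because
// every caller waits for the end of the tick.
function createBatchLoader<K, V>(batchFn: (keys: K[]) => Promise<V[]>) {
  let queue: { key: K; resolve: (v: V) => void; reject: (e: unknown) => void }[] = [];

  return function load(key: K): Promise<V> {
    return new Promise<V>((resolve, reject) => {
      queue.push({ key, resolve, reject });
      if (queue.length === 1) {
        // First caller in this tick schedules the flush.
        queueMicrotask(async () => {
          const batch = queue;
          queue = [];
          try {
            const values = await batchFn(batch.map((item) => item.key));
            batch.forEach((item, i) => item.resolve(values[i]));
          } catch (err) {
            batch.forEach((item) => item.reject(err));
          }
        });
      }
    });
  };
}

// Hypothetical data source standing in for a real SQL query.
async function loadUsersByIds(ids: number[]): Promise<string[]> {
  console.log("one batched query for ids:", ids);
  return ids.map((id) => `user-${id}`);
}

const loadUser = createBatchLoader(loadUsersByIds);
// Three resolver calls in the same tick -> one query.
Promise.all([loadUser(1), loadUser(2), loadUser(3)]).then(console.log);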
𝗥𝗲𝗮𝗰𝘁.𝗷𝘀 𝗣𝗮𝗿𝘁 𝟰 (𝟮𝟬𝟮𝟲): 𝗔𝗱𝘃𝗮𝗻𝗰𝗲𝗱 𝗛𝗼𝗼𝗸𝘀 𝗗𝗲𝗲𝗽 𝗗𝗶𝘃𝗲 🔥

Hi everyone! 👋 In Part 3, we covered the core hooks: useState, useEffect, and useRef. Today, let's master the 𝗔𝗱𝘃𝗮𝗻𝗰𝗲𝗱 𝗛𝗼𝗼𝗸𝘀 that separate good React devs from great ones.

1) useReducer (Complex State Logic)

𝗪𝗵𝗮𝘁 𝗶𝘁 𝗱𝗼𝗲𝘀: Manages state with a reducer function, like useState on steroids. Think of it as a traffic controller: actions go in, state updates are predictable.

Key points:
• Best for multiple related state values
• Logic lives outside the component (easier testing)
• dispatch(action) instead of setState(value)
• Pairs well with useContext for global state (see the sketch at the end of this post)

📌 Examples: shopping cart, large forms, multi-step flows, undo/redo.

𝘂𝘀𝗲𝗦𝘁𝗮𝘁𝗲 𝘃𝘀 𝘂𝘀𝗲𝗥𝗲𝗱𝘂𝗰𝗲𝗿
• useState → simple values
• useReducer → complex transitions

2) useMemo (Expensive Computation Caching)

𝗪𝗵𝗮𝘁 𝗶𝘁 𝗱𝗼𝗲𝘀: Memoizes a computed value and recalculates only when dependencies change. Like a calculator with memory: don't redo work unnecessarily.

Use it for:
1. Expensive computations (filtering/sorting big lists)
2. Stable derived values
3. Preventing unnecessary recalculations

Don't overuse it:
• It has overhead
• Profile first, optimize second

📌 Examples: filtering 10k items, computing totals, chart data prep.

3) useCallback (Stable Function References)

𝗪𝗵𝗮𝘁 𝗶𝘁 𝗱𝗼𝗲𝘀: Returns a memoized function reference between renders.

Why it matters: a new function every render = child re-render. Use it when passing callbacks to memoized components.

𝘂𝘀𝗲𝗠𝗲𝗺𝗼 𝘃𝘀 𝘂𝘀𝗲𝗖𝗮𝗹𝗹𝗯𝗮𝗰𝗸:
• useMemo → caches a value
• useCallback → caches a function

📌 Examples: onClick handlers, API functions passed as props, debounced handlers.

4) useContext (Global State Without Prop Drilling)

Lets components access shared data without passing props through every level. Think Wi-Fi: it connects without cables.

3-step pattern:
1. createContext()
2. Provider
3. useContext()

Common uses:
• Theme
• Auth state
• Language
• Feature flags

Tip: Combine with useReducer for a lightweight global store.

𝗣𝗲𝗿𝗳𝗼𝗿𝗺𝗮𝗻𝗰𝗲 𝗧𝗶𝗽𝘀
• React.memo — skip re-renders if props don't change
• useCallback + React.memo — stable props
• useMemo — skip heavy recalculations
• React.lazy() + Suspense — code splitting
• Use stable, unique keys in lists
• 𝗚𝗼𝗹𝗱𝗲𝗻 𝗿𝘂𝗹𝗲: 𝗠𝗲𝗮𝘀𝘂𝗿𝗲 𝗯𝗲𝗳𝗼𝗿𝗲 𝗼𝗽𝘁𝗶𝗺𝗶𝘇𝗶𝗻𝗴.
• 𝗥𝗲𝗮𝗰𝘁 𝗗𝗲𝘃𝗧𝗼𝗼𝗹𝘀 𝗣𝗿𝗼𝗳𝗶𝗹𝗲𝗿 𝗶𝘀 𝘆𝗼𝘂𝗿 𝗯𝗲𝘀𝘁 𝗳𝗿𝗶𝗲𝗻𝗱.

Next post: Custom Hooks — Building Reusable Logic (useFetch, useDebounce, useLocalStorage, useForm)

#ReactJS #JavaScript #ReactHooks #FrontendDevelopment #LearnReact
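Since the post stays conceptual, here's a minimal sketch of the useReducer + useContext "lightweight global store" pattern mentioned in section 4; the counter domain is just a placeholder:

import { createContext, useContext, useReducer } from "react";
import type { Dispatch, ReactNode } from "react";

// Placeholder domain: a counter. Swap in your own state/actions.
type State = { count: number };
type Action = { type: "increment" } | { type: "reset" };

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "increment":
      return { count: state.count + 1 };
    case "reset":
      return { count: 0 };
  }
}

const StoreContext = createContext<{
  state: State;
  dispatch: Dispatch<Action>;
} | null>(null);

// 1. The Provider wraps the tree once...
export function StoreProvider({ children }: { children: ReactNode }) {
  const [state, dispatch] = useReducer(reducer, { count: 0 });
  return (
    <StoreContext.Provider value={{ state, dispatch }}>
      {children}
    </StoreContext.Provider>
  );
}

// 2. ...and any descendant reads it without prop drilling.
export function useStore() {
  const store = useContext(StoreContext);
  if (!store) throw new Error("useStore must be used inside StoreProvider");
  return store;
}

function Counter() {
  const { state, dispatch } = useStore();
  return (
    <button onClick={() => dispatch({ type: "increment" })}>
      Clicked {state.count} times
    </button>
  );
}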
Stephen Toub wrote a 230-page blog post about .NET 10 performance.

The work behind it is too good to ignore: 300+ pull requests (25% from the community) touching the JIT, standard libraries, LINQ, regex, and more.

1. ZERO-ALLOCATION OBJECT CREATION

.NET 10's JIT compiler now performs escape analysis. If an object never leaves its method (not stored, returned, or referenced elsewhere), the JIT stack-allocates it instead of putting it on the GC heap.

A Stopwatch benchmark: 40 bytes allocated on .NET 9, zero on .NET 10. Delegates dropped from 19.5 ns to 6.7 ns. That's a 66% speedup you get by recompiling. No code changes.

2. COLLECTION ENUMERATION IS 5-12x FASTER

This is the big one. Every .NET app enumerates collections. When you foreach through IEnumerable<T>, the compiler generates an enumerator with virtual dispatch and a try/finally block. Before .NET 10, the JIT couldn't inline try/finally. Now it can. Combined with escape analysis, the results are striking:

- Array via IEnumerable: 500 ns (.NET Framework) → 190 ns (.NET 9) → 40 ns (.NET 10)
- ConcurrentDictionary: 1,600 ns → 900 ns → 140 ns
- All .NET 10 results: zero bytes allocated

Stack, Queue, List — same story across the board.

3. LINQ SKIPS WORK IT DOESN'T NEED TO DO

OrderBy(...).Contains() on a million elements:
- .NET 9: 83 milliseconds (full sort, then check)
- .NET 10: 10-20 nanoseconds (skip the sort entirely)

The runtime now passes information between LINQ operators. Contains doesn't need sorted input, so .NET 10 doesn't sort. Reverse().Contains() skips the 8 MB copy because order doesn't affect existence.

Also new: built-in Shuffle, LeftJoin, and RightJoin. Left joins have been a missing LINQ method for over a decade.

4. REGEX PATTERNS THAT TOOK 24 ms NOW TAKE 40 ns

Two improvements here. First, the engine converts more greedy loops to atomic loops. It now understands Unicode category overlap, so \w+ followed by a math symbol becomes atomic. Zero backtracking.

Second, the engine lifts anchors out of lookaheads. The pattern (?=^)hello used to scan the full input. Now it only checks position zero.

Against 3.5 MB of Mark Twain: 24 ms on .NET Framework, 2 ms on .NET 9, 40 nanoseconds on .NET 10. A 600,000x improvement.

5. SIMD-POWERED BIT OPERATIONS

The new CollectionsMarshal.AsBytes() exposes BitArray internals as Span<byte>. Pair it with TensorPrimitives and you get SIMD-accelerated operations where you used to iterate bit by bit.

Hamming distance on 100 bits: 500 ns on Framework, 160 ns on .NET 9, 10 ns on .NET 10. Fifty times faster.

These gains compound across releases. Try/finally inlining in .NET 10 only works because .NET 9 improved devirtualization. That only worked because .NET 8 improved guarded devirtualization. Each release builds on the last.

Full breakdown in the post below 👇
https://lnkd.in/gPy_n6zS

#dotnet #csharp #performance #softwareengineering
⚙️ Worker Threads vs Clustering in Node.js (When to Use What?)

Node.js is powerful, but it runs your JavaScript on a single thread. So how do we handle:
❌ CPU-heavy tasks
❌ Multi-core usage
❌ High scalability

Node.js gives us two powerful solutions:
👉 Clustering
👉 Worker Threads

Let's break it down 👇

⚡ 1️⃣ Clustering (Multi-Process Scaling)

Clustering lets you create multiple Node.js processes. Each process can run on a separate CPU core.

🔁 How it works:
Master process
⬇
Multiple worker processes
⬇
Each handles incoming requests

✅ Best for:
✔ Handling high traffic
✔ Scaling APIs
✔ Load balancing across cores

🧠 2️⃣ Worker Threads (Multi-Threading)

Worker threads let you run CPU-heavy tasks in parallel threads. Instead of blocking the event loop, the work is offloaded.

🔁 How it works:
Main thread
⬇
Worker thread
⬇
Executes the heavy computation

✅ Best for:
✔ Image processing
✔ Data parsing
✔ CPU-intensive tasks

⚖️ Clustering vs Worker Threads

Clustering:
✔ Multi-process
✔ Handles requests
✔ Improves scalability

Worker Threads:
✔ Multi-threaded
✔ Handles heavy computation
✔ Prevents event loop blocking

📊 Real Insight: most production systems use both together.
✔ Clustering → handle traffic
✔ Worker Threads → handle heavy tasks
(See the sketch below for a minimal worker_threads example.)

💡 Final Thought: scaling Node.js is not about one technique. It's about choosing the right tool for the right problem.

Have you used Worker Threads or Clustering in your projects? Which worked better for you? Let's discuss 👨💻

#NodeJS #JavaScript #BackendDevelopment #SystemDesign #Scalability #WorkerThreads #Clustering #PerformanceOptimization #TechTips #SoftwareEngineering #Microservices #DevOps
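Here's the promised worker_threads sketch: two small modules, with a deliberately slow fibonacci standing in for any CPU-heavy task. The file names are hypothetical, and it assumes a runtime that executes TypeScript directly (tsx, or Node's type stripping); otherwise, point the Worker at the compiled .js file:

// worker.ts: runs on its own thread, so the heavy loop
// never blocks the main event loop.
import { parentPort, workerData } from "node:worker_threads";

function fib(n: number): number {
  return n < 2 ? n : fib(n - 1) + fib(n - 2); // deliberately CPU-heavy
}

parentPort?.postMessage(fib(workerData as number));

// main.ts: offloads the computation and keeps serving events.
import { Worker } from "node:worker_threads";

function fibInWorker(n: number): Promise<number> {
  return new Promise((resolve, reject) => {
    const worker = new Worker(new URL("./worker.ts", import.meta.url), {
      workerData: n,
    });
    worker.once("message", resolve);
    worker.once("error", reject);
  });
}

fibInWorker(40).then((result) => console.log("fib(40) =", result));
console.log("main thread stays free while the worker crunches");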
POST 5 — Advanced Script Setup Patterns 💡

<script setup> is more than syntactic sugar. It changes what's possible at compile time. Here are 6 advanced patterns that experienced Vue 3 devs use 👇

─────────────────────────
1. defineModel() — two-way binding without the boilerplate

Before: you had to define a prop, emit an update event, and wire them together manually. Now: defineModel() does all of that in one line. It returns a ref you can read and write directly. Vue handles the emit automatically. (See the sketch at the end of this post.)

─────────────────────────
2. defineExpose() — controlling your component's public API

By default, nothing in <script setup> is accessible from a parent via template ref. defineExpose() lets you explicitly publish specific methods or values. This is how you build components with imperative APIs — focus, scroll, reset — without breaking encapsulation.

─────────────────────────
3. useTemplateRef() — the modern way to reference DOM elements

The old way was declaring a ref with the same name as the template ref attribute. Implicit. Confusing. Easy to break. useTemplateRef('myInput') is explicit, typed, and works cleanly with TypeScript.

─────────────────────────
4. Compiler macros don't need imports

defineProps, defineEmits, defineModel, defineExpose, withDefaults — none of these need to be imported. They are compiler macros. They exist at compile time, not runtime. If you're importing them, your tooling setup is misconfigured.

─────────────────────────
5. Generic components with TypeScript

<script setup lang="ts" generic="T"> unlocks fully typed generic components. Build type-safe list components, select dropdowns, and data tables where the item type flows from parent to child automatically. No more casting to any. No more losing type information at the boundary.

─────────────────────────
6. Top-level await in setup()

You can use await directly in <script setup> without any wrapping. The component automatically becomes an async component. Pair this with Suspense for clean data-fetching patterns without a single manual loading flag.

─────────────────────────
The mindset shift: <script setup> isn't just less code. It's a different compilation model. Understanding what the compiler does with it unlocks patterns that feel impossible with the Options API.

─────────────────────────
Which of these did you not know about? Drop it in the comments 👇

#Vue3 #TypeScript #JavaScript #FrontendDevelopment #WebDev
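Here's the promised defineModel() sketch: a hypothetical TextInput component, assuming Vue 3.4+ where defineModel is stable:

<!-- TextInput.vue: a hypothetical child component -->
<script setup lang="ts">
// defineModel() replaces the old modelValue prop +
// 'update:modelValue' emit pair with a single writable ref.
const model = defineModel<string>({ default: "" });
</script>

<template>
  <!-- writing to `model` emits the update for us -->
  <input v-model="model" />
</template>

<!-- Parent usage: -->
<!-- <TextInput v-model="username" /> -->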
🚀 .NET C#: var vs Explicit Types

The var keyword (introduced in C# 3.0) allows implicit typing: the compiler determines the type from the right-hand side of the assignment. The type is determined at compile time.

var name = "Nadeem";
var age = 26;
var balance = 99.99;

The compiler automatically understands:
name → string
age → int
balance → double

So internally it becomes:

string name = "Nadeem";
int age = 26;
double balance = 99.99;

⚙️ The Technical Truth

First, let's bust a myth: there is NO performance difference. Whether you write string name = "Nadeem"; or var name = "Nadeem";, the compiled IL (Intermediate Language) is identical.

❌ You cannot declare a var without assigning it a value immediately:

var user; // Compiler error
user = "Nadeem";

Because var is not a dynamic type, the compiler needs to look at the right-hand side of the = sign at that exact moment to decide what the type is.

👉 So the real question is: is using var a good practice or a bad practice?

✅ When var is a GOOD practice (use it!)

The general rule: use var when the type is obvious from the assignment.

👉 When the type is obvious:
var users = new List<UserAccount>();

👉 Refactoring resilience (maintenance perspective):
You have a method GetUsers() that currently returns a List<string>. You use it in 10 different places. You decide to change the return type to HashSet<string> (for faster lookups).
● If you used var users = GetUsers(), you change the method once and you are done.
● If you used List<string> users = GetUsers(), you now have 10 compiler errors to fix manually. Explicit types require manual updates everywhere.

👉 Anonymous types (you must use var here):
var user = new { Id = 1, Role = "Admin" };
No explicit type exists for this.

👉 To simplify complex generic types:
var groupedData = dictionary.GroupBy(x => x.Key);

👉 Cleaner LINQ queries:
When working with LINQ, types can become incredibly complex (e.g., nested groupings). var keeps these queries readable:
var users = dbContext.Users.Where(u => u.IsActive).ToList();

⚠️ When var is a BAD practice (when the type is unclear):

var data = GetData();

What is data? List? String? Object? Not clear. Better:

List<User> data = GetData(); // instantly readable

🎯 Best Practice Rule
✔ Use var when the type is obvious
✔ Avoid var when the type becomes obscure

Clean code is about readability first.

#Csharp #DotNet #CleanCode #SoftwareEngineering #ProgrammingTips #CodingStandards #Programming #BackendDeveloper #Coding #SoftwareDevelopment
Day 91 of me reading random, basic, but important dev topics...

Yesterday I read about how to capture File objects. Today, I read about how to actually look inside them. Enter: the FileReader API.

FileReader is a built-in object with one sole purpose: reading data from Blob (and File) objects asynchronously. Because reading from disk can take time, it delivers the data using an event-driven model. Here is the complete breakdown of how to wield it.

The 3 Core Reading Methods

The method we choose depends entirely on what we plan to do with the data:

1. readAsText(blob, [encoding]) - perfect for parsing CSVs or text files into a string.
2. readAsDataURL(blob) - reads the binary data and encodes it as a base64 data URL. (Ideal for immediately previewing an uploaded <img> via its src attribute; see the sketch below.)
3. readAsArrayBuffer(blob) - reads data into a binary ArrayBuffer for low-level byte manipulation.

(Note: you can cancel any of these operations mid-flight by calling reader.abort().)

The Event Lifecycle

As the file reads, FileReader emits several events. The most common are load (success) and error (failure), but we also have access to:
* loadstart (started)
* progress (fires continuously during the read)
* loadend (finished, regardless of success/fail)

let reader = new FileReader();
reader.readAsText(file);

reader.onload = () => console.log("Success:", reader.result);
reader.onerror = () => console.log("Error:", reader.error);

The Fast Track: if your only goal is to display an image or generate a download link, skip FileReader entirely! Use URL.createObjectURL(file). It generates a short, temporary URL instantly, without reading the file contents into memory.

Web Workers: dealing with massive files? You can use FileReaderSync inside Web Workers. It reads files synchronously (returning the result directly, without events) without freezing the main UI thread!

Keep learning!!!!!

#JavaScript #WebAPI #FrontendDev #WebArchitecture #Coding
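To make the preview use case concrete, here's a minimal sketch of both approaches side by side; the #file-input and #preview element IDs are made up:

// Assumes <input type="file" id="file-input"> and <img id="preview">.
const input = document.querySelector<HTMLInputElement>("#file-input")!;
const preview = document.querySelector<HTMLImageElement>("#preview")!;

input.addEventListener("change", () => {
  const file = input.files?.[0];
  if (!file) return;

  // Option 1: FileReader + data URL (reads the whole file into memory).
  const reader = new FileReader();
  reader.onload = () => {
    preview.src = reader.result as string; // base64 data: URL
  };
  reader.readAsDataURL(file);

  // Option 2: the fast track, no read at all, just a temporary blob: URL.
  // preview.src = URL.createObjectURL(file);
  // (Call URL.revokeObjectURL(preview.src) later to free it.)
});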