Most Next.js developers are doing caching wrong. Here's what fetch() is actually doing behind the scenes, and how to control it. A thread on the Next.js cache, one of the most powerful (and confusing) features in the App Router.

THE DEFAULT BEHAVIOUR
When you use fetch() in a Server Component, Next.js 13/14 caches the response indefinitely by default (since Next.js 15, fetch defaults to uncached). This is called the Data Cache.

    // This is cached indefinitely
    const data = await fetch('https://lnkd.in/d7X_BG-a');

That means on every request, your users get a fast response: served from cache, not a live API call.

OPT OUT OF CACHING
But what if your data changes every second, like a live price feed or breaking news?

    // No cache — always fresh data
    const data = await fetch('https://lnkd.in/drnwrKAk', { cache: 'no-store' });

REVALIDATE ON A SCHEDULE
The sweet spot for most apps: revalidate every N seconds using ISR (Incremental Static Regeneration).

    // Rebuild cache every 60 seconds
    const data = await fetch('https://lnkd.in/dNaDTpU6', { next: { revalidate: 60 } });

ON-DEMAND REVALIDATION
Published a new blog post? Don't wait 60 seconds. Trigger a cache purge instantly with revalidatePath() or revalidateTag().

    import { revalidatePath } from 'next/cache';

    export async function publishPost() {
      // ...save to DB
      revalidatePath('/blog'); // 💥 purge cache instantly
    }

Quick reference:
→ force-cache = cached indefinitely
→ revalidate: N = ISR (rebuild every N seconds)
→ no-store = always fresh

Understanding the cache layers in Next.js is what separates junior devs from senior engineers. Save this. Your future self will thank you.

What's the trickiest caching bug you've ever debugged? Drop it below 👇

#NextJS #WebDevelopment #JavaScript #React #Frontend #Caching #AppRouter #SoftwareEngineering #100DaysOfCode
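To make the revalidate rule concrete, here is a toy model of time-based revalidation in plain Node.js. This sketches the freshness logic only, not Next.js internals (the real Data Cache serves the stale copy once while rebuilding in the background); createDataCache and the fake clock are invented for illustration:

```javascript
// Toy model of ISR-style revalidation: serve the cached value, and
// rebuild it once the entry is older than `revalidateSeconds`.
function createDataCache(fetcher, revalidateSeconds, now = Date.now) {
  let entry = null; // { data, storedAt }
  return async function get() {
    const ageSeconds = entry ? (now() - entry.storedAt) / 1000 : Infinity;
    if (ageSeconds > revalidateSeconds) {
      // Missing or stale: rebuild. (Next.js rebuilds in the background
      // and serves the stale copy once; we rebuild inline for clarity.)
      entry = { data: await fetcher(), storedAt: now() };
    }
    return entry.data;
  };
}

// Usage with a fake clock: the fetcher runs once, then the cached value
// is reused until the 60-second window has passed.
let calls = 0;
let fakeTime = 0;
const getPrice = createDataCache(async () => ++calls, 60, () => fakeTime);

(async () => {
  await getPrice();      // miss  -> fetch (calls = 1)
  await getPrice();      // fresh -> served from cache
  fakeTime = 61_000;     // advance past the 60s window
  await getPrice();      // stale -> refetch (calls = 2)
  console.log(calls);    // prints 2
})();
```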
Next.js caching: Default behavior, opt out, and revalidation
🔥 Stop Making Repeated API Calls in React! (Use Smart Caching)

As a Frontend Developer, one of the most common performance issues I've seen is:
👉 The same API getting called again and again

This not only slows down the app ⏳ but also increases server load 📉

💡 The solution? Client-side caching using TanStack Query (React Query)

✅ What is TanStack Query?
It's a powerful data-fetching library that automatically:
✔️ Caches API responses
✔️ Prevents duplicate API calls
✔️ Refetches data in the background
✔️ Manages loading & error states

💻 Example: Avoid Redundant API Calls

    import { useQuery } from "@tanstack/react-query";

    const fetchUsers = async () => {
      const res = await fetch("/api/users");
      return res.json();
    };

    export default function Users() {
      const { data, isLoading } = useQuery({
        queryKey: ["users"],       // unique cache key
        queryFn: fetchUsers,
        staleTime: 5 * 60 * 1000,  // data stays fresh for 5 mins
      });

      if (isLoading) return <p>Loading...</p>;
      return <div>{data.length} users</div>;
    }

🧠 How it works
- First API call → data stored in cache
- Next time → data served from cache (no API call) 🚀
- After staleTime → background refetch happens

⚡ Why I prefer this in production
👉 No need to manually manage cache
👉 Built-in request deduplication
👉 Improves performance drastically
👉 Cleaner & more scalable code

🎯 Pro Tip (Senior Level)
Use a proper queryKey structure plus a cache invalidation strategy for dynamic apps.

💬 Have you used React Query, or are you still managing API calls manually? Let's discuss 👇

#ReactJS #Frontend #Performance #WebDevelopment #JavaScript #ReactQuery #TanStackQuery #SoftwareEngineering
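The "built-in request deduplication" point is worth seeing in miniature. Here is a toy version of the idea in plain JavaScript (illustrative names, not React Query's internals): concurrent calls that share a key also share one in-flight promise, so the network is hit once.

```javascript
// Toy request deduplication: one in-flight promise per key.
const inflight = new Map();

function dedupedFetch(key, fetcher) {
  if (!inflight.has(key)) {
    // First caller starts the request; later callers reuse the promise.
    const promise = fetcher().finally(() => inflight.delete(key));
    inflight.set(key, promise);
  }
  return inflight.get(key);
}

// Usage: three concurrent calls, one actual fetch.
let hits = 0;
const load = () =>
  dedupedFetch('users', async () => {
    hits += 1; // simulated network call
    return ['ada', 'linus'];
  });

Promise.all([load(), load(), load()]).then(() => console.log(hits)); // prints 1
```

Once the promise settles, the entry is evicted, so the next call after that fetches again, which is roughly what a staleTime of 0 feels like.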
Last week I deployed what looked like a perfect product page. Then a client screenshot landed in my inbox…
👉 Prices from 3 days ago.

The database had the correct data. The API returned the correct data. But the page? ❌ Completely frozen in time.

🚨 What was actually happening?
The Next.js App Router silently overrides the native fetch() API. In Next.js 13/14, every request runs by default with:
👉 cache: 'force-cache'
(Next.js 15 flipped this default to uncached, but the cache layers below still apply.)

That means:
- Data is cached permanently
- Stored on disk
- And your HTTP Cache-Control headers are ignored 🤯

The real complexity: there isn't just one cache layer — there are four:
1. Request Memoization
2. Data Cache
3. Full Route Cache
4. Router Cache

👉 Which makes debugging stale data extremely tricky
👉 Especially when everything works fine locally

✅ How to fix it properly
✔ Always define your caching strategy explicitly
✔ Use revalidate for controlled updates
✔ Call revalidatePath() or revalidateTag() after mutations
✔ Use cache: 'no-store' only for real-time or user-specific data
✔ Tag your fetches — it's the most scalable approach

🔑 Key Takeaways
- Next.js fetch() defaulted to permanent caching (before v15)
- Dev mode does NOT reflect production caching behavior
- Stale data bugs usually appear after deployment
- Proper cache control = predictable apps

Bookmark this. Your future self will thank you when your client sends another screenshot. 🔖

💬 What's the worst caching bug you've faced in Next.js?

#NextJS #WebDev #React #TypeScript #JavaScript #Frontend #FullStack #SoftwareDevelopment #Programming #TechTips
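The "tag your fetches" advice maps to revalidateTag in Next.js. Here is a toy model of the mechanism in plain JavaScript so the behaviour is testable; the function names mirror the Next.js API but this is a sketch of the idea, not the framework's code:

```javascript
// Toy tag-based invalidation: each cache entry carries tags, and
// revalidating a tag evicts every entry that carries it. This models
// what Next.js revalidateTag does to entries created with
// fetch(url, { next: { tags: [...] } }).
const cache = new Map(); // key -> { data, tags }

function cacheSet(key, data, tags = []) {
  cache.set(key, { data, tags });
}

function cacheGet(key) {
  return cache.get(key)?.data; // undefined = miss, must refetch
}

function revalidateTag(tag) {
  for (const [key, entry] of cache) {
    if (entry.tags.includes(tag)) cache.delete(key);
  }
}

// Usage: updating a product purges every page tagged 'products',
// but leaves unrelated entries alone.
cacheSet('/products', ['chair', 'desk'], ['products']);
cacheSet('/products/1', 'chair', ['products']);
cacheSet('/about', 'about page');
revalidateTag('products');
console.log(cacheGet('/products'), cacheGet('/about')); // undefined 'about page'
```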
Most performance issues aren't fixed by rewriting your app. They're fixed by understanding where the bottleneck actually is. Here's the full-stack optimization map I follow:

Frontend (React)
→ Code splitting with React.lazy + Suspense — don't load what users don't need yet
→ Memoization (useMemo, useCallback, React.memo) — stop unnecessary re-renders
→ List virtualization with react-window — render 10k+ rows without killing the browser
→ State architecture matters: Zustand/Jotai over bloated Context trees
→ Bundle size: tree shaking + Vite + dynamic imports
→ Images: WebP, lazy loading, blur placeholders

Network & Caching Layer
→ React Query with proper staleTime/cacheTime — stop hammering your API on every focus
→ HTTP caching: Cache-Control, ETag, Last-Modified — let the browser do the work
→ CDN for static assets + Redis for API response caching
→ Cursor-based pagination > offset pagination (always)

Backend (Node.js)
→ DB indexes + query plans — most slow queries are just missing an index
→ N+1 problems? DataLoader. Full stop.
→ Promise.all / allSettled for async parallelism — don't await sequentially
→ gzip/brotli compression + streaming for large responses
→ Rate limiting + cluster mode to use all CPU cores
→ Connection pooling (pg-pool, Mongoose poolSize) — DB connections are expensive

The golden rule: Measure first. Optimize second.
React DevTools Profiler → Lighthouse → Node.js --prof
Then fix what the data tells you — not what you assume. Blind optimization is just expensive guessing.

#WebPerformance #ReactJS #NodeJS #SoftwareEngineering #FullStack #BackendDevelopment #FrontendDevelopment
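The "don't await sequentially" point from the backend list is the cheapest win here, and easy to demonstrate. A small Node.js sketch (the delay helper stands in for independent I/O calls): sequential awaits pay for each task in turn, while Promise.all starts them together and waits only for the slowest.

```javascript
// Two independent 50 ms "I/O calls": awaited one after the other they
// cost ~100 ms; started together with Promise.all they cost ~50 ms.
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

async function sequential() {
  const a = await delay(50, 'a'); // 50 ms
  const b = await delay(50, 'b'); // + another 50 ms
  return [a, b];
}

async function parallel() {
  // Both timers are started before either is awaited.
  return Promise.all([delay(50, 'a'), delay(50, 'b')]);
}

(async () => {
  let t = Date.now();
  await sequential();
  const seqMs = Date.now() - t;

  t = Date.now();
  await parallel();
  const parMs = Date.now() - t;

  console.log(`sequential ~${seqMs}ms, parallel ~${parMs}ms`);
})();
```

This only applies when the calls are independent; if the second request needs the first one's result, sequential awaiting is the correct shape.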
🚀 Next.js Caching — 5 Must-Know Points for Frontend Developers

If you're working with Next.js (App Router), caching is no longer a backend-only concern. It directly impacts performance, scalability, and user experience. Here are 5 key points every frontend engineer should understand 👇

1️⃣ Server-first Data Fetching
Next.js fetches data on the server by default, which means the HTML is generated before it reaches the browser.
👉 This reduces client-side JavaScript, improves initial load time, and ensures better SEO since search engines receive fully rendered content.

2️⃣ Multiple Caching Strategies (Control is in your hands)
Next.js lets you decide how fresh your data should be:
- Default caching → data is cached and reused for performance
- revalidate (ISR) → data is refreshed after a specific time interval
- no-store → data is fetched on every request (no caching)
👉 Choosing the right strategy is about balancing performance vs freshness.

3️⃣ UI Cache vs Data Cache (Understand the difference)
- Data Cache → only the API/DB response is cached; the UI still re-renders
- UI Cache → the entire rendered output (HTML) is cached
👉 UI caching gives the biggest performance gains, but must be used carefully to avoid showing stale or incorrect data.

4️⃣ Tag-based Cache Invalidation (Real-world game changer)
Using cacheTag and revalidateTag, you can invalidate cached data instantly when something changes.
👉 Example: when a product is updated in the admin panel, you can refresh all related pages immediately instead of waiting for cache expiry.
👉 This enables event-driven updates, which is crucial in real-world apps.

5️⃣ Think in Terms of Data Volatility (Senior-level mindset)
Instead of blindly choosing a strategy, classify your data:
- Static data (rarely changes) → aggressive caching
- Semi-dynamic data (updates occasionally) → ISR
- Real-time or user-specific data → no-store
👉 This approach helps you design scalable and efficient systems.

#NextJS #Frontend #WebDevelopment #Performance #JavaScript #Nextjs #Reactjs
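Point 5 can be encoded directly in code. A tiny helper sketch: classify each data source's volatility once, and map the tier to the matching fetch option. The tier names and the 5-minute ISR window are illustrative choices, not Next.js defaults; the returned option shapes are the real fetch options from points 2 and 4.

```javascript
// Map a volatility tier to the Next.js fetch caching option it implies.
function cacheOptionsFor(volatility) {
  switch (volatility) {
    case 'static':        // rarely changes -> aggressive caching
      return { cache: 'force-cache' };
    case 'semi-dynamic':  // updates occasionally -> ISR
      return { next: { revalidate: 300 } }; // 5-minute window (illustrative)
    case 'realtime':      // live or user-specific -> never cache
      return { cache: 'no-store' };
    default:
      throw new Error(`unknown volatility tier: ${volatility}`);
  }
}

// Usage (inside a Server Component):
// const res = await fetch('/api/products', cacheOptionsFor('semi-dynamic'));
```

Centralizing the decision this way means reviewers argue about the classification of the data, not about scattered per-fetch option literals.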
I used to wonder why my Node.js server slowed to a crawl under load. Then I truly understood the event loop. 💡 Here's what changed — with real examples.

🔷 THE CORE IDEA
Node.js runs on a single thread. But it handles thousands of concurrent operations without breaking a sweat. The secret? It never waits.
When Node.js hits an async operation — a DB call, an API request, a file read — it hands it off to the system and keeps moving. The event loop picks up the result when it's ready and executes the callback. This is why Node.js excels at I/O-heavy workloads.

🔷 WHERE IT WINS IN THE REAL WORLD
✅ High-concurrency APIs
Handling 10,000 simultaneous requests? Node.js doesn't spin up 10,000 threads. It processes each callback as responses arrive — lean and efficient.
✅ Real-time applications
Chat apps, live notifications, collaborative tools — all powered by the event loop's ability to handle thousands of WebSocket connections without a dedicated thread per user.
✅ Streaming data
Video streaming, large file transfers — Node.js streams chunks of data through the event loop continuously, keeping memory usage low.

🔷 WHERE IT BREAKS — AND HOW TO FIX IT
❌ CPU-intensive tasks on the main thread
Running image compression, PDF generation, or complex calculations synchronously blocks the event loop. Every other request waits.
→ Fix: Use worker_threads or offload to a background job queue.
❌ Deeply nested synchronous loops
A for loop processing 1 million records on the main thread starves the event loop.
→ Fix: Break work into async chunks or use streams.
❌ Misunderstanding setTimeout()
setTimeout(fn, 0) doesn't mean immediate. It means "after the current stack clears and when the event loop gets to it."
→ Fix: Use setImmediate() or process.nextTick() when execution order matters.

🔷 THE GOLDEN RULE
The event loop is the heartbeat of your Node.js server. Every millisecond you block it, every user feels it. Write async code. Offload heavy work. Understand the phases.

That's how you build Node.js apps that scale. 🚀

What's the most unexpected event loop issue you've debugged? I'd love to hear it 👇

#NodeJS #JavaScript #BackendDevelopment #EventLoop #WebPerformance #SoftwareEngineering #FullStackDevelopment #TechTips #AsyncProgramming #NodeJSDeveloper
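The setTimeout() point above is easy to verify yourself. A small Node.js demo (the demo wrapper is just for collecting results): synchronous code always finishes first, process.nextTick jumps ahead of all timers, and, a further gotcha, the relative order of setTimeout(fn, 0) and setImmediate is not guaranteed when they are scheduled from the main script.

```javascript
// Scheduling order demo for the Node.js event loop.
function demo() {
  return new Promise((resolve) => {
    const order = [];
    setTimeout(() => order.push('timeout'), 0);    // timers phase
    setImmediate(() => order.push('immediate'));   // check phase
    process.nextTick(() => order.push('nextTick')); // before any phase
    order.push('sync'); // current stack always completes first
    setTimeout(() => resolve(order), 20); // collect after both phases ran
  });
}

demo().then((order) => console.log(order.join(' -> ')));
// e.g. "sync -> nextTick -> timeout -> immediate"
// (the last two may swap when scheduled from the main script)
```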
I made our Node.js app ~70% faster using Worker Threads. No DB changes. No infra upgrades. Here's the full breakdown 👇

THE PROBLEM
Our data pipeline read large files, transformed records, and wrote to MongoDB. Time taken: 8–10 minutes. Users were complaining.
The instinct? Upgrade the server.
The real problem? Node.js was doing everything on ONE thread.

WHY NODE.JS GETS SLOW
Node.js runs on a single thread — the Event Loop. Great for I/O. But CPU-heavy tasks? They BLOCK everything. This is why async/await doesn't help with CPU work — it only helps with waiting.

THE FIX: WORKER THREADS
Worker Threads let you run JavaScript in parallel on separate threads. The approach:
→ Split 50,000 records into 4 chunks
→ Run each chunk in its own Worker
→ Use a Worker Pool to reuse threads (avoid spawning unlimited workers)
→ Merge results back in the main thread

THE RESULTS
Before: 8–10 minutes, Event Loop blocked, app unresponsive
After: 2–3 minutes, Event Loop free, ~70% faster

WHEN TO USE THEM
✅ Large data transformations
✅ Image/video processing
✅ Complex calculations (ML, encryption)
✅ File compression
❌ DB queries — use async/await
❌ HTTP requests — the Event Loop handles these fine
❌ Simple loops — the overhead isn't worth it

The key insight:
async/await = don't WAIT on I/O
Worker Threads = don't BLOCK on CPU
Most devs know the first. Few use the second — and that's where the real performance wins hide.

Have you used Worker Threads in production? Drop your use case below 👇

#ImmediateJoiner #NodeJS #JavaScript #WorkerThreads #BackendDevelopment #Performance #MERNFullStackDeveloper
Stop treating Inertia.js like a standard SPA. It's killing your scalability. 🔄

After 10 years in the Laravel and Vue.js ecosystem, I've realized that the "magic" of Inertia is where most architectural debt actually begins. It's incredibly easy to get started, but when you scale to thousands of simultaneous users, your middleware choices matter more than your framework features.

The most common architectural failure I see? Treating the HandleInertiaRequests middleware like a global dumping ground for shared data.

The "Senior" approach requires looking deeper into the stack:

Shared data bottlenecks: If you aren't aggressively using Lazy Props (Inertia::lazy) for heavy datasets or user-specific context, you are adding unnecessary overhead to every single request. You're effectively DDoS-ing your own API with every page visit.

State management: Real optimization is knowing exactly when a heavy computation belongs in a Laravel Resource (server-side) vs. a Vue computed property (client-side), to keep the browser's main thread free for interaction.

The "silent" failures: It's about utilizing Partial Reloads properly and implementing manual visit cancellations to prevent UI race conditions when users click faster than your server can respond.

I've found that building for the "happy path" is easy. Building for the "scale path" is where the real engineering happens.

Fellow Laravel/Vue devs: How are you handling massive shared data payloads in Inertia? Are you a fan of Lazy Props, or do you prefer keeping the middleware lean and using XHR for the heavy lifting? 👇

#Laravel #VueJS #InertiaJS #FullStackArchitecture #WebPerformance #SoftwareEngineering #PHP #WebDev
🚀 We stopped using Redux for server state — and built everything on TanStack Query instead. In a large Next.js 15 + React 19 app, this architecture scaled surprisingly well. Here's what worked 👇

🏭 Query factories > raw hooks
We wrapped useQuery / useInfiniteQuery in small factories.
→ Consistent query keys
→ Easy cache updates (setQueryData)
→ Simple invalidation
No more scattered queryKey arrays across the codebase.

💾 Selective cache persistence (not everything!)
Only important queries are saved to IndexedDB, using a marker in the key.
→ No bloated cache
→ Fully controlled persistence

♾️ Virtual + infinite scrolling (game changer)
We combined infinite queries with virtualization.
👉 The key idea: the virtualizer decides what to fetch — not the UI.
This made large tables and kanban boards feel instant, even with thousands of rows.

📊 Reusable table layer
Our table doesn't care about the data type. We inject a hook that returns paginated data.
→ The same table works for users, pipelines, or anything else
→ Clean separation of UI and data logic

🔄 Real-time updates without refetching
WebSocket events directly update the cache using setQueryData.
→ UI updates instantly
→ No polling
→ One single source of truth

🔑 Simple invalidation
We created a small utility with named invalidation helpers.
→ No one has to remember query keys
→ Mutations stay clean

💡 Big takeaway
Server state does NOT need Redux. TanStack Query already solves caching, syncing, and real-time updates — you just need to structure it well.

#TanStack #ReactQuery #NextJS #React #Frontend #WebDev #JavaScript #TypeScript
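The "no more scattered queryKey arrays" idea is usually implemented as a key factory. A small sketch (userKeys and its shape are illustrative, not this team's actual code): one module owns every key for a domain, so components and invalidation helpers never hand-write arrays that can drift apart.

```javascript
// Query-key factory: keys are built from each other, so broader keys
// are always prefixes of narrower ones.
const userKeys = {
  all: ['users'],
  lists: () => [...userKeys.all, 'list'],
  list: (filters) => [...userKeys.lists(), filters],
  detail: (id) => [...userKeys.all, 'detail', id],
};

// With TanStack Query you would then invalidate a whole family at once,
// because key matching is prefix-based:
//   queryClient.invalidateQueries({ queryKey: userKeys.lists() });

console.log(userKeys.detail(42)); // prints [ 'users', 'detail', 42 ]
```

The prefix structure is the point: invalidating userKeys.all hits every user query, while userKeys.list({ page: 2 }) targets exactly one.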
🚀 Why You Should Use React Query (TanStack Query) in Your Next Project

If you're still managing server state manually with useEffect + useState… you're making life harder than it needs to be. Here's why React Query is a game-changer 👇

🔹 1. Smart Data Fetching
React Query handles caching, background updates, and synchronization automatically — no need to write repetitive API logic.

🔹 2. Built-in Caching
Data is cached by default, which means a faster UI and fewer unnecessary API calls.

🔹 3. Automatic Refetching
It can refetch data in the background when the window refocuses or the network reconnects.

🔹 4. Easy Loading & Error States
No more manual flags — React Query gives you clean states like isLoading, isError, and isSuccess out of the box.

🔹 5. Pagination & Infinite Scroll
Handling pagination becomes super simple with built-in support.

🔹 6. Better Developer Experience
Cleaner code, less boilerplate, and improved maintainability.

💡 In short: React Query lets you focus on building features instead of managing server state.

Have you tried React Query yet? Share your experience 👇

#ReactJS #WebDevelopment #Frontend #JavaScript #Programming #Developers #Tech
If you're a React + Node.js + Express.js developer, one ecosystem you should know in 2026: TanStack.

It saves you from:
- Too many useEffects
- Multiple useStates for loading, error, and data
- Manual caching headaches
- Repeated boilerplate in every component

Before, my code looked like: useEffect + multiple useStates + copy-paste logic everywhere. Then I tried TanStack — and it changed my approach.

What you get:
⚡ TanStack Query: auto caching, loading, and error handling; less code, better performance
⚡ TanStack Router: type-safe routing, fewer runtime bugs
⚡ TanStack Table: built-in sorting, filtering, pagination
⚡ TanStack Start: full-stack capabilities without extra backend setup

The shift: stop thinking about how to fetch data, start thinking about what your app needs.

Link: https://lnkd.in/d5WEzUwr

Still writing custom fetch logic in 2026? Try TanStack Query. One weekend is enough.

#MERNStack #TanStack #ReactJS #JavaScript #WebDevelopment #NodeJS #MongoDB
Muhammad Faheem Hassan In larger apps like dashboards or SaaS tools, do you rely more on Next.js fetch caching or shift most of the responsibility to client-side caching like TanStack Query?