If you're still using JSON.parse on a 50MB API response, you're blocking the main thread and silently hurting your app's performance. JSON.parse is synchronous: it needs the entire payload in memory before you can touch a single byte. For large datasets, that's a guaranteed bottleneck.

The fix? Stream-parse it instead. With the Web Streams API and a streaming JSON parser like @streamparser/json, you can process data as it arrives:

```javascript
import { JSONParser } from '@streamparser/json';

const parser = new JSONParser();
parser.onValue = ({ value }) => console.log(value);

fetch('/api/large-data')
  .then(res => res.body.pipeThrough(new TextDecoderStream()))
  .then(stream => stream.pipeTo(new WritableStream({
    write(chunk) { parser.write(chunk); }
  })));
```

This approach lets you start processing records before the full payload even lands.

Practical takeaway: if your payload exceeds 1MB, streaming should be your default, not your fallback. Most developers reach for JSON.parse out of habit, not necessity. The tooling to do better has been available for years.

Are you stream parsing in production, or is JSON.parse still your go-to?

#JavaScript #WebDevelopment #Performance #WebStreams #FrontendEngineering #JSOptimization
Ditch JSON.parse for Streaming JSON Parsing in Large Datasets
JSON.stringify for deep comparison is quietly breaking your apps - and most developers don't even notice.

The problem? JSON.stringify serializes keys in insertion order, so JSON.stringify({a: 1, b: 2}) and JSON.stringify({b: 2, a: 1}) return different strings for logically identical objects - causing false cache misses, unnecessary re-renders, and subtle state bugs.

Stable hashing solves this. Libraries like object-hash or fast-stable-stringify serialize keys in a consistent, sorted order before hashing. Here's a quick example:

```javascript
import stableStringify from 'fast-stable-stringify';

const a = { b: 2, a: 1 };
const b = { a: 1, b: 2 };

stableStringify(a) === stableStringify(b); // true - always
```

Compare that to JSON.stringify, where the same check silently returns false depending on how your objects were constructed.

Practical takeaway: anywhere you use stringified objects as cache keys, memoization dependencies, or comparison tokens, swap in a stable serializer. It costs almost nothing and eliminates an entire class of hard-to-reproduce bugs.

Are you still using JSON.stringify for comparisons in production, or have you already moved to something more reliable?

#JavaScript #WebDevelopment #Frontend #Performance #CleanCode #JSPatterns
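Under the hood, a stable serializer is mostly "sort the keys before writing them out." A minimal sketch of the idea (illustrative only - use the library in production, since this version skips edge cases like undefined values and cycles):

```javascript
// Minimal stable stringify: recursively serialize objects with sorted keys,
// so insertion order can no longer change the output. A sketch of the
// technique, not fast-stable-stringify's actual implementation.
function stableStringifySketch(value) {
  if (Array.isArray(value)) {
    return "[" + value.map(stableStringifySketch).join(",") + "]";
  }
  if (value !== null && typeof value === "object") {
    const body = Object.keys(value)
      .sort()
      .map((k) => JSON.stringify(k) + ":" + stableStringifySketch(value[k]))
      .join(",");
    return "{" + body + "}";
  }
  return JSON.stringify(value); // primitives: string, number, boolean, null
}

// Same logical object, different construction order - identical output:
stableStringifySketch({ b: 2, a: 1 }) === stableStringifySketch({ a: 1, b: 2 }); // true
```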
This week, I blamed the API 3 times. I was wrong twice.

We had inconsistent data across screens. My first reaction: "API issue."

Turned out:
• I was mutating cached data locally
• I was sending different params from two components to the same endpoint
• The "slow API" was actually my re-render logic making it feel slow

One case actually was backend. But by then, I'd already lost confidence in my own assumptions.

What hit me: I wasn't debugging the system. I was defending my layer.

Now I check:
• request payload
• response shape
• state flow
• timing

…before pointing fingers.

Blaming the API is easy. Proving it's not your code is harder.

Anyone else been here? 👇

#ReactJS #Debugging #FrontendDevelopment #WebDevelopment #SoftwareEngineering #FrontendArchitecture
For the longest time, my pattern looked like this:
• useEffect → call API
• useState → store data
• loading + error states manually handled

It worked… until the app grew. Then came the problems:
• duplicate API calls
• inconsistent loading states
• manual caching (or no caching at all)
• refetching logic scattered everywhere

That's when I switched to React Query. What changed?

• Server state ≠ UI state - React Query made this distinction clear.
• Caching became automatic - data stays fresh without unnecessary refetching.
• Background updates - the UI stays responsive while data syncs silently.
• Built-in loading & error handling - no more boilerplate in every component.
• Declarative refetching - no longer tied to lifecycle hacks.

The biggest mindset shift:
Stop thinking "Where should I fetch this data?"
Start thinking "How should this data behave over time?"

Final takeaway: React Query is not just a library. It's a different way of thinking about data on the frontend. And once you get it, going back to useEffect feels… painful 😅

#reactjs #frontend #javascript #webdevelopment #reactquery #softwareengineering
Stop writing useEffect for data fetching - TanStack Query does it better.

If you're still using useEffect + useState to fetch data in React, you're writing more code than you need to.

Here's the honest comparison. With useEffect, you handle loading state, error state, cleanup, race conditions, refetching on focus, and caching... manually. Every time. With TanStack Query, you get all of that out of the box in one hook.

The mental model shift is simple: stop thinking about "syncing state" and start thinking about "server state vs client state." TanStack Query was built exactly for server state - data that lives outside your app and needs to stay fresh.

What you actually get for free:
→ Automatic background refetching
→ Request deduplication
→ Stale-while-revalidate caching
→ Retry on failure
→ Pagination & infinite scroll helpers
→ DevTools built in

This isn't about hype. It's about writing less boilerplate and shipping more reliable UIs. Your future self (and your teammates) will thank you.

#ReactJS #TanStackQuery #ReactQuery #FrontendDevelopment #JavaScript #SoftwareEngineering #CleanCode
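Request deduplication is the easiest of those features to demystify. The core idea - reuse the in-flight promise instead of firing a second identical request - fits in a few lines of plain JavaScript. This is a conceptual sketch of the technique, not TanStack Query's actual implementation:

```javascript
// Cache of in-flight requests, keyed by query key.
const inflight = new Map();

// If a request for `key` is already running, hand back the same pending
// promise instead of invoking the fetcher again.
function dedupedFetch(key, fetcher) {
  if (inflight.has(key)) return inflight.get(key);
  const promise = Promise.resolve()
    .then(fetcher)
    .finally(() => inflight.delete(key)); // allow a fresh fetch once settled
  inflight.set(key, promise);
  return promise;
}

// Two components asking for the same data at the same time → one network call.
```

TanStack Query layers caching, staleness tracking, and retries on top of this, but the dedup behavior you observe in DevTools is essentially this pattern.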
Your API returns JSON and you just JSON.parse() it straight into your app. Congrats - you've just imported a bug you didn't write.

Two issues kill production apps silently: unvalidated payload shapes and big-integer precision loss.

When a backend sends { "id": 9007199254740993 }, JavaScript quietly rounds it. You never notice until the wrong record gets updated. The fix? Use a library like json-bigint to handle numeric precision at parse time:

```javascript
import JSONbig from "json-bigint";

// Parse the raw response text, not the Response object
const data = JSONbig.parse(await response.text());
console.log(data.id.toString()); // "9007199254740993" - exact
```

For shape validation, run your data through a schema validator like Zod immediately after parsing. Never trust the shape just because it parsed without throwing. JSON.parse only tells you the string is valid JSON - it says nothing about whether the data is what your code expects.

Practical takeaway: treat every JSON.parse call as an untrusted boundary. Validate shape, handle large numbers explicitly, and fail loudly at the edge - not deep inside your business logic.

How are you currently handling big integers or payload validation in production?

#JavaScript #WebDevelopment #Frontend #NodeJS #SoftwareEngineering #CodeQuality
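You can watch the rounding happen with nothing but JSON.parse - the parsed value is literally a different number than the one the backend sent:

```javascript
// 9007199254740993 is 2^53 + 1, one past Number.MAX_SAFE_INTEGER,
// so a 64-bit float cannot represent it and JSON.parse rounds it.
const parsed = JSON.parse('{"id": 9007199254740993}');

String(parsed.id);                   // "9007199254740992" - off by one
parsed.id > Number.MAX_SAFE_INTEGER; // true - the warning sign to check for

// With BigInt, the round-trip loss is explicit:
BigInt("9007199254740993") === BigInt(parsed.id); // false
```

Note that comparing parsed.id against the numeric literal 9007199254740993 would misleadingly return true, because the literal in your source code gets rounded the same way - which is exactly why this bug hides so well.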
Your clean async/await code might be quietly adding 10 seconds to your load time 🦥

We had 15 API calls in SSR. It looked fine. It took 12.8s.

While digging into it, I noticed most of them were running sequentially… even though they didn't need to be.

The fix? We didn't change a single API. Just how they run.
• Promise.all → run critical data in parallel ⚡
• Promise.allSettled → don't let non-critical data block 🛡️

New load time: 2.5s (-80%) 🚀 Same APIs. Same backend. Way faster product.

Quick rule of thumb:
• Independent + critical → Promise.all
• Independent + non-critical → Promise.allSettled
• Dependent → await sequentially

If you're learning async/await, this is a trap worth avoiding early.

Honored to have this story featured on tiket.com's Medium publication. Read the full breakdown here: https://lnkd.in/gkhju-e5

#SoftwareEngineering #WebPerformance #JavaScript
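The rule of thumb maps to code pretty directly. A sketch under assumed fetchers - fetchUser, fetchOrders, and fetchRecommendations are hypothetical stand-ins for your API calls, not anything from the original post:

```javascript
async function loadPage({ fetchUser, fetchOrders, fetchRecommendations }) {
  // Independent + critical: run in parallel, fail the load if either fails.
  const [user, orders] = await Promise.all([fetchUser(), fetchOrders()]);

  // Independent + non-critical: never let this block or break the page.
  const [recs] = await Promise.allSettled([fetchRecommendations()]);

  return {
    user,
    orders,
    recommendations: recs.status === "fulfilled" ? recs.value : [],
  };
}
```

If fetchRecommendations rejects, the page still loads with an empty list - exactly the "don't let non-critical data block" behavior described above.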
I rebuilt my portfolio from scratch. Not because the old one was broken - because I wanted to find out what it actually feels like to ship a real app with an AI as my pair programmer.

Two weeks. Next.js 16, Supabase, Drizzle, Tailwind v4, Claude Code as the second engineer in the room.

The twist: it has an admin dashboard with role-based access. You can sign in as a visitor and click around the CRUD, the drag-and-drop reordering, the section editors - the whole thing - without being able to break anything I actually shipped. Building that read-only mode was honestly harder than building the public site.

A few honest things from the build:
→ Next.js 16 made params and cookies async. My first dev-server crash was a one-line fix and a five-minute "wait, why."
→ Drizzle's relational query API made me stop missing Prisma faster than I expected.
→ Claude Code is genuinely good at the boring 80%. It is genuinely bad at knowing when to stop. Managing that gap is the actual skill now.
→ I almost shipped an RLS policy that would have let any logged-in user edit my data. I caught it the night before launch. That story is its own post.

I'm going to write up the lessons over the next few weeks: the Next 16 footguns, the Claude Code workflow that actually worked, the role-based dashboard teardown, and the one mistake I'm still slightly embarrassed about.

If there's a specific part you want me to break down first - the Supabase setup, the prompting workflow, the async params migration, or the read-only mode - tell me which and I'll start there.

#nextjs #supabase #buildinpublic #portfolio #fullstack #TAP
🚫 If you're using useState for this… stop.

I still see this all the time in React codebases… (and yeah, I used to do it too 😅)

It looks fine at first. But it slowly creates:
→ duplicated state
→ unnecessary re-renders
→ effects you didn't really need

I ran into this recently while working on an analytics feature. Refactored it in 2 minutes → cleaner code, fewer bugs.

Rule I follow now: if you can derive it from state or props… just derive it. Fewer states = fewer problems.

Curious: what's a "why is this even state?" pattern you've seen before?
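The derivation is usually just a plain function over data you already have. A hypothetical analytics-flavored example (the shape is invented for illustration) - in React you'd call this during render, or wrap it in useMemo if it's expensive, instead of mirroring the result into a second useState:

```javascript
// Derived values: computed from the source events on demand,
// never stored alongside them where they could drift out of sync.
function summarize(events) {
  return {
    total: events.length,
    errors: events.filter((e) => e.level === "error").length,
  };
}

const events = [{ level: "info" }, { level: "error" }, { level: "info" }];
summarize(events); // { total: 3, errors: 1 } - no setState, no effect
```

Storing `total` and `errors` in their own state would mean writing an effect to keep them in sync with `events` - the exact duplicated-state-plus-unneeded-effect pattern described above.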
Something clicked for me recently about React Server Components that I wish someone had explained earlier.

Most explanations start with "it renders on the server" - which is technically true but completely misses the point. Server-side rendering already did that. The real shift is different.

RSC renders on the server and sends a serialized component tree: no JavaScript shipped, no hydration needed for server components. SSR renders HTML on the server but still ships the full JS bundle to the client for hydration. That distinction changes everything about how you think about bundle size and performance.

The mental model that finally made it click for me: your code can talk directly to your database or file system without exposing sensitive logic to the client, eliminating the need for API layers built just to get data into a component. No useEffect. No loading state. No API route. Just async/await directly in your component - and none of that logic ever reaches the browser.

The core challenge: React Server Components are not an optimization layer - they are an architectural boundary. Teams that treat them like a drop-in performance fix run into problems. Teams that rethink their component structure around them get the real benefits.

AI is changing how we write React code, not what we build with it. The architectural decisions - when to adopt Server Components, how to structure state, which rendering patterns fit - still require human judgment.

If you're building with Next.js and haven't sat down with RSC properly yet, that's the investment worth making this week.

#React #WebDevelopment #SoftwareDevelopment #NextJS #Frontend #JavaScript #RSC #DeveloperProductivity
Nobody knew what was wrong.

Users were hitting timeout alerts. Pages were hanging. We checked everything - API responses, network calls, server logs. Everything looked fine on paper. We were stumped.

Then one of my teammates opened the React DevTools Profiler. We were shocked.

Every single component was re-rendering. Not once. Not twice. On every minor state update, the entire tree was lighting up. The app was redrawing itself from scratch every few seconds - and the API never even got a fair chance to respond before the UI had already timed out.

The answer was in our React code the whole time. We just weren't looking there. Here's what was actually causing it:

1. useEffect chains nobody noticed: Effect A updated state → triggered Effect B → triggered Effect C. Three effects. One user action. Dozens of cascading re-renders. Silent, invisible, and absolutely devastating for performance.

2. No memoization where it mattered: expensive computations - filtering large datasets, transforming API responses - running fresh on every single render. The UI was redoing work it had already done, over and over.

3. State living too high in the tree: a state update at the top was trickling down and forcing re-renders in components that had nothing to do with that state change.

4. Components doing too much: UI, data fetching, and transformation logic all in one place - so any change anywhere triggered everything everywhere.

The fix wasn't one big refactor. It was systematic:
• Audited and collapsed redundant useEffect chains
• Moved state closer to where it was actually used
• Added useMemo and useCallback only where the Profiler confirmed the cost
• Separated data logic from render logic

Result: timeout alerts gone. Page load dropped significantly. Same API. Same backend. Zero infrastructure changes. The backend was never the problem.

One thing I now tell every developer on my team: "Profile first. Fix second. Always." Because the bug you assume is never the bug that's actually there.

Have you ever spent days chasing a bug that turned out to be somewhere you never expected? 👇

#React #ReactJS #Frontend #MERN #JavaScript #TypeScript #WebPerformance #SoftwareEngineering
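Outside React, point 2 is ordinary memoization: cache the last inputs and skip recomputation when they haven't changed, which is the same contract useMemo gives you inside a component. A minimal single-slot sketch (the `visibleRows` example and its row shape are hypothetical):

```javascript
// Memoize the last call: recompute only when an argument actually changed.
function memoizeOne(fn) {
  let lastArgs = null;
  let lastResult;
  return (...args) => {
    const unchanged =
      lastArgs !== null &&
      lastArgs.length === args.length &&
      lastArgs.every((a, i) => Object.is(a, args[i]));
    if (!unchanged) {
      lastResult = fn(...args);
      lastArgs = args;
    }
    return lastResult;
  };
}

// Calling again with the same array reference skips the expensive filter -
// the equivalent of useMemo with [rows, query] as its dependency array.
const visibleRows = memoizeOne((rows, query) =>
  rows.filter((r) => r.name.includes(query))
);
```

Like useMemo, this only helps when the inputs are referentially stable - which is why recreating arrays and objects on every render defeats memoization entirely.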