Found a fascinating behavior of the Fetch API while debugging a production streaming issue today. If you work with LLMs or real-time data, you need to know this:

The Scenario
I had a backend endpoint that was "polymorphic":
1. If credentials were insufficient, it returned a JSON error.
2. If everything was fine, it returned a text stream (for that snappy AI typing effect).

The Bug
I tried to parse the response as JSON first to check for errors. If that failed, I assumed it was a stream and tried to read it. Result? The stream was empty, and the browser threw a "locked or disturbed" error.

Why does this happen?
A Fetch response body isn't just a variable; it's a ReadableStream. Think of it like a one-way conveyor belt:
A. The "lock": once you call .json(), .text(), or .blob(), the browser attaches a reader to that conveyor belt. To protect memory, a stream can only have one reader at a time.
B. No rewind: once those bytes are pulled off the belt and turned into a JavaScript object, they are gone. You can't rewind the belt to read them again as a stream. The stream is now "disturbed."

The Fix: .clone()
If you need to "peek" at the data without destroying the original stream, use the .clone() method.

const response = await fetch('/api/stream');

// Create an identical twin of the response
const clone = response.clone();

try {
  // Use the clone to "peek" for JSON errors
  const data = await clone.json();
  handleError(data);
} catch {
  // If parsing the clone fails, the ORIGINAL response is still
  // "undisturbed" and ready to be streamed to the UI!
  return response.body.getReader();
}

The Lesson: streams are built for performance and memory efficiency, not flexibility. If you need to read a response body twice, .clone() is your best friend.

#WebDev #JavaScript #Frontend #CodingTips #SoftwareEngineering #Fetch #BrowserAPI
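To complete the picture, here's a minimal sketch of consuming that undisturbed original as a stream (shown inline rather than returning the reader from the catch branch; appendToUI is a hypothetical render helper, not part of the original post):

const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // Decode each chunk as it arrives and hand it to the UI for the typing effect
  appendToUI(decoder.decode(value, { stream: true }));
}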
Day 92 of me reading random and basic but important dev topicsss..... Today I read about the modern Fetch API.

If you are still reaching for XMLHttpRequest or unnecessarily bundling heavy external libraries for simple network calls, it's time to leverage the native power of fetch(). It's modern, versatile, and built directly into all modern browsers. Here is everything every dev needs to know about the anatomy of a fetch request.

1. The Two-Stage Process
Getting a response with fetch() isn't a single step; it's a two-stage promise resolution:

Stage 1: The Headers Arrive
The promise returned by fetch(url) resolves with a Response object the moment the server responds with headers - before the full body downloads. This is where you check the HTTP status.
Note: a 404 or 500 error does NOT reject the promise. A fetch promise only rejects on network failures. Always check response.ok (true for 200-299 statuses) or response.status!

Stage 2: Reading the Body
To actually get the data, you call an additional promise-based method on the response. Fetch gives you multiple ways to parse the body:
* response.json() - parses as JSON (most common)
* response.text() - returns raw text
* response.blob() - for typed binary data (like downloading an image)
* response.formData() - for multipart/form-data
* response.arrayBuffer() - for low-level binary data

2. The "Already Consumed" Trap
Here is a classic gotcha that trips up many developers: you can only read the body once. If you call await response.text() to debug or log the output and then subsequently call await response.json(), your code will fail. The stream has already been consumed! (There's a short reproduction after this post.)

Summary of a standard GET request:

let response = await fetch('https://lnkd.in/e4utYKVK');

if (response.ok) {
  let data = await response.json();
  console.log(data);
} else {
  console.error("HTTP-Error: " + response.status);
}

Keep Learning!!!!

#JavaScript #WebDevelopment #SoftwareEngineering #FetchAPI #FrontendDev
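A minimal reproduction of the "already consumed" trap described above (the /api/user endpoint is hypothetical; the second read is what throws), plus the clone escape hatch when you genuinely need both reads:

const res = await fetch('/api/user');
const text = await res.text();   // first read: consumes the body stream
const data = await res.json();   // TypeError: body stream already read

// If you really need to read twice, read one copy from a clone instead:
const res2 = await fetch('/api/user');
console.log(await res2.clone().text()); // safe to log
const json = await res2.json();         // the original body is still readable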
Your API returns JSON and you just JSON.parse() it straight into your app. Congrats, you've just imported a bug you didn't write.

Two issues kill production apps silently: unvalidated payload shapes and big-integer precision loss.

When a backend sends { "id": 9007199254740993 }, JavaScript quietly rounds it. You never notice until a wrong record gets updated.

The fix? Handle numeric precision at parse time with a library like json-bigint (or, in engines that support it, a JSON.parse reviver that reads the raw source text):

import JSONbig from "json-bigint";

const data = JSONbig.parse(rawJson);  // rawJson is the raw response text
console.log(data.id.toString());      // "9007199254740993" - exact

For shape validation, run your data through a schema validator like Zod immediately after parsing. Never trust the shape just because it parsed without throwing. JSON.parse only tells you the string is valid JSON - it says nothing about whether the data is what your code expects.

Practical takeaway: treat every JSON.parse call as an untrusted boundary. Validate shape, handle large numbers explicitly, and fail loudly at the edge - not deep inside your business logic.

How are you currently handling big integers or payload validation in production?

#JavaScript #WebDevelopment #Frontend #NodeJS #SoftwareEngineering #CodeQuality
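A quick demonstration of the rounding itself, in plain standard JavaScript (nothing assumed beyond the value quoted above):

const raw = '{ "id": 9007199254740993 }';

console.log(Number.MAX_SAFE_INTEGER);                  // 9007199254740991
console.log(JSON.parse(raw).id);                       // 9007199254740992 - silently rounded
console.log(JSON.parse(raw).id === 9007199254740992);  // true: the wrong record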
Day 98 of me reading random and basic but important dev topicsss.... Today I read about aborting a request in fetch.

By default, JavaScript promises don't have a built-in "cancel" button. Once a fetch() is fired, it wants to run to completion, which can eat up bandwidth, cause memory leaks, or create race conditions in your UI if the data arrives after the user has moved on.

Enter the native Web API superhero: AbortController.

Here's how it works under the hood: an AbortController is a simple object with a single method (abort()) and a single property (signal). When you call controller.abort(), the signal object emits an "abort" event and its aborted property becomes true. Because modern fetch is designed to integrate seamlessly with this API, it actively listens for that exact signal.

Here is the standard recipe:
1. Create a new controller instance.
2. Pass its signal as an option to your fetch.
3. Call controller.abort() when the request is no longer needed (e.g. the component unmounts, the user hits a "Cancel" button, or a timeout is reached).

The implementation:

// 1. Initialize the controller
const controller = new AbortController();

// Set a timeout to cancel the request after 1 second
setTimeout(() => controller.abort(), 1000);

try {
  // 2. Pass the signal to fetch
  const response = await fetch('/api/heavy-data', { signal: controller.signal });
  console.log("Data loaded!", await response.json());
} catch (err) {
  // 3. Handle the AbortError specifically
  if (err.name === 'AbortError') {
    console.log("Request was successfully aborted!");
  } else {
    console.error("Fetch failed:", err);
  }
}

Note: always handle that AbortError in your catch block! When fetch aborts, it intentionally rejects the promise. Catching it specifically ensures it doesn't get logged as a false positive in your error-tracking software (like Sentry or Datadog).

Keep Learning!!!!

#JavaScript #WebDevelopment #FrontendEngineering #SoftwareDevelopment #FetchAPI
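For the timeout case specifically, newer runtimes (modern browsers, Node 17.3+) also ship AbortSignal.timeout(), which folds the setTimeout-plus-controller pattern above into one line. A minimal sketch against the same hypothetical endpoint:

try {
  // The signal aborts itself automatically after 1000 ms
  const response = await fetch('/api/heavy-data', { signal: AbortSignal.timeout(1000) });
  console.log(await response.json());
} catch (err) {
  // Timeouts from AbortSignal.timeout surface as TimeoutError rather than AbortError
  if (err.name === 'TimeoutError') console.log("Request timed out");
  else console.error("Fetch failed:", err);
}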
A useful mental model:
Debounce = state correctness
Throttle = time-based sampling

Real-world nuance:
• Debouncing can feel unresponsive in fast UIs
• Throttling can still overwhelm slow APIs if misconfigured
• Both can introduce subtle bugs if tied to stale closures in React

And sometimes… the right answer is neither.

Example: modern apps often use:
• request cancellation (AbortController)
• server-side rate limiting
• caching layers (React Query, SWR)

Because the real problem isn't just frequency — it's data flow architecture.

Final takeaway: debounce and throttle are not just utility functions. They are tools to shape how your system reacts to time and user behavior. And choosing the wrong one isn't just a performance issue — it's a product experience issue.

#frontend #reactjs #javascript #webperformance #softwareengineering
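To make the distinction concrete, here is a minimal sketch of the two helpers in plain JavaScript (not from the original post; simplified, with no leading/trailing-edge options):

// Debounce: run fn only after `wait` ms of silence - "last call wins" (state correctness)
function debounce(fn, wait) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}

// Throttle: run fn at most once per `wait` ms - time-based sampling
function throttle(fn, wait) {
  let last = 0;
  return (...args) => {
    const now = Date.now();
    if (now - last >= wait) {
      last = now;
      fn(...args);
    }
  };
}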
tl;dr: drive-by AI pull requests can work if you've automated your submission pipeline.

Got my first drive-by AI-powered pull request today on the reCAPTCHA client. If I was being strict, I would have rejected it because it addressed six separate issues in one go. However, I ended up merging it straight away because, with the clean-up I'd already done - the unit tests (PHPUnit), the linter (PHP CS Fixer), and the static analysis (PHPStan) - the overall change was still easy to understand and I could be pretty confident it was all working.

Honestly, if it wasn't for the comprehensive description and telltale branch naming, I wouldn't have known and would have just assumed it was some enthusiastic clean-up. I mean, I commit stuff straight to main, so I can't judge too harshly. Again though, I can do that because the infrastructure is there to validate basic quality.

https://lnkd.in/efxXQWxT
We had a performance issue that wasn't about rendering. It was about data fetching. Here's what went wrong 👇

Problem:
→ Same API called multiple times
→ Unnecessary network load
→ Slower UI

Root cause:
→ No caching strategy
→ API calls triggered on every render
→ Duplicate requests across components

What I did:
→ Introduced caching (React Query)
→ Centralized data fetching logic
→ Avoided duplicate calls

Result:
→ Reduced API load
→ Faster UI
→ Better data consistency

Insight: performance is not just rendering. It's also how efficiently you fetch data.

#ReactJS #DataFetching #Frontend #SoftwareEngineering #CaseStudy #JavaScript #Performance #WebDevelopment #Engineering #Optimization #FrontendDeveloper
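The post doesn't include code, but the pattern it describes usually looks something like this with React Query v5 - a shared query hook so every component reads the same cache entry instead of firing its own request (the endpoint and names are illustrative, not from the post):

import { useQuery } from '@tanstack/react-query';

// One place defines how users are fetched and cached
function useUsers() {
  return useQuery({
    queryKey: ['users'],                                    // same key = same cache entry, deduped
    queryFn: () => fetch('/api/users').then(r => r.json()),
    staleTime: 60_000,                                      // serve cached data for 1 minute
  });
}

// Any number of components can call useUsers() and share a single request
function UserList() {
  const { data, isPending } = useUsers();
  return isPending ? 'Loading…' : data.map(u => u.name).join(', ');
}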
Cursor vs Offset Pagination: Why Your API Needs an Upgrade

If you're still using offset pagination… your API might already be slowing down.

Offset Pagination (The Old Way)
GET /users?page=3&limit=10
Looks simple, but:
• Gets slower as data grows
• Can return duplicate or missing records
• Breaks when data updates in real time

Cursor Pagination (The Upgrade)
GET /users?cursor=abc123
Instead of pages, it uses a reference point (the cursor).

Why Cursor Pagination Wins
• Faster queries (no row skipping)
• Consistent results
• Scales to millions of records
• Perfect for feeds, chats, notifications

If you're building infinite scroll, notifications, or real-time apps, offset pagination is a bad choice.

Final Thought
Offset pagination is easy. Cursor pagination is what production systems use. If your app is growing… this upgrade is not optional anymore.

#Backend #API #SystemDesign #WebDevelopment #Scalability #NodeJS #Database #SoftwareEngineering #NestJS #ExpressJS #FullStack #Frontend #Angular #ReactJS #Microservices #DistributedSystems #CloudComputing #Performance #AI #AIAgents #AIEngineering #WebArchitecture #RemoteWork #TechJobs #Developers #Coding
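A rough sketch of what the difference looks like at the query level. This is illustrative only: app is an Express-style app, db.query is a hypothetical helper returning an array of rows, and a real cursor would usually be an opaque encoded value rather than a bare id.

// Offset: the database still scans and discards all the skipped rows
//   SELECT * FROM users ORDER BY id LIMIT 10 OFFSET 20;
// Cursor: seek directly past the last id the client saw
//   SELECT * FROM users WHERE id > $lastSeenId ORDER BY id LIMIT 10;

app.get('/users', async (req, res) => {
  const cursor = Number(req.query.cursor) || 0;
  const rows = await db.query(
    'SELECT * FROM users WHERE id > $1 ORDER BY id LIMIT 10',
    [cursor]
  );
  // Return the last id as the next cursor so the client can continue from there
  res.json({ data: rows, nextCursor: rows.at(-1)?.id ?? null });
});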
🤯 I wrote an article about why you don't need useEffect (like the docs). Then I turned it into an AI skill that enforces it automatically 👇

Every time I post a React code snippet, someone tells me I shouldn't use useEffect there. And every time, someone else argues I should. So I wrote the definitive guide.

One question decides everything: is this syncing with an external system? If the answer is no, there's a better alternative for every case:

1️⃣ Transforming data? Just calculate it during render. The useEffect version renders with stale data first, then re-renders with the correct data. Two passes for something that needs zero extra ones. (See the sketch after this post.)

2️⃣ Resetting state on prop change? Use the key prop instead of watching props in an effect. React unmounts the old component and mounts a new one, automatically resetting all state, including child components you forgot about.

3️⃣ Chaining effects? When selecting a country triggers an effect that resets the city, which triggers another effect that resets the district, you get three re-renders painting intermediate states nobody needs. Move all downstream resets into the event handler that started the chain, and React batches them into a single render.

4️⃣ Fetching data? useEffect works here, technically, since the network is an external system. But you'll need to handle race conditions, caching, deduplication, and background revalidation yourself. At that point you've built TanStack Query, so just use TanStack Query.

5️⃣ Subscribing to an external store? useSyncExternalStore prevents tearing during concurrent renders, something your useEffect version won't catch until your app gets complex enough for React to split work across frames.

I also packaged the full decision tree as a Claude Code skill. Two commands to install, and it forces the AI to ask the same question you should be asking before every useEffect.

The article covers the ESLint plugin for catching these patterns mechanically too, which is honestly more useful long-term than any article, because it keeps working after the person who read this leaves the project.

Link to the full article in the comments 👇

Are you still reaching for useEffect by default, or have you already trained yourself out of it?

#React #JavaScript #WebDev #CleanCode #TanStackQuery
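A minimal sketch of point 1 - deriving data during render instead of syncing it with an effect. The component and field names are invented for illustration:

import { useState, useEffect } from 'react';

// Before: extra state plus an effect - renders stale data first, then re-renders
function CartBefore({ items }) {
  const [total, setTotal] = useState(0);
  useEffect(() => {
    setTotal(items.reduce((sum, item) => sum + item.price, 0));
  }, [items]);
  return <p>Total: {total}</p>;
}

// After: derive it during render - no extra state, no effect, one pass
function CartAfter({ items }) {
  const total = items.reduce((sum, item) => sum + item.price, 0);
  return <p>Total: {total}</p>;
}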
If you're still using JSON.parse on a 50MB API response, you're blocking the main thread and silently hurting your app's performance.

JSON.parse is synchronous. It loads the entire payload into memory before you can touch a single byte. For large datasets, that's a guaranteed bottleneck.

The fix? Stream-parse it instead. Using the Web Streams API with a streaming JSON parser like @streamparser/json, you can process data as it arrives:

import { JSONParser } from '@streamparser/json';

const parser = new JSONParser();
parser.onValue = ({ value }) => console.log(value);

fetch('/api/large-data')
  .then(res => res.body.pipeThrough(new TextDecoderStream()))
  .then(stream => stream.pipeTo(new WritableStream({
    write(chunk) { parser.write(chunk); }
  })));

This approach lets you start processing records before the full payload even lands.

Practical takeaway: if your payload exceeds 1MB, streaming should be your default, not your fallback. Most developers reach for JSON.parse out of habit, not necessity. The tooling to do better has been available for years.

Are you stream-parsing in production, or is JSON.parse still your go-to?

#JavaScript #WebDevelopment #Performance #WebStreams #FrontendEngineering #JSOptimization
Your JSON.parse reviver is a security and maintainability liability you're probably ignoring.

Revivers feel clever until you're debugging silent data coercions at 2am. They mutate deserialized data in ways that are hard to trace, test, or trust.

Here's what most devs still write:

const data = JSON.parse(rawJson, (key, value) =>
  key === "createdAt" ? new Date(value) : value
);

No validation. No type safety. Just hope.

A schema validator gives you explicit contracts instead:

const result = UserSchema.safeParse(JSON.parse(rawJson));
if (!result.success) throw new Error("Invalid payload shape");

This approach separates concerns cleanly - parsing happens first, validation happens second, and your errors are structured and meaningful. Revivers also don't catch missing fields, wrong types, or unexpected shapes. Schema validation does all three.

Practical takeaway: treat every external JSON payload as untrusted input and validate its shape explicitly before your app logic ever touches it.

Are you still using revivers in production, or have you moved to schema-first validation?

#JavaScript #WebDevelopment #TypeSafety #FrontendEngineering #NodeJS #CodeQuality
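The snippet above leaves UserSchema undefined. A plausible Zod definition that also replaces the createdAt reviver might look like this - a sketch only, with invented fields, assuming Zod 3.20+ for z.coerce:

import { z } from 'zod';

// z.coerce.date() turns the ISO string into a Date during validation,
// which makes the reviver unnecessary
const UserSchema = z.object({
  id: z.string(),
  name: z.string(),
  createdAt: z.coerce.date(),
});

const result = UserSchema.safeParse(JSON.parse(rawJson));
if (!result.success) throw new Error("Invalid payload shape");

const user = result.data; // validated shape, and createdAt is a real Date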