Day 91 of me reading random but important dev topics... Yesterday I read about how to capture File objects. Today, I read about how to actually look inside them. Enter: the FileReader API.

FileReader is a built-in object with one purpose: reading data from Blob (and File) objects asynchronously. Because reading from disk can take time, it delivers the data through an event-driven model. Here is the breakdown of how to wield it.

The 3 core reading methods (which one you choose depends entirely on what you plan to do with the data):

1. readAsText(blob, [encoding]) - perfect for parsing CSVs or text files into a string.
2. readAsDataURL(blob) - reads the binary data and encodes it as a base64 data URL. (Ideal for immediately previewing an uploaded image via an <img> src attribute.)
3. readAsArrayBuffer(blob) - reads the data into a binary ArrayBuffer for low-level byte manipulation.

(Note: you can cancel any of these operations mid-flight by calling reader.abort().)

The event lifecycle: as the file reads, FileReader emits several events. The most common are load (success) and error (failure), but we also get:

* loadstart - the read started
* progress - fires periodically during the read
* loadend - the read finished, regardless of success or failure

let reader = new FileReader();
reader.readAsText(file);
reader.onload = () => console.log("Success:", reader.result);
reader.onerror = () => console.log("Error:", reader.error);

The fast track: if your only goal is to display an image or generate a download link, skip FileReader entirely! Use URL.createObjectURL(file). It generates a short, temporary URL instantly, without reading the file contents into memory.

Web Workers: dealing with massive files? You can use FileReaderSync inside Web Workers. It reads files synchronously (returning the result directly, no events) without freezing the main UI thread.

Keep learning! #JavaScript #WebAPI #FrontendDev #WebArchitecture #Coding
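A small sketch of the readAsDataURL preview flow. The wiring is factored into a function that takes the two elements as arguments (in a real page you would pass document.querySelector results for a hypothetical file input and image tag), so the logic itself can run outside a browser:

```javascript
// Wire a file <input> to an <img> preview using readAsDataURL.
// `input` and `preview` are assumed to be DOM elements (or compatible stubs);
// FileReaderImpl defaults to the browser's FileReader.
function wirePreview(input, preview, FileReaderImpl = globalThis.FileReader) {
  input.addEventListener("change", () => {
    const file = input.files && input.files[0];
    if (!file) return;

    const reader = new FileReaderImpl();
    reader.onload = () => {
      // reader.result is a base64 data URL like "data:image/png;base64,..."
      preview.src = reader.result;
    };
    reader.onerror = () => console.error("Read failed:", reader.error);
    reader.readAsDataURL(file);
  });
}
```

In a browser you would call it as `wirePreview(document.querySelector("#fileInput"), document.querySelector("#preview"))` (element ids are hypothetical).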
📘 JavaScript Interview Module (Basic)

Section 2: Variables
1. What is a variable?
2. Why do we use a variable?
3. How do we declare a variable?
4. What are the variable declaration rules?
5. How many types of variables do you know?
6. When do we use var?
7. When do we use let?
8. When do we use const?
9. How do you create an undefined variable?
10. What is an undefined variable?
11. What is undefined?
12. What is NaN?
13. What is null?
14. What is concatenation?
15. What is Infinity?
16. How do you assign data to a variable?
17. Is a variable primitive or non-primitive?

🎯 Interview Questions (Extra)
1. What is the difference between var, let, and const?
2. What is variable hoisting?
3. Why can var be accessed before it is declared?
4. What is the temporal dead zone (TDZ)?
5. Can we reassign a const variable?
6. Why shouldn't modern JavaScript use var?

Section 3: JavaScript Data Types & Keywords
1. What are the JavaScript data types?
2. What is a reserved keyword?
3. What is a special keyword?
4. How can we check a value's data type?
5. Are JavaScript variables case-sensitive?
6. What are the JavaScript variable naming conventions?
7. How do you convert the string "20" to the number 20?
8. What are some JavaScript built-in functions?

🎯 Interview Questions (Extra)
1. What is the difference between primitive and reference types?
2. What is type coercion?
3. What is the difference between null and undefined?
4. What is the output of typeof null, and why? (Important trick question)
5. How does memory management differ between primitive and reference type data?

#DotNet #AspNetCore #MVC #FullStack #SoftwareEngineering #ProgrammingTips #DeveloperLife #LearnToCode #JavaScript #JS #JavaScriptTips #JSLearning #FrontendDevelopment #WebDevelopment #CodingTips #CodeManagement #DevTools
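Several of the trick questions above answer themselves in code. A quick Node-runnable sketch of the classics (typeof null, var hoisting vs the TDZ, NaN, and string-to-number conversion):

```javascript
// typeof null is "object" -- a long-standing quirk kept for backward compatibility.
console.log(typeof null);      // "object"
console.log(typeof undefined); // "undefined"

// var is hoisted and initialized to undefined; let lives in the temporal dead zone.
console.log(a); // undefined (no error)
var a = 1;

try {
  console.log(b); // throws: cannot access 'b' before initialization
} catch (err) {
  console.log(err.name); // "ReferenceError"
}
let b = 2;

// NaN is the only value that is not equal to itself.
console.log(NaN === NaN); // false

// Convert the string "20" to the number 20.
console.log(Number("20") + 5); // 25
```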
The Try-Catch Time Trap: Why Do Async Errors Escape?

Let's look at code that reads and parses an invalid JSON file.

Sync code:

try {
  const data = fs.readFileSync("invalid.json", "utf-8");
  const jsonData = JSON.parse(data);
} catch (err) {
  console.log("error caught:", err.message); // catches the parsing error
}

All good.

Async code:

try {
  fs.readFile("invalid.json", "utf-8", (err, data) => {
    const jsonData = JSON.parse(data);
  });
} catch (err) {
  console.log("error caught:", err.message); // does NOT even run
}

The catch block won't execute. Now the question is… 👉 Why?

Here's how I started thinking about it. When JS hits an error, it stops execution and looks for a catch block in the current call stack; if none is found, the error bubbles up the stack.

In sync code, everything runs in one continuous stack: the error happens, the catch block is right there, so it works.

But async changes things. When fs.readFile(..., callback) runs, JS does NOT execute the callback immediately. Instead it registers the callback, hands it off to be queued by the event loop, and moves on.

Now the important part 👇 The current call stack finishes execution, which means the try-catch is gone.

Later, when the file read completes, the event loop pushes the callback onto the (now empty) call stack and the callback runs. But this is a new call stack, and the old try-catch is long gone. So when the error happens inside the callback: ❌ there is no catch block anymore.

That's when it clicked for me: 👉 try-catch only works within the same execution stack. Not across time. Not across async boundaries.

#JavaScript #Node #Programming #ErrorHandling #Interview #Eventloop #Callstack
Your FastAPI backend is fast to build. But is it fast to run?

Most developers find out the answer at the worst possible moment: when real users hit it at the same time. Endpoints slow down. Requests pile up. Users drop off. Not because the code is wrong, but because it is blocking.

Here is what blocking actually looks like in production:

Your user hits an endpoint. FastAPI calls the database. That query takes 200ms. During those 200ms, that worker is frozen. Not slow. Frozen. Every other request assigned to it sits in a queue waiting for that one query to finish. 100 users hit your API at the same time: user 1 gets served, users 2 to 100 wait in line. That is sync. That is blocking I/O.

FastAPI was built to never work that way. With async/await, while your database query waits in the background, your server is already picking up the next request. And the next. 200ms of database wait becomes invisible to every other user.

In real backend terms:

SYNC (blocks):

def get_orders(user_id: int):
    return db.query(user_id)

ASYNC (non-blocking):

async def get_orders(user_id: int):
    return await db.query(user_id)

Same logic. Same database. Same server. But now 100 users get served in roughly the time it used to take to serve 1.

This matters even more when your endpoints call external services:

1. Payment gateway: 300ms wait.
2. AI model response: 2 to 3 seconds wait.
3. Email service: 500ms wait.

With sync, every user feels every millisecond of every one of those waits. With async, none of them do.

FastAPI gives you non-blocking I/O natively. No extra setup. No plugins. No workarounds. Just write async, add await, and let FastAPI handle the rest.

Your backend was already fast to build. Now make it fast to run.

Are you using async endpoints in your FastAPI projects? 👇

#FastAPI #Python #BackendDevelopment #AsyncProgramming #SoftwareEngineering #APIDesign #PythonDeveloper #WebDevelopment #TechIn2026 #BuildInPublic
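The claim is easy to demonstrate with plain asyncio (the same event loop FastAPI runs on). Here, fake_db_query is a made-up stand-in for an async database driver: 10 simulated 200ms queries complete in roughly 200ms total, not 2 seconds, because the loop switches tasks at every await:

```python
import asyncio
import time

async def fake_db_query(user_id: int) -> str:
    # Stand-in for an async database call that takes ~200ms.
    await asyncio.sleep(0.2)
    return f"orders for user {user_id}"

async def main() -> float:
    start = time.perf_counter()
    # 10 "requests" run concurrently; the event loop switches while each awaits.
    results = await asyncio.gather(*(fake_db_query(i) for i in range(10)))
    elapsed = time.perf_counter() - start
    print(f"{len(results)} queries in {elapsed:.2f}s")  # ~0.2s, not ~2.0s
    return elapsed

if __name__ == "__main__":
    asyncio.run(main())
```

Replace asyncio.sleep with a blocking call (time.sleep) and the same run takes the full 2 seconds, which is exactly the sync/async difference described above.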
Most web scrapers fail because they skip the reconnaissance phase.

I've seen engineers spend 3 days debugging a scraper that could've been designed correctly in 3 hours. The mistake? Writing code before understanding the website's architecture.

Here's the reconnaissance framework I follow before writing any scraper:

1. Network Tab First
Watch XHR/Fetch requests. Often, the data you need is already in JSON format from an internal API. No need to parse HTML.

2. Inspect Authentication Flows
Check if the site uses cookies, tokens, or session-based auth. Missing this means your scraper works locally but fails in production.

3. Map the DOM Structure
Identify stable selectors. Look for data attributes or unique IDs. Class names change frequently during frontend deployments.

4. Test Pagination and Infinite Scroll
Understand how data loads. Is it URL-based pagination or JavaScript-triggered? This changes your entire scraping strategy.

5. Check Anti-Scraping Signals
Rate limits, CAPTCHAs, user-agent checks, IP blocks. Know what you're dealing with upfront.

6. Validate Data Consistency
Scrape the same page multiple times. Does the structure change? Are there A/B tests affecting the layout?

This reconnaissance phase saves you from writing fragile code that breaks every week. Good scraping isn't about clever code. It's about understanding the system you're extracting data from.

What's the most overlooked step when you build scrapers?

#WebScraping #Python #Automation #DataEngineering #SoftwareTesting #QAEngineering
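Step 3 in practice: a small, self-contained sketch (Python stdlib only; the HTML snippets are made up) showing why a data attribute survives a redesign that renames every CSS class:

```python
from html.parser import HTMLParser

# Two renders of the "same" product card: the class names changed between
# frontend deployments, but the data attribute did not.
BEFORE = '<div class="card-v1 blue" data-product-id="sku-42">Widget</div>'
AFTER = '<div class="c_9f3a x-item" data-product-id="sku-42">Widget</div>'

class ProductIdParser(HTMLParser):
    """Collects data-product-id values instead of matching on CSS classes."""

    def __init__(self):
        super().__init__()
        self.ids = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "data-product-id":
                self.ids.append(value)

def extract_ids(html: str) -> list[str]:
    parser = ProductIdParser()
    parser.feed(html)
    return parser.ids

print(extract_ids(BEFORE))  # ['sku-42']
print(extract_ids(AFTER))   # ['sku-42'] -- same selector still works
```

A class-based selector would have matched BEFORE and silently returned nothing for AFTER, which is exactly the "breaks every week" failure mode.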
Day 93 of me reading random but important dev topics... Today I read about POST requests, binary data & performance in fetch().

1. Headers: Reading and Writing
* Reading: response.headers gives you a Map-like object. You can get a specific header via response.headers.get('Content-Type') or iterate over all of them with a for...of loop.
* Writing: pass a headers object in the options parameter. Note: the browser exclusively controls certain "forbidden" headers (like Host, Origin, Referer, and Cookie) for security reasons.

2. Making POST Requests
To send data, add method and body to your fetch options. When sending a JSON payload, don't forget the Content-Type! By default, a string body is sent as text/plain.

let user = { name: 'John', surname: 'Smith' };

let response = await fetch('/api/user', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json;charset=utf-8' },
  body: JSON.stringify(user)
});

3. Submitting Binary Data (Images/Files)
Fetch handles binary data beautifully. You can pass a Blob or BufferSource directly as the body. If you pass a Blob (for instance, an image generated from an HTML <canvas> via canvas.toBlob), you don't even need to set the Content-Type header manually: the Blob's built-in type (e.g., image/png) automatically becomes the Content-Type.

4. Pro Tip: Optimizing Concurrent Fetches
When fetching multiple independent endpoints, don't map an array of URLs to an array of fetch() calls and then blindly await Promise.all(...) before calling .json(). Why? It forces every response's JSON parsing to wait until the slowest network request has finished. Instead, attach .then(res => res.json()) directly to each individual fetch promise. That way, as soon as any single request finishes, it immediately starts processing its JSON payload without waiting for its siblings.

Keep learning! #JavaScript #WebDevelopment #SoftwareEngineering #WebPerformance #FetchAPI
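Point 4 as runnable code: a simulation with a fake fetch (the endpoints and delays are made up) so the two patterns can be compared without a network. The total time is gated by the slowest request either way; the win is that the fast endpoint's parsed data is available much sooner:

```javascript
// Simulated fetch: resolves after `delay` ms with a response whose .json()
// takes another 50ms to "parse".
function fakeFetch(url, delay) {
  return new Promise((resolve) =>
    setTimeout(
      () => resolve({
        json: () => new Promise((r) => setTimeout(() => r({ url }), 50)),
      }),
      delay
    )
  );
}

const endpoints = [["/fast", 100], ["/slow", 400]];

async function main() {
  // Pattern A: wait for ALL responses, then parse. The fast endpoint's JSON
  // can't start parsing until the slow request has finished (~450ms).
  let start = Date.now();
  let firstA;
  const responses = await Promise.all(endpoints.map(([u, d]) => fakeFetch(u, d)));
  await Promise.all(
    responses.map((res) =>
      res.json().then((data) => {
        firstA ??= Date.now() - start; // when the first parsed result arrives
        return data;
      })
    )
  );

  // Pattern B: chain .json() onto each fetch. Each response parses as soon as
  // it lands, so the fast endpoint's data is ready at ~150ms.
  start = Date.now();
  let firstB;
  await Promise.all(
    endpoints.map(([u, d]) =>
      fakeFetch(u, d)
        .then((res) => res.json())
        .then((data) => {
          firstB ??= Date.now() - start;
          return data;
        })
    )
  );

  console.log(`first result: ~${firstA}ms (parse-after-all) vs ~${firstB}ms (parse-per-fetch)`);
  return [firstA, firstB];
}

main();
```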
Name vs. Slug: they aren't as interchangeable as we sometimes think! 💡

While contributing to the django-taggit project recently, I ran into an interesting edge case that reminded me of a common developer trap: treating a slug as an exact replica of a name.

It's an easy habit to fall into. We usually auto-generate a slug from a name ("My Post" ➡️ "my-post") and then start using the two interchangeably in database queries or business logic. But here is the catch: a slug is a normalized version of a name, not a 1:1 match.

Think about tags like "C++" and "C#". Depending on your slugify function, both might normalize to just "c". If your system logic assumes they are identical and queries by slug when it actually needs the exact name, you are going to hit unexpected collisions and data bugs!

🛠️ How to manage them and make the right decision:

📌 Use Name for humans: UI rendering, reports, and anywhere readability is the priority. This is your exact source of truth for display.
📌 Use Slug for systems: URLs, API routing, and SEO-friendly lookups. It's built for the web, not for exact data representation.

The golden rule: before writing a query, ask yourself, "Am I trying to route a web request, or am I trying to identify the exact object?"

Have you ever run into a weird bug because of a name/slug collision? Let's discuss in the comments! 👇

#Django #Python #BackendEngineering #OpenSource #SoftwareArchitecture #WebDevelopment
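The "C++" vs "C#" collision is easy to reproduce. The function below is a deliberately simplified stand-in for Django's slugify (drop non-word characters, lowercase, collapse spaces to hyphens), which behaves the same way on these inputs:

```python
import re

def slugify(value: str) -> str:
    # Simplified slugify: drop anything that isn't a word character, space,
    # or hyphen; lowercase; then collapse runs of spaces/hyphens to one hyphen.
    value = re.sub(r"[^\w\s-]", "", value).strip().lower()
    return re.sub(r"[-\s]+", "-", value)

print(slugify("My Post"))  # "my-post" -- looks like a 1:1 mapping...
print(slugify("C++"))      # "c"
print(slugify("C#"))       # "c"  -- collision! two distinct names, one slug

# Querying by slug here would silently merge two different tags:
tags = {"C++", "C#"}
slugs = {slugify(t) for t in tags}
print(len(tags), len(slugs))  # 2 distinct names, 1 distinct slug
```

This is exactly why slug-based lookups are fine for routing but unsafe wherever the exact name is the identity.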
Ever changed a variable in JavaScript only to realize you accidentally broke the original data too? 🤦♂️ That's the classic shallow vs. deep copy trap.

Here is the "too long; didn't read" version:

1. Shallow Copy (the surface level)
When you use the spread operator [...arr] or {...obj}, you're only copying the top layer. The catch: any objects or arrays nested inside are still linked to the original. Use it for simple, flat data.

2. Deep Copy (the full clone)
This creates a 100% independent copy of everything, no matter how deep the nesting goes.
The easy way: const copy = structuredClone(original);
The old way: JSON.parse(JSON.stringify(obj)); (it works, but it silently drops functions and undefined, and turns Dates into plain strings).

The rule of thumb: if your object has "layers" (objects inside objects), go with a deep copy. If it's just a flat list or object, a shallow copy is faster and cleaner.

Keep your data immutable and your hair un-pulled. ✌️

#Javascript #WebDev #Coding #ProgrammingTips
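A quick Node sketch of the trap and both fixes (structuredClone needs Node 17+ or a modern browser; the data is made up):

```javascript
const original = { name: "Ada", scores: { math: 90 } };

// Shallow copy: the top level is new, but nested objects are shared references.
const shallow = { ...original };
shallow.scores.math = 10;
console.log(original.scores.math); // 10 -- the original was mutated too!

// Deep copy: every level is independent.
const source = { name: "Ada", scores: { math: 90 } };
const deep = structuredClone(source);
deep.scores.math = 10;
console.log(source.scores.math); // 90 -- the original is untouched

// The JSON round-trip's known pitfall: Dates come back as strings.
const withDate = { when: new Date(0) };
const roundTripped = JSON.parse(JSON.stringify(withDate));
console.log(roundTripped.when instanceof Date); // false -- it's a string now
```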
Most web scrapers fail because of what you didn't do before coding.

I've debugged countless scraping scripts that broke within days of deployment. The issue? Engineers skipped the reconnaissance phase. Before writing selectors or handling responses, I spend 30 minutes analyzing:

How content loads (static HTML vs JavaScript rendering)
Inspect the Network tab. If critical data appears in XHR/Fetch calls, you're dealing with dynamic content. Scraping the initial HTML will return empty shells.

Pagination and infinite scroll patterns
Does the site use query parameters, POST requests, or lazy loading? Understanding this determines whether you scrape URLs or reverse-engineer API calls.

DOM structure consistency
Check multiple pages. If class names change or IDs are auto-generated hashes, your selectors will break. Look for stable semantic tags or data attributes instead.

Rate limiting and anti-bot signals
Open DevTools and watch request headers. The presence of tokens, fingerprinting scripts, or CAPTCHAs means you need rotation strategies before you start.

This upfront analysis has saved me from rewriting scrapers multiple times. It turns scraping from guesswork into engineering. The best code is code you don't have to rewrite.

What's your first step before building a scraper?

#WebScraping #Automation #PythonEngineering #QAEngineering #DataEngineering #DevOps
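Once recon shows the site uses query-parameter pagination, the scraping side reduces to generating and verifying URLs. A stdlib-only sketch (the base URL and parameter names are hypothetical, to be replaced with whatever the Network tab actually shows):

```python
from urllib.parse import urlencode, urlparse, parse_qs

BASE = "https://example.com/products"  # hypothetical target

def page_url(base: str, page: int, per_page: int = 50) -> str:
    """Build a query-parameter pagination URL."""
    return f"{base}?{urlencode({'page': page, 'per_page': per_page})}"

urls = [page_url(BASE, p) for p in range(1, 4)]
for u in urls:
    print(u)

# Recon check in reverse: confirm a captured URL really is query-param paginated.
params = parse_qs(urlparse(urls[2]).query)
print(params["page"])  # ['3']
```

If the Network tab shows POST bodies or cursor tokens instead of page numbers, this whole approach changes, which is exactly why the 30 minutes of analysis comes first.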
Most web scrapers fail because they skip the analysis phase.

I've seen teams spend weeks fixing scrapers that break every few days. The root cause? They started coding before understanding the site's architecture.

Here's what I do before writing any scraping logic:

Inspect the DOM structure thoroughly. Identify stable selectors like data attributes or semantic HTML tags. CSS classes change often, IDs are more reliable, but data attributes are gold.

Analyze network traffic in DevTools. Many sites load content through API calls after the initial page render. Scraping the API directly is faster, cleaner, and more stable than parsing rendered HTML.

Check for JavaScript rendering requirements. If content appears only after JS execution, you need headless browsers or API interception. Static requests won't work.

Identify anti-scraping mechanisms early. Rate limits, CAPTCHAs, request signatures, TLS fingerprinting. Discovering these after deployment is expensive.

Document pagination and dynamic loading patterns. Infinite scroll, lazy loading, token-based pagination. Each requires a different strategy.

This analysis phase takes 2-3 hours but saves weeks of maintenance. Your scraper's reliability depends more on understanding the system than on your code quality.

What's your first step when analyzing a new scraping target?

#WebScraping #DataEngineering #Python #Automation #QA #SoftwareTesting
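The "scrape the API directly" point, side by side. Both snippets below extract the same product names; the HTML and JSON payloads are made up for illustration, but they mirror what you'd see in the Elements tab vs the Network tab:

```python
import json
from html.parser import HTMLParser

# The same product list as rendered HTML and as the XHR payload behind it.
HTML = '<ul><li class="p">Widget</li><li class="p">Gadget</li></ul>'
API_JSON = '{"products": [{"name": "Widget"}, {"name": "Gadget"}]}'

# HTML route: stateful parsing, coupled to markup that can change any deploy.
class ProductParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_item, self.names = False, []

    def handle_starttag(self, tag, attrs):
        if tag == "li" and ("class", "p") in attrs:
            self.in_item = True

    def handle_endtag(self, tag):
        if tag == "li":
            self.in_item = False

    def handle_data(self, data):
        if self.in_item:
            self.names.append(data)

parser = ProductParser()
parser.feed(HTML)
print(parser.names)  # ['Widget', 'Gadget']

# API route: one expression, stable as long as the endpoint contract holds.
names = [p["name"] for p in json.loads(API_JSON)["products"]]
print(names)  # ['Widget', 'Gadget']
```

Same output, but one of these breaks when a designer renames a class and the other doesn't.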
"Set revalidate: 60 and move on."

This works in Next.js, until it doesn't. Now you're:
• serving stale data for up to 60s
• re-fetching even when nothing changed
• adding load for no reason

This is where caching stops being an optimization and becomes a question of who owns freshness.

At a high level:
Static rendering → speed
Dynamic rendering → freshness
Caching + revalidation → where we balance the two

But this abstraction starts to break down at scale.

👉 Time-based revalidation (revalidate) periodically refetches data. This works well when data becomes stale at predictable intervals. Think dashboards, blogs, or analytics snapshots.

👉 On-demand revalidation (revalidateTag, revalidatePath) flips the model: don't blindly revalidate, react to change. In one of our systems, moving from time-based to event-driven invalidation:
• reduced redundant fetches significantly
• made cache behavior predictable under load
This becomes the default once writes are frequent.

👉 Full Route Cache vs Data Cache
• Full Route Cache → caches the rendered output
• Data Cache → caches the underlying fetch calls
That separation is powerful: don't rebuild the entire page, refresh just the data.

🧠 My takeaway: stop thinking in time, start thinking in events. Instead of "revalidate every X seconds", ask "what event should make this data stale?"

❓ Interested to hear how this plays out in write-heavy or multi-region setups.

#NextJS #Caching #ReactJS #WebDevelopment #FullStack #JavaScript #SoftwareEngineering #SystemDesign #FrontendDevelopment
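A sketch of the event-driven flow, assuming the Next.js App Router. The endpoint URL, the "orders" tag, and the saveOrder helper are all hypothetical; next: { tags } and revalidateTag are the framework's tag-based Data Cache APIs. The read path tags its cached fetch, and the write path invalidates that tag exactly when the data changes:

```typescript
// Read path (e.g. inside a Server Component or route handler):
// the fetch result is cached and addressable by the "orders" tag.
async function getOrders() {
  const res = await fetch("https://api.example.com/orders", {
    next: { tags: ["orders"] }, // no timer: cached until the tag is revalidated
  });
  return res.json();
}

// Write path (e.g. app/api/orders/route.ts): the mutation owns freshness.
import { revalidateTag } from "next/cache";

export async function POST(request: Request) {
  await saveOrder(await request.json()); // hypothetical persistence helper
  revalidateTag("orders"); // invalidate the cached fetch the moment data changes
  return Response.json({ ok: true });
}
```

This split is what turns "revalidate every 60 seconds" into "revalidate when an order is written": no staleness window, no redundant refetches while nothing changes.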