🎬 𝘖𝘯𝘦 𝘤𝘰𝘯𝘤𝘦𝘱𝘵. 𝘍𝘰𝘶𝘳 𝘧𝘰𝘳𝘮𝘴. 𝘐𝘯𝘧𝘪𝘯𝘪𝘵𝘦 𝘶𝘴𝘦 𝘤𝘢𝘴𝘦𝘴.

You've seen what Streams are. Now let's meet each one properly.

━━━━━━━━━━━━━━━
🧵 Node.js Streams
𝗧𝗵𝗲 𝟰 𝗙𝘂𝗻𝗱𝗮𝗺𝗲𝗻𝘁𝗮𝗹 𝗦𝘁𝗿𝗲𝗮𝗺𝘀 𝗘𝘅𝗽𝗹𝗮𝗶𝗻𝗲𝗱
━━━━━━━━━━━━━━━

1️⃣ 𝗥𝗲𝗮𝗱𝗮𝗯𝗹𝗲 𝗦𝘁𝗿𝗲𝗮𝗺
Data flows IN. You consume it chunk by chunk.
Example → fs.createReadStream()

Key events:
→ 𝗱𝗮𝘁𝗮 — fires every time a chunk arrives
→ 𝗲𝗻𝗱 — fires when there's nothing left to read
→ 𝗲𝗿𝗿𝗼𝗿 — fires if something goes wrong

2️⃣ 𝗪𝗿𝗶𝘁𝗮𝗯𝗹𝗲 𝗦𝘁𝗿𝗲𝗮𝗺
Data flows OUT. You push it somewhere.
Example → fs.createWriteStream()

Key functions:
→ 𝘄𝗿𝗶𝘁𝗲() — sends a chunk to the destination
→ 𝗲𝗻𝗱() — signals no more data will be written
→ 𝗳𝗶𝗻𝗶𝘀𝗵 event — fires when all data has been flushed

3️⃣ 𝗗𝘂𝗽𝗹𝗲𝘅 𝗦𝘁𝗿𝗲𝗮𝗺
Both Readable AND Writable at the same time.
Example → net.Socket (TCP connections)

A socket can receive data AND send data simultaneously.
Two lanes. One stream. ⚡

4️⃣ 𝗧𝗿𝗮𝗻𝘀𝗳𝗼𝗿𝗺 𝗦𝘁𝗿𝗲𝗮𝗺
Data comes in → gets modified → goes out.
Example → zlib.createGzip()

Reads raw data, compresses it on the fly, outputs compressed chunks.
No need to load the full file. No memory spike. 🔥

Key function:
→ 𝗽𝗶𝗽𝗲() — available on every Readable (Transforms included), it chains streams together like a pipeline

━━━━━━━━━━━━━━━
𝗧𝗵𝗲 𝗯𝗶𝗴 𝗽𝗶𝗰𝘁𝘂𝗿𝗲:
━━━━━━━━━━━━━━━

Readable → Transform → Writable

That's a full streaming pipeline.
Read a file → compress it → write it to disk.
All in one chain. All chunk by chunk. All production grade. 🚀

Which of the 4 have you used in your projects? Drop it below. 👇

#NodeJS #BackendDevelopment #JavaScript #Streams #Developer
Yaswant Kumar S’ Post
More Relevant Posts
Your database is lying to you… and you don’t even know it 👀

Most bugs in production aren’t because of bad queries — they happen because your transactions aren’t designed right ⚠️
And once data breaks, you can’t “debug” it easily 🔥

Transaction ≠ ACID Properties

Transaction → A logical unit of work executed in sequence 🧩
ACID Properties → Rules that guarantee your data won’t break under real-world conditions 🛡️

When building real systems, you don’t just use transactions — you rely on ACID to handle consistency, concurrency, and failure scenarios ⚙️

Atomicity → All or nothing (no partial updates) 💥
Consistency → Data stays valid before and after execution ✅
Isolation → Parallel transactions don’t mess with each other 🔒
Durability → Once saved, always saved (even after crashes) 💾

Here’s where most devs mess up ↓
You think “my query works” = system is correct ❌

But in production:
– Multiple users hit your DB at the same time 🌍
– Network failures happen 🌐
– Partial writes can corrupt data 💣

That’s where transaction states matter:
Active → Queries are running ⚡
Partially Committed → Changes are in memory (not permanent yet) 🧠
Committed → Changes are safely stored 📦
Failed → Something broke mid-way ❗
Aborted → Rollback happened, DB restored 🔄
Terminated → Transaction is done (success or failure) 🏁

This small distinction changes how you design systems.
You stop thinking in queries… and start thinking in failure scenarios 🧠

Building systems > memorizing concepts 🚀

What’s one concept developers often misunderstand? 🤔

#fullstackdeveloper #softwareengineering #webdevelopment #javascript #reactjs #backend #buildinpublic #nodejs #nextjs #typescript
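The "all or nothing" idea behind Atomicity can be sketched in plain JavaScript. This is not a real database transaction, just the shape of the guarantee: work on a copy, and either the whole change replaces the old state (commit) or the old state survives untouched (rollback). The account names and the transfer scenario are invented for illustration.

```javascript
// Sketch of atomicity: all updates apply, or none do.
// Not a real DB; account names are made up.
function transfer(accounts, from, to, amount) {
  const next = { ...accounts };          // work on a copy: our "transaction"
  next[from] -= amount;
  if (next[from] < 0) {
    // abort: throwing means the caller never sees the partial update
    throw new Error('rolled back: insufficient funds');
  }
  next[to] += amount;
  return next;                           // commit: replace old state in one step
}
```

A real engine does this with write-ahead logs and locks rather than object copies, but the contract it exposes to you is the same: no partial updates ever become visible.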
The Try-Catch Time Trap: Why Do Async Errors Escape?

Let’s look at this code that reads and parses an invalid JSON file.

Sync code:

try {
  const data = fs.readFileSync("invalid.json", "utf-8");
  const jsonData = JSON.parse(data);
} catch (err) {
  console.log("error caught:", err.message); // catches parsing error
}

All good.

---

Async code:

try {
  fs.readFile("invalid.json", "utf-8", (err, data) => {
    const jsonData = JSON.parse(data);
  });
} catch (err) {
  console.log("error caught:", err.message); // does NOT even run
}

Catch block won't execute.

Now the question is…
👉 Why?

---

Here’s how I started thinking about it:

If JS finds an error
→ it stops execution
→ looks for a catch block in the current call stack
→ if not found, it bubbles up the stack

---

In sync code:
👉 Everything runs in one continuous stack
→ error happens
→ catch block is right there
→ so it works

---

But async changes things.

When this line runs:
fs.readFile(..., callback)

👉 JS does NOT execute the callback immediately

Instead:
→ it registers the callback
→ hands it to the event loop
→ and moves on

---

Now the important part 👇
👉 The current call stack finishes execution
→ which means the try-catch is gone

---

Later…
👉 when file reading is done
→ the event loop pushes the callback onto the call stack
→ the callback runs

But now:
👉 this is a new call stack
And the old try-catch?
👉 already gone.

---

So when the error happens inside the callback:
❌ there is no catch block anymore

---

That’s when it clicked for me:
👉 try-catch works only within the same execution stack

Not across time. Not across async boundaries.

#JavaScript #Node #Programming #ErrorHandling #Interview #Eventloop #Callstack
GraphQL Series — Day 3

Now that we understand Types… let’s talk about the most powerful feature in GraphQL — Queries 👇

👉 Queries are used to fetch data from the server
👉 You control what data you get
👉 No extra fields, no unnecessary requests

💡 Think of it like this:
Instead of multiple API calls… you get everything in one structured request

🔍 How Queries Work
1️⃣ Client sends a query
2️⃣ Server validates it using the schema
3️⃣ Resolvers fetch the required data
4️⃣ Only the requested data is returned

🧠 Key Things to Remember
✔ Always request specific fields
✔ If it’s an object → ask for its fields
✔ Use arguments to fetch precise data
✔ Queries can be nested (real power 💪)

⚡ Why Queries are Powerful
✔ Single request → multiple data
✔ Reduces network calls
✔ Cleaner & predictable responses
✔ Better performance for frontend apps

📘 Follow for more frontend insights 🚀

#GraphQL #Frontend #FrontendDevelopment #WebDevelopment #JavaScript #ReactJS #APIs #TechLearning #LearnInPublic #DevCommunity #FrontendEngineer #100DaysOfCode
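Here is what such a query can look like. Everything is illustrative: the field names, the $id argument, and the /graphql endpoint are assumptions, not from any particular API.

```javascript
// Illustrative query: specific fields, an argument, and nesting,
// all in one request. Field names and endpoint are assumptions.
const query = `
  query GetUser($id: ID!) {
    user(id: $id) {
      name
      email
      posts {        # nested selection: user -> posts in ONE request
        title
      }
    }
  }
`;

// Sent as a single POST instead of several REST calls:
// fetch('/graphql', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify({ query, variables: { id: '42' } }),
// });
```

The response mirrors the query's shape exactly, which is why GraphQL responses feel predictable: you never get a field you didn't ask for.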
GraphQL Series — Day 5

Now that your server is running… let’s define how your API actually works 👇

👉 Define your data structure using typeDefs
👉 Understand how schema shapes your API
👉 Use resolvers to return real data

💡 Think of it like this:
typeDefs = blueprint 🧩
resolvers = execution ⚙️

🔍 What We’re Doing Today
1️⃣ Create schema using typeDefs
2️⃣ Define custom types (Product, User, Order)
3️⃣ Add Query as entry point
4️⃣ Write resolvers to fetch data

🧪 What You’ll See
👉 Schema → defines available data & structure
👉 Resolvers → return actual data
💡 Structure + Logic = Working API

⚠️ Important
✔ Schema defines what you can query
✔ Resolvers define what gets returned
✔ Names must match exactly

🧠 Key Things to Remember
✔ typeDefs = structure of API
✔ Query = entry point
✔ Resolvers connect schema to data
✔ GraphQL flow: Query → Schema → Resolver → Response

⚡ Why This Matters
✔ You now control your API structure
✔ You can define exactly what clients can query
✔ Foundation for real-world GraphQL apps

Follow for more frontend insights 📘

#GraphQL #Frontend #FrontendDevelopment #WebDevelopment #JavaScript #ReactJS #APIs #TechLearning #LearnInPublic #DevCommunity #FrontendEngineer #100DaysOfCode
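The typeDefs/resolvers split can be sketched with plain objects, no server library required. This is only a sketch of the shape a GraphQL server (e.g. Apollo) expects: the Product type, its fields, and the sample data are all invented for illustration.

```javascript
// typeDefs = blueprint: what clients CAN query.
// resolvers = execution: what actually gets returned.
const typeDefs = `
  type Product {
    id: ID!
    name: String!
    price: Float!
  }

  type Query {
    products: [Product!]!   # the entry point
  }
`;

// Stand-in data source for the sketch.
const db = [{ id: '1', name: 'Keyboard', price: 49.99 }];

const resolvers = {
  Query: {
    products: () => db,     // resolver name matches the schema field exactly
  },
};
```

The "names must match exactly" rule above is visible here: the `products` key under `resolvers.Query` must be spelled exactly like the `products` field under `type Query`, or the server has no function to call for that field.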
GraphQL Series — Day 2

Now that we know what GraphQL is… let’s understand the foundation of it — Types 👇

👉 GraphQL is a strongly typed system
👉 Every piece of data has a clearly defined type
👉 This makes APIs predictable and easier to work with

💡 Think of it as a blueprint that defines what data looks like.

🧠 Types in GraphQL

1. Scalars (basic values)
✔ String
✔ Int
✔ Float
✔ Boolean
✔ ID
👉 These are the simplest building blocks.

2. Object Types
✔ Used to define structured data
✔ Similar to objects in JavaScript
Example idea: A User type can have name, email, and id

3. Nested Types
✔ Types can reference other types
✔ Helps represent real-world relationships
👉 Example: User → Posts → Comments

4. Non-Null (!) Types
✔ Ensures a field must always have a value
✔ Prevents unexpected null errors
👉 Makes your API more reliable

⚡ Why Types Matter
✔ Clear contract between frontend & backend
✔ Better developer experience
✔ Fewer runtime surprises
✔ Easier to scale APIs

Follow for more frontend insights 📘

#GraphQL #Frontend #WebDevelopment #APIs #JavaScript #ReactJS #TechLearning #LearnInPublic #DevCommunity #FrontendEngineer #100DaysOfCode
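All four ideas fit in one small hypothetical schema (held here as a plain string; the type and field names are invented for illustration):

```javascript
// Hypothetical schema: scalars, object types, nesting, and non-null (!).
const typeDefs = `
  type User {
    id: ID!               # non-null scalar: must always have a value
    name: String!
    age: Int              # nullable scalar: may be absent
    rating: Float
    active: Boolean
    posts: [Post!]!       # nested: User -> Posts
  }

  type Post {
    id: ID!
    title: String!
    comments: [Comment!]  # nested: Posts -> Comments
  }

  type Comment {
    id: ID!
    text: String!
  }
`;
```

Note how `[Post!]!` stacks two guarantees: the list itself is never null, and no element inside it is null either. That distinction is easy to miss and is exactly the kind of contract strong typing makes explicit.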
“set (𝙧𝙚𝙫𝙖𝙡𝙞𝙙𝙖𝙩𝙚: 𝟲𝟬) and move on”

This works in Next.js — until it doesn’t.

Now you’re:
• serving stale data for up to 60s
• re-fetching even when nothing changed
• adding load for no reason

This is where caching stops being an optimization — and becomes about 𝘄𝗵𝗼 𝗼𝘄𝗻𝘀 𝗳𝗿𝗲𝘀𝗵𝗻𝗲𝘀𝘀.

At a high level:
𝗦𝘁𝗮𝘁𝗶𝗰 𝗿𝗲𝗻𝗱𝗲𝗿𝗶𝗻𝗴 → speed
𝗗𝘆𝗻𝗮𝗺𝗶𝗰 𝗿𝗲𝗻𝗱𝗲𝗿𝗶𝗻𝗴 → freshness
𝗖𝗮𝗰𝗵𝗶𝗻𝗴 + 𝗿𝗲𝘃𝗮𝗹𝗶𝗱𝗮𝘁𝗶𝗼𝗻 → where we balance the two

But this abstraction starts to break down at scale.

👉 𝗧𝗶𝗺𝗲-𝗯𝗮𝘀𝗲𝗱 𝗿𝗲𝘃𝗮𝗹𝗶𝗱𝗮𝘁𝗶𝗼𝗻 (𝗿𝗲𝘃𝗮𝗹𝗶𝗱𝗮𝘁𝗲) periodically refetches data
This works well when data becomes stale in predictable intervals
Think dashboards, blogs, or analytics snapshots

👉 𝗢𝗻-𝗱𝗲𝗺𝗮𝗻𝗱 𝗿𝗲𝘃𝗮𝗹𝗶𝗱𝗮𝘁𝗶𝗼𝗻 (𝗿𝗲𝘃𝗮𝗹𝗶𝗱𝗮𝘁𝗲𝗧𝗮𝗴, 𝗿𝗲𝘃𝗮𝗹𝗶𝗱𝗮𝘁𝗲𝗣𝗮𝘁𝗵) flips the model
Don't blindly revalidate — react to change

In one of our systems, moving from time-based to event-driven invalidation:
• reduced redundant fetches significantly
• cache behavior became predictable under load

This becomes the default once writes are frequent.

👉 𝗙𝘂𝗹𝗹 𝗥𝗼𝘂𝘁𝗲 𝗖𝗮𝗰𝗵𝗲 𝘃𝘀 𝗗𝗮𝘁𝗮 𝗖𝗮𝗰𝗵𝗲
• Full Route Cache → caches the rendered output
• Data Cache → caches the underlying fetch calls

That separation is powerful:
Don't rebuild the entire page — refresh just the data

🧠 𝗠𝘆 𝘁𝗮𝗸𝗲𝗮𝘄𝗮𝘆
𝗦𝘁𝗼𝗽 𝘁𝗵𝗶𝗻𝗸𝗶𝗻𝗴 𝗶𝗻 𝘁𝗶𝗺𝗲 → 𝘀𝘁𝗮𝗿𝘁 𝘁𝗵𝗶𝗻𝗸𝗶𝗻𝗴 𝗶𝗻 𝗲𝘃𝗲𝗻𝘁𝘀

Instead of → “𝘳𝘦𝘷𝘢𝘭𝘪𝘥𝘢𝘵𝘦 𝘦𝘷𝘦𝘳𝘺 𝘟 𝘴𝘦𝘤𝘰𝘯𝘥𝘴”
Think → “𝘸𝘩𝘢𝘵 𝘦𝘷𝘦𝘯𝘵 𝘴𝘩𝘰𝘶𝘭𝘥 𝘮𝘢𝘬𝘦 𝘵𝘩𝘪𝘴 𝘥𝘢𝘵𝘢 𝘴𝘵𝘢𝘭𝘦?”

❓Interested to hear how this plays out in write-heavy or multi-region setups.

#NextJS #Caching #ReactJS #WebDevelopment #FullStack #JavaScript #SoftwareEngineering #SystemDesign #FrontendDevelopment
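The time-to-events shift can be shown without Next.js at all. Below is a tiny plain-JavaScript simulation of tag-based invalidation, in the spirit of revalidateTag: entries go stale when a write event fires, not when a timer expires. The class, keys, and tags are all invented for illustration; it is not the Next.js API.

```javascript
// Plain-JS simulation of event-driven (tag-based) cache invalidation.
// Not the Next.js API; a sketch of the model behind revalidateTag.
class TagCache {
  constructor() {
    this.store = new Map();
  }
  set(key, value, tags = []) {
    this.store.set(key, { value, tags, stale: false });
  }
  get(key) {
    const entry = this.store.get(key);
    return entry && !entry.stale ? entry.value : undefined; // stale = cache miss
  }
  revalidateTag(tag) {
    // the "event": a write happened, so everything carrying this tag is stale
    for (const entry of this.store.values()) {
      if (entry.tags.includes(tag)) entry.stale = true;
    }
  }
}
```

Until the event fires, every read is a hit with zero refetching; after it fires, exactly the affected entries miss. A time-based cache can only approximate that trade-off.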
If you have UserTable, ProductTable, and OrderTable that differ only in their data type — you've written the same component three times.

TypeScript generics can collapse all three into one:

// Three copies, three maintenance points
const UserTable = ({ users }: { users: User[] }) => ...
const ProductTable = ({ products }: { products: Product[] }) => ...
const OrderTable = ({ orders }: { orders: Order[] }) => ...

// One generic component
function Table<T>({ data, columns }: {
  data: T[];
  columns: ColumnDef<T>[];
}) {
  return data.map((row, i) => (
    <Row key={i} row={row} columns={columns} />
  ));
}

Same component, full type inference. TypeScript infers T from the data prop — you never write the type argument explicitly. If a column accessor references a field that doesn't exist on T, TypeScript catches it at compile time, not at runtime.

This is how TanStack Table v8 is built — the core Table<TData> type carries the row data type through columns, rows, cells, and sorting logic. Every accessor is type-checked against TData automatically.

One syntax note for TSX files: arrow function generics need a trailing comma to avoid JSX parser ambiguity:

const Component = <T,>({ data }: { data: T }) => ...

When this doesn't apply:
• One-off components that won't be reused — generics add cognitive overhead
• Components that differ in behavior, not just type — composition handles that better
• Teams new to TypeScript generics — make the duplication visible first, then extract

Do you have component files that look suspiciously identical except for their prop types?

#TypeScript #ReactDevelopment #JavaScript #FrontendEngineering #Generics
Your FastAPI backend is fast to build. But is it fast to run?

Most developers find out the answer at the worst possible moment: when real users hit it at the same time.

Endpoints slow down. Requests pile up. Users drop off.
Not because the code is wrong. Because it is blocking.

Here is what blocking actually looks like in production:

Your user hits an endpoint. FastAPI calls the database. That query takes 200ms.
During those 200ms your server is frozen. Not slow. Frozen.
Every other request sits in a queue waiting for that one query to finish.

100 users hit your API at the same time.
User 1 gets served. Users 2 to 100 wait in line.

That is sync. That is blocking I/O.

FastAPI was built to never work that way.

With async/await, while your database query runs in the background, your server is already picking up the next request. And the next. And the next.
200ms of database wait becomes invisible to every other user.

In real backend terms:

SYNC — blocks:

def get_orders(user_id: int):
    return db.query(user_id)

ASYNC — non-blocking:

async def get_orders(user_id: int):
    return await db.query(user_id)

Same logic. Same database. Same server.
But now 100 users get served in the time it used to take to serve 1.

This matters even more when your endpoints call external services:
1. Payment gateway: 300ms wait.
2. AI model response: 2 to 3 seconds wait.
3. Email service: 500ms wait.

With sync, every user feels every millisecond of every one of those waits.
With async, none of them do.

FastAPI gives you non-blocking I/O natively. No extra setup. No plugins. No workarounds.
Just write async. Add await. Let FastAPI handle the rest.

Your backend was already fast to build. Now make it fast to run.

Are you using async endpoints in your FastAPI projects? 👇

#FastAPI #Python #BackendDevelopment #AsyncProgramming #SoftwareEngineering #APIDesign #PythonDeveloper #WebDevelopment #TechIn2026 #BuildInPublic
JavaScript was getting messy.

Our team was building a complex data visualization tool, and the codebase was quickly becoming a tangled mess. State management was a nightmare, and we were constantly worried about data integrity. It was painful.

Then we decided to go all-in on TypeScript for the core logic. We started modeling our complex data structures using generics and mapped types. It felt like overkill at first, but suddenly, operations started *making sense*. Type safety wasn't just a buzzword; it was a safety net.

The result? The component shipped on time. More importantly, we had unprecedented confidence in its correctness. Future updates and feature additions? They're now dramatically faster and far less prone to bugs.

It turns out, strong typing isn't just for preventing errors; it's a massive accelerant for development.

What's your experience with TypeScript in complex projects?

#typescript #frontenddevelopment #javascript #datascience #softwareengineering
The Node.js ORM landscape is evolving! 🌐 Dive into our comprehensive analysis comparing Prisma, Drizzle, TypeORM, MikroORM, and Sequelize. Make informed decisions for your projects by understanding the trade-offs in abstraction, performance, and developer experience. Don't miss out on optimizing your development workflow! Read more here: https://lnkd.in/g4zsEu36 #NodeJS #ORM #SoftwareDevelopment #TechTrends