Day 86 of reading random, basic, but important dev topics. After TextDecoder, I read about TextEncoder.

Yesterday I covered how TextDecoder turns raw binary bytes into readable strings. But what happens when we need to send a string over a WebSocket, hash it with the Web Crypto API, or write it to a low-level file stream? We have to do the reverse: convert the string into a Uint8Array of bytes. For this, we use TextEncoder.

Unlike TextDecoder, which supports various encodings, TextEncoder is strictly standardized to support only UTF-8. This ensures cross-platform consistency for modern web applications.

let encoder = new TextEncoder();
let uint8Array = encoder.encode("Hello");
console.log(uint8Array); // Uint8Array(5) [72, 101, 108, 108, 111]

1. The encodeInto() optimization

The standard .encode(str) method creates a brand-new Uint8Array every time it's called. If you are encoding strings inside a tight loop (e.g., a game engine or a high-throughput data parser), allocating new memory repeatedly will put pressure on the garbage collector and cause performance spikes.

Instead, use encodeInto(str, destination). This method takes your string and writes the bytes directly into an existing Uint8Array buffer that you have already allocated.

let encoder = new TextEncoder();
let preAllocatedBuffer = new Uint8Array(256); // allocate memory once

// Encode directly into the existing buffer
let result = encoder.encodeInto("Hello", preAllocatedBuffer);
console.log(result.read);    // 5 (UTF-16 code units read from the string)
console.log(result.written); // 5 (bytes written into the buffer)

This zero-allocation technique is a massive win for high-performance applications!

Understanding ArrayBuffer, Uint8Array, TextDecoder, and TextEncoder gives you full control over how memory and data flow through your JavaScript applications. Keep learning!

#JavaScript #Nodejs #FrontendDevelopment #WebSockets #WebAssembly #CodingTips
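One subtlety worth a sketch of my own (not from the post above): `read` counts UTF-16 code units while `written` counts UTF-8 bytes, so the two diverge for non-ASCII text — and when the destination buffer is too small, encodeInto stops at the last whole character that fits rather than throwing or writing a partial byte sequence.

```javascript
const encoder = new TextEncoder();

// "é" is 1 UTF-16 code unit but 2 UTF-8 bytes, so read !== written
const buf = new Uint8Array(8);
const r1 = encoder.encodeInto("café", buf);
console.log(r1); // { read: 4, written: 5 }

// With a too-small destination, encodeInto stops at the last whole
// character that fits instead of throwing
const tiny = new Uint8Array(3);
const r2 = encoder.encodeInto("café", tiny);
console.log(r2); // { read: 3, written: 3 } — the "é" didn't fit
```

In a real streaming loop you would use `read` to know where to resume in the source string on the next call.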
TextEncoder: Converting Strings to Uint8Array for WebSockets and More
More Relevant Posts
Memoization is not a "free" performance win. ⚡

It's tempting to wrap every calculation in useMemo or every function in useCallback. But in a large-scale React application, this can backfire.

The cost of "optimization":
Every time you use these hooks, you aren't just saving a calculation. You are:
1. Increasing memory usage: React must store the previous value and the dependency array in memory.
2. Adding execution overhead: on every render, React must run a shallow comparison on every dependency.

If you are memoizing a simple .filter() on a 50-item list, the "optimization" overhead is often more expensive than the re-calculation itself.

When the trade-off makes sense:
✅ Heavy computation: expensive data processing (e.g., parsing large JSON or complex regex) that actually blocks the main thread.
✅ Referential integrity: when passing objects or functions to children wrapped in React.memo. Without it, the child re-renders on every parent update, defeating the purpose of React.memo.
✅ Effect stability: when the value is a dependency of a useEffect that triggers an API call or a heavy subscription.

The strategy: don't guess, measure. I've started using the React Profiler to identify "wasted renders" before reaching for a hook. Often, the better fix isn't memoization but state colocation, or moving the state down the component tree.

In your current stack, are you seeing more performance gains from memoization or from better component composition? 👇

#ReactJS #WebPerformance #FrontendEngineering #JavaScript #ProgrammingTips #SoftwareDevelopment
I built a static site generator... that corrupts your content.

Introducing The Void SSG — a full-stack blog engine with a Lovecraftian twist.

You create sites, write markdown entries, and publish them. But here's the catch: your content gradually degrades with eldritch symbols the more you use it. Mention Cthulhu in a blog post? The system detects it and applies themed corruption. Different readers see different content based on their "sanity threshold." Navigation links get obfuscated or vanish entirely. The build process outputs ANSI-formatted narratives describing what happened to your content — like terminal horror fiction.

It's a CMS where the content fights back. A cursed journal that rewrites itself.

The tech stack:
🔹 Java 21 + Spring Boot 3.2 backend with virtual threads for async builds
🔹 React 19 + TypeScript + Vite frontend
🔹 Three.js WebGL fluid simulation for the UI background
🔹 Spring Shell CLI for terminal-based "rituals"
🔹 MySQL 8 + Flyway for persistence
🔹 Configurable entropy modes: Daily, User-Based, Cryptographic

Key features:
→ Regex-based detection for 7 Lovecraftian entities with unique side effects
→ Viewer-aware navigation that deterministically hides or renames links per visitor
→ Narrative build logs styled as terminal stories
→ Full REST API + interactive CLI + React UI

This was a fun exercise in combining software engineering with creative writing and worldbuilding. Sometimes the best way to learn a stack is to build something weird with it.

Check it out on GitHub 👇
https://lnkd.in/gn-nG5rC

#Java #SpringBoot #React #TypeScript #WebDev #SideProject #CreativeCoding #Lovecraft #OpenSource
🔍 Understanding JSON.parse() — What Works & What Breaks Instantly

One of the most common pitfalls in JavaScript is assuming that anything that looks like JSON can be parsed. But JSON.parse() is strict — and that's where many bugs begin.

✅ Valid JSON (parses successfully):
Objects: {"a":1}
Arrays: [1,2,3]
Strings: "hello" (must be in double quotes!)
Numbers, booleans, and null

❌ Invalid JSON (fails immediately):
Unquoted strings → hello
Unquoted keys → {a:1}
Single quotes → {'a':1}
Empty input → JSON.parse("") throws
Random text → INVALID

💡 Key insight: JSON is not JavaScript. It's a strict data format with clear rules:
Keys must be in double quotes
Strings must be in double quotes
No trailing commas, no shortcuts

🚨 Why this matters: when working with APIs, local storage, or backend data, a small formatting mistake can break your entire app.

👉 Think of JSON.parse() as a strict gatekeeper — if your data doesn't follow the exact rules, it won't even let you in.

#JavaScript #WebDevelopment #Frontend #Programming #CodingTips #JSON #Developers #reactjs #nodejs
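A small sketch of both sides of that gatekeeper (tryParse is my own helper name here, not a built-in): wrap JSON.parse in try/catch so strict-but-invalid input yields a fallback instead of crashing the app.

```javascript
// A safe wrapper: returns a fallback instead of throwing on bad input.
function tryParse(text, fallback = null) {
  try {
    return JSON.parse(text);
  } catch {
    return fallback;
  }
}

console.log(tryParse('{"a":1}'));  // { a: 1 } — valid object
console.log(tryParse('[1,2,3]'));  // [ 1, 2, 3 ]
console.log(tryParse('"hello"'));  // hello — a bare double-quoted string is valid JSON
console.log(tryParse('{a:1}'));    // null — unquoted key
console.log(tryParse("{'a':1}"));  // null — single quotes
console.log(tryParse(''));         // null — empty input
```

Note the design choice: swallowing the error is fine for optional data like localStorage, but for API responses you usually want to log the SyntaxError before falling back.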
🚀 JSON vs JavaScript Object: Simple Breakdown! 🚀

Ever wondered what the real difference is? Check this visual! 👇

JavaScript Object 💻: lives in memory, holds properties + functions (methods). Super flexible for your code!

JSON 📄: just a text format for data sharing. No functions, strict rules – perfect for APIs & storage!

JSON = data transport. JS Object = in-app powerhouse!

Dev friends, save this for your next API debug! What's your biggest JSON gotcha? Comment below! ⬇️

#WebDevelopment #JavaScript #JSON #Coding #Programming #Frontend #Developer #ReactJS #DevCommunity #LearnToCode #Innovation #Technology
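A quick round-trip makes the "no functions" rule tangible — this is my own minimal illustration, showing that serializing to JSON silently drops behavior and only the data survives:

```javascript
// A live JS object can hold behavior; JSON can only hold data.
const user = {
  name: "Ada",
  age: 36,
  greet() { return `Hi, I'm ${this.name}`; }, // a method — fine in JS
};

// Serializing silently drops the function (and undefined values)
const wire = JSON.stringify(user);
console.log(wire); // {"name":"Ada","age":36}

// What comes back is plain data: no greet(), no methods, no prototype
const parsed = JSON.parse(wire);
console.log(typeof parsed.greet); // undefined
```

This is the classic API gotcha: class instances sent over the wire come back as plain objects, so methods must be re-attached (or the data re-hydrated) on the receiving side.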
🚀 Day 20 – Deep vs Shallow Copy in JavaScript

Ever changed a copied object… and accidentally modified the original too? 😅 Yeah, that's the shallow copy trap. Let's fix that today 👇

🔹 Shallow Copy
Copies only the first level
👉 Nested objects still share the same reference

🔹 Deep Copy
Creates a fully independent clone
👉 No shared references, no unexpected bugs

💡 Real-world example (Angular devs 👇)
When working with forms, APIs, or state (NgRx), a shallow copy can silently mutate your original data — leading to hard-to-debug UI issues.

⚡ Best Ways to Deep Copy
✔️ structuredClone() (modern & recommended)
✔️ JSON.parse(JSON.stringify(obj)) (with limitations)
✔️ _.cloneDeep() (lodash)

🔥 TL;DR
Shallow Copy → Shares references
Deep Copy → Fully independent
Prefer structuredClone() whenever possible

💬 Have you ever faced a bug because of shallow copying? Drop your experience 👇

#JavaScript #Angular #WebDevelopment #Frontend #Programming #100DaysOfCode
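The trap and the fix in a few lines — a sketch of my own (structuredClone needs Node 17+ or a modern browser), including one of the JSON round-trip's limitations:

```javascript
const original = { name: "form", nested: { dirty: false } };

// Shallow copy: the first level is new, but nested objects are SHARED
const shallow = { ...original };
shallow.nested.dirty = true;
console.log(original.nested.dirty); // true — the original was mutated!

// Deep copy with structuredClone: fully independent clone
const original2 = { name: "form", nested: { dirty: false } };
const deep = structuredClone(original2);
deep.nested.dirty = true;
console.log(original2.nested.dirty); // false — original untouched

// One limitation of the JSON round-trip: non-JSON types get mangled
const copied = JSON.parse(JSON.stringify({ d: new Date(0) }));
console.log(typeof copied.d); // "string" — the Date became a string
```

structuredClone also handles Dates, Maps, Sets, and cyclic references correctly, which is why it's the recommended default — though it still throws on functions and DOM nodes.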
JavaScript type coercion isn't magic. It's a spec.

Every unexpected output follows the same rules, in the same order:
1. Identify the operator
2. If an object is involved → call valueOf(), then toString()
3. If the operator is + and either side is a string → concatenate
4. If the operator is -, *, / → convert both sides to numbers
5. Compute

That's it. Five steps. Runs every time.

---

Where this breaks production code:

[10] + [1] = "101"
→ Arrays are objects. valueOf() returns the array itself (not a primitive), so toString() runs next. [10].toString() = "10", [1].toString() = "1". Then + sees two strings. Concatenates.

[10] - [1] = 9
→ - forces ToNumber. [10] → 10, [1] → 1. Subtracts normally.

Boolean([]) = true
→ [] is not in the falsy list. Truthy.

"0" == false → true, but Boolean("0") → true
→ == coerces both sides to number first. Boolean() checks the falsy list directly.

input.value, localStorage, URLSearchParams
→ Always strings. Wrap with Number() or parseInt() before any arithmetic.

---

The complete falsy list (memorise this, not the edge cases):
false / 0 / -0 / 0n / "" / null / undefined / NaN
Eight values. Everything else is truthy.

---

Use === by default. Use == only when you explicitly want coercion — and you almost never do.

The spec is 20 years old. The bugs are still new because developers skip the fundamentals.

#JavaScript #WebDev #Frontend #SoftwareEngineering #Programming
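All of the claims above can be checked in a few lines — here they are as runnable code, including the always-a-string pitfall:

```javascript
// Arrays hit ToPrimitive: valueOf() returns the array (not a primitive),
// so toString() runs, and + then sees two strings.
console.log([10] + [1]);      // "101"
console.log([10] - [1]);      // 9 — "-" forces ToNumber on both sides

// == coerces to number first; Boolean() checks the falsy list directly
console.log("0" == false);    // true  (both sides become 0)
console.log(Boolean("0"));    // true  ("0" is not in the falsy list)

// DOM/storage/URL values are always strings — convert before arithmetic
const raw = "41";             // e.g. what input.value would hand you
console.log(raw + 1);         // "411" — oops, concatenation
console.log(Number(raw) + 1); // 42
```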
🧪 Memory Leak Laboratory: an open-source tool for JavaScript/TypeScript developers

One of the most common reasons a Node.js app slows down over time with no obvious cause is a memory leak hiding inside the code we write every day.

A friend and I built js-leak-lab as a hands-on learning tool, not just another article you read and imagine your way through.

What you can do in this lab 🔬
• Simulate 20 memory leak patterns: unbounded arrays, closures holding references longer than they should, event listeners that never get removed, and more
• Toggle leaks on and off instantly and watch the heap spike in real time
• Compare Bad Code vs Good Code side-by-side, with both actually runnable
• Monitor heap, RSS, and external memory through live gauges and charts updated via WebSocket

⚙️ Stack: Bun + Bun.serve(), Tailwind CSS, Chart.js, Prism.js. No frontend build step; open it and it just works.

🐳 Docker-ready with memory limit support. Since this lab simulates real leaks, setting a RAM cap is strongly recommended.

Built for developers who want to understand how memory leaks actually happen and how to fix them correctly, before production figures it out for you.

🔗 GitHub: https://lnkd.in/gRTGUq2C
🌐 Demo: https://lnkd.in/gtjBDp9a

#JavaScript #TypeScript #OpenSource #WebDevelopment #NodeJS #MemoryLeak #Bun #DevTools
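Not from the repo itself, but here is my own minimal illustration of one pattern the lab covers — an unbounded module-level cache — next to a naive "good code" version with a size cap (the FIFO eviction is deliberately simplistic; real code would use an LRU):

```javascript
// Classic leak: a module-level cache that only ever grows.
const cache = new Map();
function leakyLookup(key) {
  if (!cache.has(key)) cache.set(key, { payload: new Array(1000).fill(key) });
  return cache.get(key);
}

// Fix sketch: bound the cache. Map iterates in insertion order,
// so deleting the first key evicts the oldest entry (FIFO).
const MAX_ENTRIES = 100;
const boundedCache = new Map();
function boundedLookup(key) {
  if (!boundedCache.has(key)) {
    if (boundedCache.size >= MAX_ENTRIES) {
      boundedCache.delete(boundedCache.keys().next().value); // evict oldest
    }
    boundedCache.set(key, { payload: new Array(1000).fill(key) });
  }
  return boundedCache.get(key);
}

for (let i = 0; i < 1000; i++) { leakyLookup(i); boundedLookup(i); }
console.log(cache.size);        // 1000 — grows without limit
console.log(boundedCache.size); // 100  — capped
```

The leaky version never shows up as an error; it just makes every heap snapshot a little bigger, which is exactly why watching live gauges (as the lab does) beats reading about it.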
Why the idea that var is more performant than let/const still appears — and why it's misleading

From time to time I still see claims that var should be preferred over let and const because it is faster and widely used in popular libraries. This sounds convincing, but in modern JavaScript it is mostly a myth.

In real applications, performance differences between var, let, and const are negligible. Modern engines like V8 and SpiderMonkey optimize scope access heavily. In browsers, var can even be slower in some cases, because top-level var becomes a property of the global object, and accessing object properties may cost more than accessing block-scoped variables.

Another common argument is that large libraries use var. But people often look at compiled dist bundles instead of source code. Those files are transpiled, minified, and optimized for compatibility. If you check the real sources of major projects, you will see the opposite: Angular, React, Vue, Node.js core, and the TypeScript compiler all use const and let in their codebases. var usually appears only after build tools process the code.

Style guides also make this clear. Google, Mozilla, and Airbnb recommend using const by default, let when needed, and avoiding var completely because of function scope, hoisting quirks, and the risk of accidental globals.

The bigger lesson here is not about var itself. In engineering, it's easy to believe statements like:
"this is faster"
"libraries do it this way"
"this is how professionals write code"

But without checking the source and measuring real performance, those claims can be wrong. Today we have TypeScript, Babel, SWC, and other tools that can generate whatever output is needed. Developers should write clear and safe code using const and let, and let the compiler handle low-level optimizations.

#JavaScript #TypeScript #WebDevelopment #Frontend #SoftwareEngineering #CleanCode #CodingStandards #DeveloperExperience #Programming #FrontendArchitecture
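The correctness argument (as opposed to the performance one) fits in a few lines — here is a sketch of the function-scope and hoisting quirks those style guides cite:

```javascript
// The classic quirk: var is function-scoped, so one binding is
// shared across all loop iterations — every closure sees the final i.
const varFns = [];
for (var i = 0; i < 3; i++) varFns.push(() => i);
console.log(varFns.map((f) => f())); // [3, 3, 3]

// let creates a fresh binding per iteration
const letFns = [];
for (let j = 0; j < 3; j++) letFns.push(() => j);
console.log(letFns.map((f) => f())); // [0, 1, 2]

// Hoisting: a var read before its line is silently undefined,
// whereas a let/const read before its line throws (temporal dead zone).
console.log(typeof hoisted); // "undefined" — no error, the bug hides
var hoisted = 1;
```

None of these are performance issues; they are correctness traps, which is exactly why the guides say const by default, let when needed, var never.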
I got confused by JavaScript data types in my early days.

Spent hours debugging a bug that made no sense. Turned out I was comparing two objects and expecting them to be equal. They weren't. Same data. Same structure. But JavaScript said: not equal.

That day I learned the difference between primitive and reference types. And honestly? It changed the way I write code.

Here is what I wish someone had told me earlier:

Numbers, strings, booleans — these are primitives. They store the actual value. Copy them, and you get a fresh copy.

Objects and arrays — these are reference types. They store a pointer to memory. Copy them, and both variables point to the same place.

That is the bug I had. I was comparing pointers, not values.

13 years later I still think about this when reviewing code.

Swipe through the carousel. I made it as simple as possible. If it helps even one developer avoid that same confusion, the job is done.

What was your most confusing JavaScript moment? Drop it below 👇

#JavaScript #Frontend #WebDevelopment #ReactJS #NextJS #JS #Programming #LearningInPublic #FrontendDeveloper #WebDev #100DaysOfCode #CodeNewbie #TechTips #SoftwareEngineering
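The exact bug in miniature — my own sketch of value copies vs pointer copies:

```javascript
// Primitives: the value itself is copied and compared
let a = "hi";
let b = a;            // fresh, independent copy
console.log(a === b); // true — same value

// Reference types: the variable holds a pointer to the object
const u1 = { name: "Ada" };
const u2 = { name: "Ada" };
console.log(u1 === u2); // false — same data, different objects in memory

const u3 = u1;          // copies the pointer, not the object
u3.name = "Grace";
console.log(u1.name);   // "Grace" — both names point at one object
```

When you actually need structural comparison, compare the contents (a deep-equal helper, or a crude JSON.stringify for simple plain objects), never the `===` of two distinct objects.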
TypeScript's most powerful keyword that nobody understands: infer

You've seen it in utility types. You've copy-pasted code that uses it. You've never fully understood what it does.

It's simpler than you think: infer captures a type into a variable.

Read it like this:

type ArrayItem<T> = T extends (infer U)[] ? U : never

"If T is an array of something, capture that something as U, then return U."

That's it. Pattern matching for types.

How to read any infer type:
1. Look at extends → that's the pattern
2. Find infer X → that's what gets captured
3. Look after ? → that's what you get back

Real use cases:

→ Get the item type from an array
type Item = ArrayItem<User[]> // User

→ Get the resolved value from a Promise
type Data = Unpromise<Promise<string>> // string

→ Get props from a React component
type Props = ComponentProps<typeof Button>

→ Get the return type from a function
type Result = ReturnType<typeof fetchUser>

That last one? ReturnType is just infer under the hood (the built-in falls back to any, not never, when T isn't a function):

type ReturnType<T> = T extends (...args: any) => infer R ? R : any

You've been using infer this whole time. Now you know how to write your own.

#typescript #javascript #frontend #webdev #programming #webdevelopment #react #types #cleancode #devtips
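A small self-checking sketch: these type aliases mirror (rather than import) the built-ins, and the annotated assignments below only compile if infer captured the right type — so the compiler itself verifies the claims.

```typescript
// Hand-rolled infer utilities, mirroring the built-ins
type ArrayItem<T> = T extends (infer U)[] ? U : never;
type Unpromise<T> = T extends Promise<infer U> ? U : never;
type FirstArg<T> = T extends (first: infer A, ...rest: any[]) => any ? A : never;

// Each annotation only typechecks if the infer resolved as claimed:
const item: ArrayItem<number[]> = 42;              // ArrayItem<number[]> is number
const resolved: Unpromise<Promise<string>> = "ok"; // Unpromise<...> is string
const arg: FirstArg<(n: number, s: string) => void> = 7; // FirstArg<...> is number

console.log(item, resolved, arg);
```

Changing any annotation to the wrong type (say, `const item: ArrayItem<number[]> = "oops"`) turns the typo into a compile error, which is a handy way to unit-test your own infer types.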