Today, JavaScript left the browser and talked to the world.

For the past few weeks, I've been focused on how JavaScript manages data internally — execution context, closures, state-driven rendering. Today I learned where that data actually comes from.

The full pipeline, in plain terms:

When a browser requests a website, it first asks DNS to translate the domain name into an IP address. The browser then sends an HTTP request to that address, and the server responds. That response body travels as plain text — not an object.

This is where JavaScript takes over:
- fetch() sends the HTTP request and returns a Promise — a placeholder for data that hasn't arrived yet.
- await pauses the surrounding async function until that data arrives.
- response.json() reads the body and parses it into a usable JavaScript object (it returns a Promise too, so it also needs an await).

From that point, it's just an array — the same render logic I already wrote for my job tracker works without modification.

I didn't learn new rendering patterns today. I learned that the array I was hardcoding can come from a server instead. The renderUI() function now doesn't care where the data came from — filtered, fetched, or local. It just renders what it receives.

That's the payoff of state-driven architecture. The data source changes. The render logic doesn't.

Still early. Still building from the ground up.

#JavaScript #WebDevelopment #HTTP #APIs #LearningInPublic #BackendFirst
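The pipeline described above, sketched as code. The URL, loadJobs(), and renderUI() are hypothetical stand-ins for the author's actual job-tracker endpoint and render function, not the original code:

```javascript
// Hypothetical stand-in for the author's real render function:
// it only cares that it receives an array, not where it came from.
function renderUI(items) {
  for (const item of items) {
    console.log(item.title);
  }
}

// Hypothetical loader — the endpoint URL is illustrative only.
async function loadJobs() {
  // fetch() returns a Promise for the HTTP response
  const response = await fetch("https://example.com/api/jobs");
  // .json() also returns a Promise: it reads the body text
  // and parses it into a JavaScript object or array
  const jobs = await response.json();
  // from here on, the data source is irrelevant to the render logic
  renderUI(jobs);
}
```

The same renderUI() works unchanged whether the array is hardcoded, filtered locally, or fetched from a server.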
𝗧𝗵𝗲 𝗘𝘃𝗲𝗻𝘁 𝗟𝗼𝗼𝗽: 𝗠𝗶𝗰𝗿𝗼𝘁𝗮𝘀𝗸𝘀 𝗮𝗻𝗱 𝗠𝗮𝗰𝗿𝗼𝘁𝗮𝘀𝗸𝘀

You write JavaScript code that handles thousands of requests per second. But do you really understand the event loop? Let's break it down:

- Your code runs on a call stack, one function at a time.
- When you call setTimeout or fetch, you hand the task to an assistant.
- The assistant places a note on a task queue when it's done.
- The event loop checks the stack, then the task queue.

But here's the thing: there are two queues, macrotasks and microtasks.

- Macrotasks include setTimeout, setInterval, and I/O events.
- Microtasks include Promise callbacks and queueMicrotask.

The event loop has a rule: after every macrotask, it empties the microtask queue. This changes how you write your code. For example:

console.log('1')
setTimeout(() => console.log('2'), 0)
Promise.resolve().then(() => console.log('3'))
console.log('4')

The output will be 1, 4, 3, 2. Microtasks always run before the next macrotask.

To write better code:
- Use setTimeout when you want to yield to the UI.
- Use queueMicrotask or a Promise when something needs to run right after the current task, before the next macrotask.

Understanding the event loop will save you from:
- Blocking the event loop with synchronous loops
- Mis-ordering critical database updates
- Building unreliable real-time systems

You place the marbles, the engine moves them. It's your understanding of the track that determines the performance. Draw the queues, ask yourself: Is this a macrotask? A microtask? Respect the choreography, and the engine will reward you with silky-smooth performance.

Source: https://lnkd.in/g--B86YF
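The ordering rule above as a runnable sketch, logging into an array instead of the console so the final sequence is easy to inspect:

```javascript
const order = [];

order.push("1");                               // synchronous: runs now
setTimeout(() => order.push("2"), 0);          // macrotask: queued for later
Promise.resolve().then(() => order.push("3")); // microtask: queued for later
order.push("4");                               // synchronous: runs now

// Once the call stack is empty, the microtask queue drains first ("3"),
// and only then does the setTimeout macrotask run ("2").
setTimeout(() => console.log(order.join(", ")), 10); // "1, 4, 3, 2"
```

Note that at the moment the synchronous code finishes, the array holds only "1" and "4"; the "3" and "2" entries are appended by the queues afterward.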
🚀 map(), filter(), reduce() in JavaScript (with Definitions + Examples) (Part 3)

These 3 array methods are must-know if you want to get strong in JavaScript.

Example array:
let arr = [1, 2, 3, 4, 5];

1️⃣ map() → Transform data
Definition: map() creates a new array by applying a function to each element.

let result = arr.map((num) => num * 2);
console.log(result); // [2, 4, 6, 8, 10]

✔️ Use when you want to modify every element

2️⃣ filter() → Select data
Definition: filter() creates a new array with elements that satisfy a condition.

let result = arr.filter((num) => num > 2);
console.log(result); // [3, 4, 5]

✔️ Use when you want to pick specific values

3️⃣ reduce() → Combine data
Definition: reduce() reduces the array to a single value by applying a function.

let result = arr.reduce((acc, num) => acc + num, 0);
console.log(result); // 15

✔️ Use for sums, totals, complex calculations

Key Differences:
>> map → transforms each element
>> filter → selects elements
>> reduce → combines into one value

🎯 Important Note:
>>> These methods do not change the original array (map and filter return a new array; reduce returns a new value)

# forEach() vs map():

1️⃣ forEach()
>> Executes a function for each element
>> Does NOT return anything

let result = arr.forEach((num) => {
  return num * 2;
});
console.log(result); // undefined

2️⃣ map()
>> Transforms each element
>> Returns a NEW array

let result = arr.map((num) => num * 2);
console.log(result); // [2, 4, 6, 8, 10]

#JavaScript #Frontend #WebDevelopment #Coding #LearnInPublic
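The three methods above also compose naturally. A small chained sketch (the order amounts are purely illustrative):

```javascript
const orders = [110, 45, 300, 80, 220];

// keep orders above 100, add a flat 5-unit fee, then total them
const total = orders
  .filter((amount) => amount > 100)          // [110, 300, 220]
  .map((amount) => amount + 5)               // [115, 305, 225]
  .reduce((sum, amount) => sum + amount, 0); // 645

console.log(total); // 645
```

Each step returns a new array (or final value), so the original `orders` array is never modified.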
6 days ago I made a post on: 📌 "Something I figured out in JavaScript today that A.I. code misinterprets." I am about to share that now, so pay close attention.

As a developer using JavaScript, this is connected to JavaScript's scopes. Scope in JavaScript refers to the visibility of variables and functions within a program. The main kinds are:

1. Global scope: a variable declared here is visible anywhere in the JavaScript program.
2. Function scope: created when a function is declared; its variables and inner functions are only visible within that function.

A sample of it:

function funName() {
  var fs = "...";
  alert(fs);
  console.log(fs);
}
funName();

Now looking at this, A.I.-generated code often misinterprets function scope and produces code that relies only on global scope, or even interchanges function-scoped variables with global ones.

📌 The risk of this common error is that it will not appear at the beginning of the project but during debugging and code maintenance. Wonder why JavaScript bugs give you sleepless nights? This is one of the main reasons.

This is a call for developers and vibe coders to learn the intimate differences between GLOBAL SCOPE VARIABLES and FUNCTION SCOPE VARIABLES. Your A.I. JavaScript code can cause you harm later if you do not learn this early.

📌 A.I. frequently misunderstands hoisting and the Temporal Dead Zone (TDZ) when creating nested functions. It often defaults to legacy var logic within closure loops (because the bulk of the training data still uses it) rather than modern let/const for block scoping. It optimizes for visual syntax, not runtime safety.

Automation without technical intuition creates technical debt. Want more daily strategy from the cutting edge of web infrastructure? Connect with snow works.

#WorksTechnology #JavaScriptMastery #CodingArchitecture #AIPerformance #TechnicalIntuition #WebArchitecture #SoftwareDesign #WebDevStrategy
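The var-in-closure-loops pitfall mentioned above, as a runnable sketch:

```javascript
// var is function-scoped: every closure captures the SAME variable i,
// which has already reached 3 by the time any closure runs
const withVar = [];
for (var i = 0; i < 3; i++) {
  withVar.push(() => i);
}
console.log(withVar.map((f) => f())); // [ 3, 3, 3 ]

// let is block-scoped: each loop iteration gets a fresh binding of j,
// so every closure remembers its own value
const withLet = [];
for (let j = 0; j < 3; j++) {
  withLet.push(() => j);
}
console.log(withLet.map((f) => f())); // [ 0, 1, 2 ]
```

This is exactly the kind of bug that compiles and runs cleanly but surfaces only during debugging, which is why the scoping rules are worth learning before leaning on generated code.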
Don't do JavaScript. I mean it.

The browser is a highly optimized engine written in C++, Rust, and C. It parses HTML. It diffs the DOM. It handles routing, focus, accessibility, scroll, history. It compresses repeated patterns with Brotli by factors of thousands. It is insanely good at this.

And we keep replacing it. With JavaScript.

◆ How we got here
Servers used to be slow. Threading was expensive. So we pushed logic to the client. SPAs made sense — in 2010. That constraint is gone. The complexity stayed. And quietly, over years, frameworks started re-implementing the browser itself. Virtual DOM. Client router. Hydration. State machines. JSON APIs just to feed them. We're not solving data problems anymore. We're solving framework problems.

◆ The forgotten principle: Progressive Enhancement
Start with HTML that works. Always. No JavaScript required. Then layer interactivity on top — not as a dependency, but as an improvement. A page that works without JS and flies with it. That's not a compromise. That's good engineering.

◆ What the fast path actually looks like
→ Render HTML on the server — fully functional, accessible, indexable
→ Stream updates over a single SSE connection
→ Let the browser do diffing, layout, rendering
→ Repeated HTML over SSE + Brotli is often cheaper than your clever JSON diff
→ Add ~10 KB of Datastar for local state and reactivity on top
→ No build step — easier DevOps, faster iteration

No virtual DOM. No hydration step. No client router. The server stays the source of truth. JS enhances — it doesn't own.

◆ Measure, don't vibe
If one approach is 30–40x faster, uses less memory, less bandwidth, and less code — be willing to throw the old one away.

The web was never slow. We just forgot how to use it.

Delaney Gillilan explains this better than anyone — link in comments 👇

#JavaScript #ProgressiveEnhancement #WebDevelopment #Datastar #Hypermedia #SSE #CraftCMS
💡 JavaScript Tip: Start using .at(-1) today!

If you're still accessing the last element of an array like this:

arr[arr.length - 1]

There's a cleaner and more readable way 👇

arr.at(-1)

🔹 Why use .at()?
✅ Cleaner syntax
✅ Easier to read
✅ Supports negative indexing

🔹 Examples
const nums = [10, 20, 30, 40];
nums.at(0); // 10
nums.at(2); // 30
nums.at(-1); // 40 (last element)
nums.at(-2); // 30

🔹 When to use it?
- Accessing last or second-last elements
- Writing cleaner, more modern JavaScript
- Avoiding repetitive .length calculations

⚠️ Note: .at() works in modern JavaScript (ES2022+), so ensure your environment supports it.

Small improvements like this can make your code more readable and elegant ✨ Are you using .at() already?

#JavaScript #CleanCode #WebDevelopment #FrontendDevelopment #ProgrammingTips #DevCommunity #SoftwareEngineering
Ever copied an object in JavaScript… and later wondered why changing one thing broke something else?

I used the spread operator, Object.assign — everything looked right. The object was "copied," features worked, and I moved on. Until one small change inside a nested object started affecting the original data.

That's when I realized copying in JavaScript isn't as straightforward as it looks.

I used to think: "If I copy an object, I get a completely separate version." Sounds reasonable — but it misses an important detail.

So I slowed down and explored what was actually happening. References. Memory locations. Nested structures. I built tiny examples, tweaked values, compared outputs — sometimes it worked, sometimes it didn't… until patterns started to show up.

And then it made sense. The issue wasn't the syntax — it was understanding what gets copied and what still points to the same place in memory.

That shift changed a lot:
• I stopped assuming copies were independent
• Debugging weird state changes became much easier
• Spread vs structuredClone finally had a clear difference
• Nested objects stopped feeling unpredictable

Most importantly:
👉 A shallow copy duplicates only the top level
👉 Nested objects still share the same reference
👉 A deep copy creates a fully independent structure

Now when something changes unexpectedly, I don't guess — I check how the data was copied.

Still learning, but this one concept made JavaScript feel a lot more predictable.

What's one JS concept that took you time to truly understand?

#JavaScript #WebDevelopment #Frontend #LearningInPublic #JSConcepts #Debugging
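A tiny sketch of the difference described above (structuredClone is available in modern browsers and in Node 17+):

```javascript
const original = { name: "profile", settings: { theme: "dark" } };

// Shallow copy: the top level is duplicated, but nested objects
// are still shared references.
const shallow = { ...original };
shallow.settings.theme = "light";
console.log(original.settings.theme); // "light" — the original changed too!

// Deep copy: a fully independent structure.
const deep = structuredClone(original);
deep.settings.theme = "solarized";
console.log(original.settings.theme); // still "light" — untouched
```

Comparing `shallow.settings === original.settings` (true) against `deep.settings === original.settings` (false) is the quickest way to see exactly what was shared.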
💡 Pass by Value vs Pass by Reference in JavaScript (Simple Explanation)

If you're learning JavaScript, understanding how data is passed is crucial 👇

🔹 Pass by Value (Primitives)
When you assign or pass a primitive type (number, string, boolean, null, undefined, symbol, bigint), JavaScript creates a copy.

let a = 10;
let b = a;
b = 20;
console.log(a); // 10
console.log(b); // 20

👉 Changing b does NOT affect a because it's a copy.

🔹 Pass by Reference (Objects)
When you work with objects, arrays, functions, or dates, JavaScript copies a reference (memory address) instead of the value itself.

let obj1 = { name: "Ali" };
let obj2 = obj1;
obj2.name = "Ahmed";
console.log(obj1.name); // Ahmed
console.log(obj2.name); // Ahmed

👉 Changing obj2 ALSO affects obj1 because both point to the same object.

🔥 Key Takeaway
Primitives → 📦 Copy (Independent)
Objects → 🔗 Reference (Shared)

💭 Pro Tip
To avoid accidental changes in objects, use:
- Spread operator {...obj}
- Object.assign()
(Both make shallow copies, so nested objects still share references.)

Understanding this concept can save you from hidden bugs in real-world applications 🚀

#JavaScript #WebDevelopment #Frontend #Programming #CodingTips
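The examples above use assignment; the same rule applies when passing arguments into functions, which is where the "pass by" terminology comes from. A quick sketch:

```javascript
// Primitives: the function receives a copy of the value.
function addTen(n) {
  n += 10; // only the local copy changes
  return n;
}
let score = 5;
addTen(score);
console.log(score); // 5 — unchanged

// Objects: the function receives a copy of the REFERENCE,
// so mutations reach the same underlying object.
function promote(person) {
  person.role = "admin";
}
const user = { role: "guest" };
promote(user);
console.log(user.role); // "admin" — mutated through the reference
```

Note that even with objects, reassigning the parameter itself (person = {...}) inside the function would not affect the caller; only mutations through the shared reference do.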
While working with the DOM in JavaScript, I discovered something that confuses many developers (including me earlier):

NodeList vs HTMLCollection

Both look similar, but they behave differently — and this can affect your code. Here's the simple explanation 👇

Example 1:
const items = document.querySelectorAll("li");
This returns a NodeList.

Example 2:
const items = document.getElementsByTagName("li");
This returns an HTMLCollection.

Now the important difference ⚡ If you add a new element to the DOM:
NodeList from querySelectorAll() → ❌ Does NOT update automatically (static)
HTMLCollection → ✅ Updates automatically (live)
(Edge case: some NodeLists, like the one from Node.childNodes, are live, but querySelectorAll() always returns a static one.)

Another difference: NodeList supports forEach()
items.forEach(item => console.log(item));

HTMLCollection does not support it directly, so convert it first:
Array.from(items).forEach(item => console.log(item));

⚡ Quick Way to Remember
NodeList → querySelectorAll() → Static list
HTMLCollection → getElementsBy...() → Live list

This is a very common frontend interview question and also important when working with DOM manipulation. Have you ever faced a bug because of this difference? 👇

Chai Aur Code Hitesh Choudhary Piyush Garg

#Web Dev Cohort 2026 #javascript #frontenddevelopment #webdevelopment #dom #coding #learninpublic
🚀 JavaScript map, filter & reduce — From Usage to Internals

Instead of just using array methods, I explored how they work internally by implementing polyfills. This made their behavior much more intuitive 👇

🧠 Core Methods
• map() → transforms each element
[1, 2, 3].map(x => x * 2) // [2, 4, 6]
• filter() → selects elements based on a condition
[1, 2, 3, 4].filter(x => x % 2 === 0) // [2, 4]
• reduce() → accumulates into a single value
[1, 2, 3].reduce((acc, curr) => acc + curr, 0) // 6

⚙️ What Changed When I Built Polyfills
• Understood iteration control step-by-step
• Saw how callbacks are executed internally
• Realized how the accumulator flows in reduce()
• Gained clarity on functional composition

💡 Mental Model
• map → transform
• filter → select
• reduce → combine

🎯 Takeaway: Using methods is easy. Understanding their internals makes your code intentional and expressive. Building deeper control over JavaScript's functional patterns. 💪

#JavaScript #FunctionalProgramming #FrontendDeveloper #WebDevelopment #MERNStack #SoftwareEngineering

"JavaScript Array Methods – Map vs Filter vs Reduce"
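The post doesn't include the polyfills themselves, so here is one possible sketch of a reduce-style helper. It is a standalone function rather than an actual Array.prototype patch, and it skips the spec's full edge-case handling (holes, this-binding, arrays that legitimately contain undefined):

```javascript
// Simplified reduce: walks the array, threading an accumulator
// through the callback on each step.
function myReduce(arr, callback, initialValue) {
  let acc = initialValue;
  let startIndex = 0;

  // No initial value supplied: seed the accumulator with the
  // first element and start iterating from the second.
  if (initialValue === undefined) {
    acc = arr[0];
    startIndex = 1;
  }

  for (let i = startIndex; i < arr.length; i++) {
    acc = callback(acc, arr[i], i, arr);
  }
  return acc;
}

console.log(myReduce([1, 2, 3], (acc, curr) => acc + curr, 0)); // 6
console.log(myReduce([1, 2, 3], (acc, curr) => acc + curr));    // 6
```

Writing even this stripped-down version makes the "accumulator flow" concrete: each iteration's return value becomes the next iteration's first argument.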
The hidden cost of the Spread Operator in JS 💡

We often reach for the spread operator [...] because it's elegant, declarative, and keeps our data immutable. It's a staple of modern JavaScript for a reason. However, there's a specific pattern where this "clean" syntax can unintentionally impact performance: using spread inside a loop.

What's happening under the hood? 🔍
When we write a loop like this:

for (let item of data) {
  result = [...result, item];
}

It looks like a simple addition. But computationally, we are:
1. Allocating a brand-new array in memory.
2. Copying every existing element from the old array into the new one.
3. Repeating this for every single iteration.

If you have a dataset of 10,000 items, you aren't just performing 10,000 additions. You're triggering millions of internal copy operations (O(n^2) time complexity). This is often why a UI might feel "heavy" or "janky" as the garbage collector tries to keep up with all those discarded arrays.

The "Photocopier" Perspective 📑
Think of it like copying a 100-page book. Using spread in a loop is like:
🔸 Copying page 1.
🔸 Then copying pages 1 and 2 together.
🔸 Then copying pages 1, 2, and 3 together…
By the time you reach page 100, you've done a mountain of unnecessary work!

A Snappier Approach 🛠️
If you're working with large datasets, consider these alternatives to keep your app responsive:
🔹 Standard .push(): a simple O(1) append that modifies the array in place (O(n) total to build the list).
🔹 .reduce() with a mutable accumulator: functional style without the memory tax.
🔹 Array.from() or .map(): usually the most idiomatic way to transform data.

I'm curious — how do you balance "clean" code vs. raw performance? Do you stick to strict immutability for the sake of readability, or do you opt for manual optimizations when the data starts to scale? Let's talk in the comments! 👇

#JavaScript #WebDev #CodingTips #SoftwareEngineering #Performance #WebPerformance #Frontend #Programming #CleanCode #SoftwareDevelopment #WebDesign
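Both shapes side by side as a sketch: identical results, but the first allocates and copies a fresh array on every iteration while the second appends in place.

```javascript
// Quadratic: each iteration builds a brand-new array and copies
// every previous element into it before appending the new one.
function buildWithSpread(data) {
  let result = [];
  for (const item of data) {
    result = [...result, item];
  }
  return result;
}

// Linear: one array, appended to in place.
function buildWithPush(data) {
  const result = [];
  for (const item of data) {
    result.push(item);
  }
  return result;
}

const data = [1, 2, 3, 4];
console.log(buildWithSpread(data)); // [ 1, 2, 3, 4 ]
console.log(buildWithPush(data));   // [ 1, 2, 3, 4 ]
```

With four items the difference is invisible; with tens of thousands, the spread version's repeated copying (and the garbage it leaves behind) is exactly the cost described above.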