React Immutability: Understanding State Changes

One of the most important ideas to understand when working with React is immutability: you do not change existing data; you create new data when something needs to be updated. This principle directly shapes how state changes are detected and how the user interface is updated.

Why does React care about this? React decides whether to update the UI by checking whether data has changed. It does not deeply inspect every value inside an object; it only checks whether the reference has changed. In simple terms, React asks: "Is this the same object, or a new one?" If it is the same object, React assumes nothing changed. If it is a new object, React updates the UI.

Here is where problems begin if you mutate data.

Example of mutation (not recommended):

const state = { count: 0 };
state.count = 1;
setState(state); // same reference, so React may not detect the change

In this case, the object is modified directly. The reference remains the same, so React may not detect the change.

Now the correct approach using immutability:

const state = { count: 0 };
const newState = { ...state, count: 1 };
setState(newState); // new reference, so React sees the change

Here, a new object is created. Even though most values are the same, the reference is different. React sees this and updates the UI correctly.

#react #web_development
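The same rule applies to arrays and nested structures. Here is a minimal sketch in plain TypeScript (no React APIs) showing that an immutable update produces a new reference while leaving the original untouched:

```typescript
// A todo item used to demonstrate immutable updates
type Todo = { id: number; done: boolean };

// Toggle one todo WITHOUT mutating the original array:
// map() returns a new array, and the spread creates a new object
function toggleTodo(todos: Todo[], id: number): Todo[] {
  return todos.map(t => (t.id === id ? { ...t, done: !t.done } : t));
}

const todos: Todo[] = [
  { id: 1, done: false },
  { id: 2, done: false },
];
const next = toggleTodo(todos, 1);

console.log(next === todos); // false: new array, new reference
console.log(next[0].done);   // true: the update is applied
console.log(todos[0].done);  // false: the original is untouched
```

Because `next` is a new reference, a reference check (like React's) is enough to detect the change.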
⚛️ React feels "fast"… but why? It's not magic. It's the Diffing Algorithm (Reconciliation) working behind the scenes. Let's break it down so you actually understand what React is doing internally 👇

🧠 The Core Idea
Every time state/props change, React:
1. Creates a new Virtual DOM
2. Compares it with the previous Virtual DOM
3. Updates ONLY the changed parts in the real DOM
👉 This comparison = Diffing

⚙️ But how does React diff efficiently?
React doesn't compare everything blindly (that would be slow ❌). Instead, it follows 2 smart assumptions:

1️⃣ Different type = replace everything
<div>Hello</div> ➡️ becomes <span>Hello</span>
React says: "Type changed? Cool, destroy old node, create new one." No deep comparison.

2️⃣ Same type = compare props
<div class="red">Hello</div> ➡️ becomes <div class="blue">Hello</div>
React keeps the same DOM node 👉 only updates class from red → blue.

3️⃣ Children diffing (where things get interesting 👀)
Let's say:
<ul>
  <li>A</li>
  <li>B</li>
</ul>
➡️ becomes
<ul>
  <li>B</li>
  <li>A</li>
</ul>
Without keys, React compares index by index:
A → B (change)
B → A (change)
👉 Result: unnecessary updates ❌

🔥 Now with keys (real optimization)
<ul>
  <li key="A">A</li>
  <li key="B">B</li>
</ul>
➡️ becomes
<ul>
  <li key="B">B</li>
  <li key="A">A</li>
</ul>
React uses keys like identity: 👉 "Oh, B moved… A moved."
✅ Reorders instead of re-creating 🚀 Much faster

⚡ Under the hood (what actually happens)
• React builds a tree of elements (the Fiber tree)
• Each update creates a new tree
• It walks both trees node by node
• Marks changes (called "effects")
• Then applies minimal updates to the real DOM

💡 Real-world mental model
Think of it like:
Old UI → Screenshot 📸
New UI → Screenshot 📸
React = a "spot the difference" game, optimized using rules + keys.

⚠️ Common mistakes developers make
❌ Not using keys → causes re-renders
❌ Using index as key → breaks reordering
❌ Changing component type unnecessarily

🚀 Pro Insight
React's diffing is NOT perfect. It's optimized for speed over accuracy. 👉 That's why keys exist: to help React make better decisions.

Once you understand this, performance optimization in React becomes way easier.

Comment "FIBER" if you want that breakdown 👇

#React #ReactDiff #DAY113
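The index-vs-key behavior above can be sketched as a toy diff in TypeScript. This is an illustration of the idea only, not React's actual implementation:

```typescript
// A rendered list item: "key" is its identity, "text" is its content
type Item = { key: string; text: string };

// Without keys: compare position by position
function diffByIndex(prev: Item[], next: Item[]): number {
  let updates = 0;
  for (let i = 0; i < next.length; i++) {
    if (prev[i]?.text !== next[i].text) updates++;
  }
  return updates;
}

// With keys: match nodes by identity, count only real content changes
function diffByKey(prev: Item[], next: Item[]): number {
  const byKey = new Map(prev.map(p => [p.key, p]));
  let updates = 0;
  for (const n of next) {
    if (byKey.get(n.key)?.text !== n.text) updates++;
  }
  return updates;
}

const prev = [{ key: "A", text: "A" }, { key: "B", text: "B" }];
const next = [{ key: "B", text: "B" }, { key: "A", text: "A" }];

console.log(diffByIndex(prev, next)); // 2 (every position looks changed)
console.log(diffByKey(prev, next));   // 0 (same nodes, just reordered)
```

Same reorder, two very different costs: that gap is exactly what keys buy you.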
🚀 If your useState starts looking messy… it's time for useReducer.

Too many states? Too many handlers? Too many setSomething(...) calls? 👉 That's your signal to switch to useReducer.

🔍 What is useReducer?
useReducer is a React hook used for managing complex state logic in a clean and predictable way. Instead of updating state directly, you:
👉 dispatch actions
👉 let a reducer function decide how state changes

🔹 Basic Example

import { useReducer } from "react";

const initialState = { count: 0 };

function reducer(state, action) {
  switch (action.type) {
    case "increment":
      return { count: state.count + 1 };
    case "decrement":
      return { count: state.count - 1 };
    default:
      return state;
  }
}

function Counter() {
  const [state, dispatch] = useReducer(reducer, initialState);
  return (
    <>
      <p>{state.count}</p>
      <button onClick={() => dispatch({ type: "increment" })}>+</button>
      <button onClick={() => dispatch({ type: "decrement" })}>-</button>
    </>
  );
}

🔥 Why use useReducer?
◦ Centralized state logic
◦ Cleaner code for complex updates
◦ Predictable state transitions
◦ Scales better than multiple useState calls

⚔️ useState vs useReducer
❌ useState
◦ Best for simple state
◦ Gets messy with multiple related states
✅ useReducer
◦ Best for complex state logic
◦ Handles multiple actions cleanly

useState → simple updates
useReducer → structured state management

🔹 When Should You Use It?
👉 Multiple related state values
👉 Complex update logic
👉 State depends on previous state
👉 You want Redux-like structure (without Redux)

🔑 Final Takeaway
When your state logic starts feeling chaotic… useReducer brings structure and clarity.

💡 Part of my #FrontendRevisionMarathon, breaking down React concepts daily 🚀

🚀 Follow Shubham Kumar Raj for more such content.

Have you used useReducer in real projects? Or still sticking with useState? 👇

#React #Frontend #WebDevelopment #JavaScript #ReactJS #FrontendRevisionMarathon #Performance #frontenddeveloper #codinginterview #programming #learnjavascript #interviewprep #CareerGrowth #SoftwareEngineering #Hiring #OpenToWork #100DaysOfCode
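One underrated benefit of this pattern: because the reducer is a pure function, its state transitions can be unit-tested without rendering anything. A sketch using the same counter reducer, written as plain TypeScript rather than JSX:

```typescript
// The counter reducer as a pure, testable function (no React needed)
type State = { count: number };
type Action = { type: "increment" } | { type: "decrement" };

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "increment":
      return { count: state.count + 1 };
    case "decrement":
      return { count: state.count - 1 };
    default:
      return state;
  }
}

// Replay a sequence of actions and inspect the result
let s: State = { count: 0 };
s = reducer(s, { type: "increment" });
s = reducer(s, { type: "increment" });
s = reducer(s, { type: "decrement" });
console.log(s.count); // 1
```

Dispatching in a component is just this same function call, scheduled by React.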
How to fix async bugs in React forms (and what each part actually does)

If your flow is doing weird things like:
– wrong data showing
– users submitting too early
– API results overwriting each other
…you don't need more logic. You need clear responsibility per layer.

Here's the practical breakdown:

1. useEffect → controls WHEN data is fetched
This is your trigger. It listens to one thing only (in our case: the selected address). So anytime the user changes address, it:
– validates the data
– decides if a request should happen
– fires the API call
Not on button click. Not randomly. Only when the dependency changes.

2. Guard clauses → prevent useless API calls
Before calling anything, check: do we even have a valid city/state? If not:
– skip the request
– set a "not valid" state immediately
This saves network calls and avoids predictable failures.

3. Cancel logic → fixes race conditions
Users move fast. They can pick Address A, then quickly switch to Address B. Without cancellation, A might finish last and overwrite B (the wrong-data bug). So you cancel stale requests to ensure only the latest user action wins.

4. State machine → controls WHAT the UI should do
Instead of guessing, define clear states:
idle | loading | covered | not_covered | error
Now:
– the UI knows what to show
– the button knows when to enable
– errors are intentional, not random

5. Submit gating → enforces correctness
The user should only proceed when the system is valid. Not "it might work" but "we KNOW it's valid". So: only allow submit when state === covered.

Most frontend bugs in API flows come from mixing these responsibilities. Once you separate:
WHEN to fetch (useEffect)
IF to fetch (guards)
WHAT wins (cancellation)
WHAT the UI shows (state machine)
…everything becomes predictable.

That's a wrap.
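Steps 2 through 5 above can be sketched as plain TypeScript, outside any component. The names (`Status`, `shouldFetch`, `canSubmit`) are illustrative, not from any specific library; the cancellation piece uses the standard `AbortController`:

```typescript
// 4. State machine: the UI can only ever be in one of these states
type Status = "idle" | "loading" | "covered" | "not_covered" | "error";

// 2. Guard clause: skip the request entirely when input is invalid
function shouldFetch(city?: string, state?: string): boolean {
  return Boolean(city && state);
}

// 5. Submit gating: only a known-valid result may proceed
function canSubmit(status: Status): boolean {
  return status === "covered";
}

// 3. Cancellation: abort the stale request so only the latest wins
let controller: AbortController | null = null;
function startRequest(): AbortSignal {
  controller?.abort();               // kill the in-flight request, if any
  controller = new AbortController();
  return controller.signal;          // pass this signal to fetch(url, { signal })
}

const s1 = startRequest();
const s2 = startRequest(); // simulates the user switching addresses quickly
console.log(s1.aborted);   // true: the stale request was cancelled
console.log(s2.aborted);   // false: only the latest action wins
console.log(canSubmit("loading")); // false
console.log(canSubmit("covered")); // true
```

In a component, `startRequest` would run inside the `useEffect` that watches the selected address, and the returned signal would be passed to `fetch`.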
Fetching data is easy. Handling it like a pro is the hard part. 🔄📡

In the early days of React, we all got used to the "useEffect and Fetch" pattern. But as applications grow, simple fetching isn't enough. Modern users expect zero latency, instant feedback, and seamless synchronization.

Whether I'm using Next.js Server Actions or React Query, these are the 3 principles I follow to manage data like a senior developer:

1️⃣ Loading States & Skeletons: Never leave your user staring at a blank screen. I use Suspense and skeleton screens to provide immediate visual feedback, making the app feel faster even while the data is still traveling.

2️⃣ Caching & Revalidation: Don't waste your user's data or your server's resources. Implementing smart caching (like stale-while-revalidate) ensures that your UI is updated instantly with "stale" data while the fresh data fetches in the background.

3️⃣ Optimistic UI Updates: Why wait for the server to say "Success" before updating a Like button or a todo item? By using optimistic updates, we update the UI immediately and roll back only if the request fails. This creates a "light-speed" user experience.

The best apps don't just display data—they manage the flow of data effortlessly.

What's your go-to tool for data fetching in 2026? Are you team React Query, or are you moving everything to Next.js Server Actions? Let's debate! 👇

#ReactJS #NextJS #WebDev #DataFetching #JavaScript #FrontendArchitecture #CodingTips
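The optimistic-update idea in point 3 can be sketched framework-free: apply the change immediately, keep the original around, and roll back only on failure. The names here are illustrative, not React Query's API:

```typescript
// The state behind a Like button
type LikeState = { likes: number; liked: boolean };

// Apply the change immediately, before the server responds
function applyOptimistic(s: LikeState): LikeState {
  return { likes: s.likes + 1, liked: true };
}

// When the request settles: keep the optimistic state on success,
// roll back to the original on failure
function settle(optimistic: LikeState, original: LikeState, ok: boolean): LikeState {
  return ok ? optimistic : original;
}

const before: LikeState = { likes: 10, liked: false };
const shown = applyOptimistic(before);           // what the user sees instantly
console.log(shown.likes);                        // 11
console.log(settle(shown, before, true).likes);  // 11 (server confirmed)
console.log(settle(shown, before, false).likes); // 10 (rolled back)
```

Keeping the original state immutable is what makes the rollback a one-liner.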
I used to think React was just "smart about re-renders." Then I actually dug into how reconciliation works. And I realized I had been writing components that quietly fought the algorithm for years.

Here's what's actually happening under the hood — and why it matters for your app's performance.

When state changes, React doesn't touch the real DOM immediately. It builds a new Virtual DOM tree, compares it to the previous one, and figures out the minimum set of changes needed. That comparison process is reconciliation.

The naive version of this problem is O(n³). For 1000 elements, that's a billion comparisons. Completely unusable. So React cheats. In a smart way. It uses a heuristic O(n) algorithm built on two assumptions:
→ Elements of different types always produce different trees
→ Keys tell React which elements are stable across renders
Simple rules. Massive performance gain.

Here's where it gets interesting — and where most bugs actually come from.

Rule 1 in practice:
If you swap a <div> for a <section>, React tears down the entire subtree and rebuilds from scratch. Every child component unmounts. All local state is lost. This isn't a bug. It's the algorithm doing exactly what you told it to do. I've seen this cause subtle bugs in forms: a wrapper element change during a conditional render, and suddenly input state resets mid-interaction.

Rule 2 in practice — the key trap:
Using the array index as a key is one of the most common mistakes I've reviewed in code. If you have a list and insert an item in the middle, every index below it shifts. React sees completely new keys, throws away the existing nodes, and rebuilds them. What looked like a simple insert becomes a full list re-render. Use stable, unique IDs. Always.

Fiber changed everything in React 16. Before Fiber, reconciliation was synchronous: once started, it couldn't be interrupted. A heavy render could block the main thread and freeze the UI. Fiber broke rendering into small units of work. React can now pause, prioritize, and resume rendering. That's what powers Concurrent Mode, Suspense, and transitions. The algorithm didn't change. The scheduler around it did.

Practical things I now do differently because of this:
→ Never create component definitions inside render: a new reference means React thinks it's a different type, causing a full unmount every render
→ Keys on lists always come from data, never from the index
→ Wrap stable subtrees in React.memo when the parent re-renders frequently
→ Use the Profiler in DevTools to actually see which reconciliation decisions are expensive

Reconciliation is one of those things that's easy to ignore until performance starts hurting. But once you understand the two rules React operates on, a lot of "React behaves weirdly" moments suddenly make complete sense.

What's the most unexpected reconciliation bug you've run into?

#react #frontend #javascript #webdev #reactjs #frontenddevelopment #softwaredevelopment
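The "never create component definitions inside render" rule can be demonstrated with plain functions: each call produces a new function identity, and a new identity is a new "type" as far as React's comparison is concerned. This is an illustration of the identity problem, not React's internals:

```typescript
// Simulates defining a component inside a parent's render function:
// every call returns a brand-new function object
function renderParent() {
  function Inner(): null {
    return null; // stands in for a child component
  }
  return Inner;
}

const firstRender = renderParent();  // render #1 creates one Inner
const secondRender = renderParent(); // render #2 creates a DIFFERENT Inner

// Different identity, so a type comparison sees a different component
console.log(firstRender === secondRender); // false
```

React compares element types by reference, so a component defined inside render would fail this check every time, unmounting and remounting its whole subtree. Hoisting the definition to module scope makes the reference stable.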
TypeScript Generics with Real API Calls

Generics really shine when handling dynamic API responses — one function, multiple data shapes, full type safety.

Example: Generic API Fetcher

type ApiResponse<T> = {
  data: T;
  status: number;
};

// Generic fetch function
async function fetchData<T>(url: string): Promise<ApiResponse<T>> {
  const res = await fetch(url);
  const data = await res.json();
  return {
    data,
    status: res.status,
  };
}

Usage with Different APIs

type User = {
  id: number;
  name: string;
};

type Post = {
  id: number;
  title: string;
};

// Users API
const userRes = await fetchData<User[]>("https://lnkd.in/gRsbj6mc");

// Posts API
const postRes = await fetchData<Post[]>("https://lnkd.in/guysyxTE");

// Fully typed
userRes.data[0].name;
postRes.data[0].title;

Why This Matters:
1. One reusable API layer
2. Strong typing across endpoints
3. Better DX (autocomplete + error catching)
4. Scales well in React / Next.js apps
How I think about structuring state in a React application

One of the most common sources of complexity in React apps is not components. It's how we organize data. Over time, I've found it useful to think about state in three layers:

1. UI State
This is the closest layer to the user. It represents transient, interaction-driven data:
• isModalOpen
• selectedTab
• input values
Example: A search input value that changes on every keystroke.
This state is local, short-lived, and should stay close to the component.

2. Domain State
This represents the core logic of your application. It's not about the UI, and not directly about the server. It's about what your system means.
• current user permissions
• selected items in a workflow
• calculated business rules
Example: A list of selected products in a checkout flow, including derived values like total price.
This state is shared across features and defines how your application behaves.

3. Server State
This is data that comes from your backend.
• API responses
• cached queries
• remote resources
Example: A list of products fetched from an API.
This data is usually not owned by the frontend. It can be transformed or normalized, but its source of truth is external.

The key is not to mix these layers. When UI state, domain logic, and server data are tightly coupled, the system becomes harder to reason about and harder to scale. But when each layer has a clear responsibility, everything becomes more predictable.

Better structure leads to better decisions. And better decisions lead to systems that scale.

In the comments, I added a small example showing these three layers working together.
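A hedged sketch of how the three layers might look as TypeScript types (the names are mine, purely illustrative). Note how the domain layer derives values like total price from the other layers instead of storing them:

```typescript
// UI layer: transient, interaction-driven, stays in the component
type UIState = { isModalOpen: boolean; searchQuery: string };

// Domain layer: what the system means (selections, rules)
type DomainState = { selectedProductIds: number[] };

// Server layer: backend-owned data, cached on the client
type ServerState = { products: { id: number; price: number }[] };

// Domain logic derives the total from selection + server data,
// so there is no duplicated "totalPrice" field to keep in sync
function totalPrice(domain: DomainState, server: ServerState): number {
  return server.products
    .filter(p => domain.selectedProductIds.includes(p.id))
    .reduce((sum, p) => sum + p.price, 0);
}

const server: ServerState = {
  products: [{ id: 1, price: 5 }, { id: 2, price: 7 }],
};
const domain: DomainState = { selectedProductIds: [1, 2] };
console.log(totalPrice(domain, server)); // 12
```

Each type lives in a different place (component state, shared store, query cache), which is exactly the separation the post argues for.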
Day 16/30 — React Journey
useRef = the most underrated hook 🔥

Most developers treat useRef as "just for DOM access." That's a massive underestimation. Here's the real power 👇

🧠 What useRef actually is
A persistent, mutable container that survives re-renders WITHOUT triggering a re-render when updated.
👉 Think of it as: "A memory slot inside your component that React ignores for rendering."

⚡ Why this matters
React's rendering model is state-driven. Every useState update = re-render. But not everything needs to re-render. That's where useRef dominates.

🚀 Core superpowers
1. Persist values across renders
– Store previous values
– Cache expensive calculations
– Track lifecycle-like data
2. Zero re-render updates
– Update values without a UI refresh
– Perfect for performance-sensitive logic
3. Direct DOM control
– Focus inputs
– Measure elements
– Control scroll/animations
4. Escape hatch from the React lifecycle
– Store mutable flags (isMounted, timers, etc.)
– Avoid unnecessary state complexity

🧩 When to use useRef vs useState
– Affects the UI → useState
– Internal tracking → useRef
– Needs a re-render → useState
– Needs persistence only → useRef

⚠️ Where people go wrong
❌ Using useRef instead of state for UI data
❌ Expecting the UI to update when a ref changes
❌ Overusing it → breaks React's declarative model

💡 Advanced insight (this is where pros differ)
useRef is how you:
– Prevent stale closures
– Stabilize values across renders
– Handle imperative logic inside declarative UI
It's the bridge between:
👉 React's declarative world
👉 Real-world mutable behavior

🔥 Bottom line
useState drives the UI. useRef drives behavior behind the scenes. Master useRef and you stop fighting React's render cycle — you start controlling it.

Save this. Most devs learn this too late.
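The "memory slot React ignores" idea can be modeled outside React: a ref is just a stable mutable box, and mutating `.current` notifies nothing. This is a sketch of the concept, not React's implementation:

```typescript
// A ref is just a stable object with a mutable .current field
type Ref<T> = { current: T };

function createRef<T>(initial: T): Ref<T> {
  return { current: initial };
}

// Simulate React's render loop
let renderCount = 0;
const render = () => { renderCount++; };

// A state-like update always schedules a render
function setStateLike(update: () => void) {
  update();
  render();
}

const timerId = createRef<number | null>(null);
timerId.current = 42;     // mutate freely: nothing re-renders
console.log(renderCount); // 0

setStateLike(() => {});   // a state update, by contrast, re-renders
console.log(renderCount); // 1
```

Same persistence across "renders", completely different render cost. That asymmetry is the whole reason to reach for a ref for internal tracking.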
If you're new to Angular, change detection is one of those topics that feels like magic at first, until your app starts slowing down and it's not clear why. Here's a simpler way to think about it.

Old approach using Zone.js:
Angular assumes that whenever something asynchronous happens, it should run change detection across the component tree. Even a small interaction like a click can trigger checks across much of the application, which can become expensive.

Better approach with OnPush:
With OnPush, Angular only checks a component when specific triggers occur, such as input reference changes or events fired from the template. This improves performance, but it requires a bit more discipline. You should prefer immutable data rather than mutating objects, rely on tools like the async pipe, and be more deliberate about how data flows through your app.

Modern approach with Signals:
Signals introduce a more fine-grained model. Angular tracks which reactive values each part of the UI depends on. When that data changes, only the affected parts update. This reduces the need for full component-tree checks and avoids unnecessary re-renders.

My takeaway:
The default approach in Angular is easy to start with, but it can cause performance issues as an application grows. OnPush encourages better patterns and more control. Signals feel like a more natural direction, offering a simpler and more precise way to manage updates.
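The fine-grained model can be illustrated with a toy signal in TypeScript: writing a new value notifies only the registered subscribers, so no whole-tree check is needed. This is a sketch of the concept, not Angular's actual signal API:

```typescript
// A toy writable signal: holds a value and a list of subscribers
function signal<T>(initial: T) {
  let value = initial;
  const subs: Array<() => void> = [];
  return {
    get: () => value,
    set: (v: T) => {
      value = v;
      subs.forEach(f => f()); // only dependents react to the write
    },
    subscribe: (f: () => void) => { subs.push(f); },
  };
}

const count = signal(0);
let updates = 0;
count.subscribe(() => updates++); // stands in for one part of the UI

count.set(1);
count.set(2);
console.log(count.get()); // 2
console.log(updates);     // 2: only the subscriber ran, nothing tree-wide
```

Angular's real signals add features like `computed` and `effect` on top of this idea, but the core contrast with Zone.js is the same: updates flow to known dependents instead of triggering broad checks.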
You Just Need Entities That Can Die

Forms break the Single Source of Truth rule. Not because the rule is bad. Because forms are alive. They change with every keystroke. They get submitted and thrown away.

We learned this with Redux. Putting form state in the global store felt right. It was a mistake. The mistake was not the global part. It was the forever part.
- The store kept form data after submission.
- Opening an old form showed stale data.
- Every keystroke updated the whole store. Typing felt slow.

Formik fixed this. It kept form state inside the component. React Final Form improved it. It let only changed fields re-render. Both hid the problem. They moved form state out of the global store. They did not fix its lifetime.

The real choice is not global vs local. It is alive vs dead.
- Alive state lives only as long as the form lives.
- Dead state lives forever.

Redux-form never killed form entities. Formik isolated them. Inglorious Web destroys them.

In Inglorious Web, a form is an entity. You create it when needed. You submit it. Then you destroy it.

api.notify('remove', entity.id)

The state vanishes. No reset needed. No stale data. The form disappears from the UI automatically.

This makes forms inspectable. While alive, you see all its state in dev tools. Any other part of the app can read it. No callbacks. No context.

For long-lived forms, like a settings page, you just do not destroy the entity. Its state stays until the user leaves.

You must decide when things die. This is a visible cost. The alternative is hidden costs like stale data and slow renders.

Multi-step forms? Model each step as its own entity. Or use one entity with a step property. Destroy the whole entity when done. The store stays flat. No complex nesting.

React Final Form's FormSpy exists because React re-renders parent components on any child state change. You need a subscription system to avoid that. In Inglorious Web, the rendering model is different. A field change updates only that field's DOM node. No virtual DOM. No subscription system needed. Re-renders are cheap.

This changes the design. You do not need memoization by default. The framework handles the performance.

Redux-form was right about the principle. Wrong about the lifecycle. Formik and React Final Form solved the symptom. They moved the state. The tension between volatile forms and persistent state was never solved. It was hidden.

Inglorious Web makes the lifetime explicit. Create an entity. Destroy it when done. The Single Source of Truth principle works fine for forms. You just need entities that can die.

Source: https://lnkd.in/gTFVVPep
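The create/destroy lifecycle can be sketched with a generic flat store. This is an illustration of "entities that can die", not Inglorious Web's actual API (the only call taken from the source is `api.notify('remove', entity.id)`; everything else here is hypothetical):

```typescript
// A flat store of entities, keyed by id: one entry per living form
type Store = Map<string, Record<string, unknown>>;

const store: Store = new Map();

// Create the form entity when the form opens
function createForm(id: string): void {
  store.set(id, { values: {}, dirty: false });
}

// Destroy it on submit: the state vanishes, nothing to reset
function destroyForm(id: string): void {
  store.delete(id);
}

createForm("checkout");
console.log(store.has("checkout")); // true: alive, inspectable by anyone
destroyForm("checkout");
console.log(store.has("checkout")); // false: dead, no stale data possible
```

While the entity lives, any part of the app can read it straight out of the flat store; once destroyed, stale reads are impossible by construction.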