I deleted every index.ts file in my project. Here is why. 🗑️

For years, I was obsessed with "Clean Imports." I wanted my code to look like this:

import { Button, Input, Modal } from './components';

So I created an index.ts (Barrel File) in every folder to re-export everything. It felt satisfying. It felt organized.

Then my project grew. 📈 Suddenly, I started hitting weird issues:

1. Slow Jest tests: I would test a simple Button, but Jest would crash because the index.ts was also trying to load the Modal (which had a huge library dependency).
2. Circular dependencies: Module A imports from the index, which imports Module B, which imports Module A... infinite loop. 🔄
3. Broken tree-shaking: Webpack/Vite struggled to remove unused code because the barrel file linked everything together.

❌ The Trap (Left Image): Barrel files force your tooling to process every single file in a folder, even if you only need one small function.

✅ The Fix (Right Image): Direct imports.

import { Button } from './components/Button';

Yes, the import path is slightly longer. But in exchange, you get:
* Faster CI/CD pipelines. ⚡
* Instant unit tests.
* Zero circular dependency errors.

"Clean code" isn't just about how it looks. It's about how it runs.

Are you still using Barrel Files, or have you banned them too? 👇

#TypeScript #React #WebPerformance #SoftwareArchitecture #FrontendEngineering #CodingBestPractices #JavaScript #Vite #DeveloperExperience
Deleting Barrel Files: Faster Jest Tests and Fewer Errors
More Relevant Posts
-
In WASM, the painful part isn’t when it crashes. It’s when it crashes silently.

When 🦀 Rust crosses into WebAssembly, you get a new frontline: the error and type boundary between Rust and JavaScript. If that boundary isn’t disciplined, bugs turn into “something went wrong” with no context, no repro path, and no way to debug quickly.

In a small Rust↔JS module, I try to keep that boundary in shape with three simple moves:

• Safe execution — wrap calls with `catch_unwind` so a panic becomes `Result<_, JsValue>` with context instead of an opaque trap in JS.
• Safe deserialization — decode any incoming `JsValue` into a Rust type via `serde_wasm_bindgen` and return failures as labeled `JsValue` errors.
• Transparent debug logging — when a debug flag is on, log decoded/result values; otherwise, stay quiet.

This isn’t “perfect architecture”. It’s a pragmatic contract: errors should be returnable, labeled, and diagnosable.

Curious how others handle this in Rust↔JS/WASM: do you prefer string messages at the boundary (simple), or structured error objects (more work, but a stable contract)?
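On the JS side, that contract means the caller always receives a labeled, structured error rather than a bare exception. A plain-JS sketch of the idea (the function names and the { code, message } error shape are hypothetical, standing in for the real WASM exports):

```javascript
// Hypothetical boundary contract: every call either returns a value or
// throws an object shaped like { code, message } — never an unlabeled panic.
function callBoundary(fn, input) {
  try {
    return { ok: true, value: fn(input) };
  } catch (err) {
    // Normalize: structured errors pass through, anything else gets labeled.
    const labeled = (err && err.code)
      ? err
      : { code: 'UNKNOWN', message: String(err) };
    return { ok: false, error: labeled };
  }
}

// Simulated "wasm export" that rejects bad input with a labeled error,
// mimicking a failed serde_wasm_bindgen decode.
function parsePort(input) {
  const n = Number(input);
  if (!Number.isInteger(n) || n < 0 || n > 65535) {
    throw { code: 'DESERIALIZE', message: `invalid port: ${input}` };
  }
  return n;
}
```

So `callBoundary(parsePort, 'abc')` hands back `{ ok: false, error: { code: 'DESERIALIZE', ... } }` — diagnosable, with a repro path.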
-
z-index: 9999.

We've all been there. A dropdown hides behind a modal. You bump it to 9999. Next sprint someone adds a tooltip — 99999. Then a toast notification — 999999. Nobody knows the stacking order anymore.

The fix took me 30 seconds:

$z-layers: (
  dropdown: 100,
  sticky: 200,
  modal: 300,
  tooltip: 400,
  toast: 500,
);

.modal {
  z-index: map-get($z-layers, modal);
}

One SCSS map. The entire stacking order in one place.

The real problem was never z-index itself — it was that every developer was making stacking decisions in isolation. A shared map turns an implicit convention into an explicit contract.

I started looking for other places where a simple data structure could replace scattered magic numbers. Turns out SCSS has a lot more to offer than most of us use.

What's the worst z-index you've seen in a codebase?

#frontend #css #scss #webdevelopment #cleancode
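The same idea ports directly to CSS-in-JS setups. A sketch (the layer values mirror the post's SCSS map; the helper function is a made-up illustration):

```javascript
// Single source of truth for stacking order (values from the SCSS map above).
const Z_LAYERS = Object.freeze({
  dropdown: 100,
  sticky: 200,
  modal: 300,
  tooltip: 400,
  toast: 500,
});

// Hypothetical helper: fails loudly on a typo instead of silently
// falling back to z-index: auto.
function zIndex(layer) {
  if (!(layer in Z_LAYERS)) {
    throw new Error(`Unknown z-layer: ${layer}`);
  }
  return Z_LAYERS[layer];
}
```

Usage: `style={{ zIndex: zIndex('modal') }}` instead of a bare 9999.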
-
Resolve the 'Element type is invalid' error in React when using react-slick. Analyze the CommonJS/ES Module interop failure and fix it using targeted Babel configuration.
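The usual shape of that failure: a CommonJS build exposes the component on `module.exports.default`, while the consuming code expects the module itself, so React receives an object instead of a component. A generic sketch of the interop problem (not the post's exact Babel configuration):

```javascript
// CommonJS/ES module interop shim: depending on the Babel/bundler combination,
// requiring a transpiled ESM package yields either the component itself or
// an object with it hanging off .default.
function interopDefault(mod) {
  return mod && mod.__esModule && mod.default !== undefined ? mod.default : mod;
}

// Simulated shapes of the same library under the two module systems:
const cjsShape = { __esModule: true, default: function Slider() {} };
const plainShape = function Slider() {};
```

`interopDefault` returns a usable component in both cases, which is exactly the mismatch the 'Element type is invalid' message points at.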
-
Day 71 of me reading random and basic but important dev topicsss.....

Yesterday I read about the theory behind macrotasks and microtasks. Today, I read the practical, architectural pattern: splitting CPU-hungry tasks.

Imagine you have a massive task - like syntax highlighting 100,000 lines of code or formatting a massive dataset. If you run this in a single standard function, the JS engine locks up. The DOM won't paint, and clicks are ignored.

The Solution: Chunking with setTimeout

We can unblock the UI by splitting the massive job into smaller macrotasks using zero-delay setTimeout. Instead of running a loop to 1,000,000,000 all at once:

1. Process the first 1,000,000 items.
2. Call setTimeout(chunk, 0) to schedule the next batch.
3. Return control to the Event Loop.

Because we returned control, the engine can now process pending clicks, handle network responses, and, crucially, render the DOM.

Use-Case: Progress Indicators

Because DOM updates only paint after the currently running task completes, a standard heavy for loop updating a div.innerHTML will only show the final 100% state. The user sees a frozen screen, then a jump to completion. By splitting the task via setTimeout, the browser gets a chance to render the intermediate states, allowing us to build buttery-smooth loading/progress bars purely in single-threaded JS.

Tip for Maximum Performance: When chunking, schedule the next setTimeout at the beginning of your function rather than the end. The browser enforces a minimum delay of ~4ms for deeply nested setTimeouts. Scheduling immediately means that 4ms clock starts ticking while your current chunk is still executing, reducing overall execution time.

Keep Learning!!!!

#JavaScript #Performance #SoftwareArchitecture #WebDev #Engineering
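A sketch of the chunking pattern described above, with the numbers scaled down (the per-item work and the onDone callback are placeholders):

```javascript
// Pure chunk step: advances state by up to chunkSize items, returns true when done.
function processChunk(state, chunkSize) {
  const end = Math.min(state.i + chunkSize, state.total);
  for (; state.i < end; state.i++) {
    state.sum += state.i; // stand-in for real per-item work
  }
  return state.i >= state.total;
}

// Scheduler: splits the job across macrotasks so the event loop can breathe.
function runChunked(total, chunkSize, onDone) {
  const state = { i: 0, total, sum: 0 };
  function step() {
    const isLast = state.i + chunkSize >= state.total;
    // Per the tip above: schedule the next chunk *before* doing the work,
    // so the ~4ms nested-timer clamp elapses while this chunk executes.
    if (!isLast) setTimeout(step, 0);
    if (processChunk(state, chunkSize)) onDone(state.sum);
    // Control returns to the event loop here: clicks, network, and paints run.
  }
  step();
}
```

Between steps, the browser gets its chance to paint the progress bar instead of showing one frozen jump to 100%.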
-
✅ Solved LeetCode: Path Sum (112) — Alternative Approach

Implemented a clean recursive (top-down) DFS solution in JavaScript using a subtractive strategy.

Approach:
- If the node is null, return false.
- If the node is a leaf node, simply check: root.val === targetSum
- Otherwise:
  - Recursively call the function on the left and right subtrees.
  - Pass targetSum - root.val to reduce the remaining required sum.
  - Return leftResult || rightResult.

This approach eliminates the need for an external variable and keeps the recursion concise and expressive.

Time Complexity: O(n) — each node is visited once
Space Complexity: O(h) — recursion stack space (where h is tree height)

A clean divide-and-conquer way to solve Path Sum efficiently! 🌳
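A sketch of that recursion using plain object nodes (this is a reconstruction from the steps above, not the post's exact code):

```javascript
// Top-down DFS with a subtractive target, as described above.
function hasPathSum(root, targetSum) {
  if (root === null) return false;
  const isLeaf = root.left === null && root.right === null;
  if (isLeaf) return root.val === targetSum;
  const remaining = targetSum - root.val; // shrink the required sum
  return hasPathSum(root.left, remaining) || hasPathSum(root.right, remaining);
}
```

The remaining sum travels down the call stack, so no external accumulator is needed.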
-
C++ and Java devs get built-in copy constructors. JavaScript devs get trust issues. 🚩

Let’s talk about constructors—the factory blueprints for our objects.

In JavaScript, setting up a constructor is pretty straightforward. You have your non-parameterized constructors (handing out default values like participation trophies) and your parameterized constructors (passing in actual, useful data).

But then you try to clone an object. JavaScript looks at you, shrugs, and says, "Figure it out yourself." Unlike other languages, JS doesn't have a built-in copy constructor. If you lazily type `const newObj = oldObj;`, you didn't create a copy. You just created a new reference to the exact same object. The moment you change `newObj`, you mutate `oldObj`, your UI breaks, your tests fail, and your PM is asking why the dashboard is upside down.

To truly copy an instance of a class, you have to build your own `copy()` method to explicitly return a `new` instance with the cloned data. Stop trusting the assignment operator (`=`). It is lying to you.

Confession time: How many production bugs have you caused by accidentally mutating a referenced object instead of actually copying it? Let's hear the horror stories in the comments. 👇

#JavaScript #WebDevelopment #SoftwareEngineering #MERNStack #CodingHumor #DeveloperLife #TechTips #Programming
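A minimal sketch of that hand-rolled copy() pattern (the class and field names are made up for illustration):

```javascript
class Dashboard {
  constructor(title = 'Untitled', widgets = []) {
    this.title = title;
    this.widgets = widgets;
  }

  // Hand-rolled "copy constructor": a new instance with cloned data,
  // including a fresh array so mutations don't leak across copies.
  copy() {
    return new Dashboard(this.title, [...this.widgets]);
  }
}
```

Assignment aliases; copy() actually clones, so mutating the copy leaves the original untouched.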
-
⚙️ Inside the V8 Engine: The Core Components That Run Your JavaScript

V8 is the JavaScript engine behind Chrome and Node.js. It is not a single executor, but a pipeline composed of specialized components. At a high level, V8 is built around four core parts: Parser, Ignition, TurboFan, and the Garbage Collector. Each one plays a specific role in turning JavaScript into fast machine code.

🔹 Parser
The Parser is the entry point. It reads JavaScript source code and converts it into an Abstract Syntax Tree (AST). This step validates syntax and prepares the code for execution. No optimization happens here — it’s about structure and correctness.

🔹 Ignition
Ignition is V8’s interpreter. It takes the AST and produces bytecode, which is then executed. While running, Ignition collects runtime feedback such as types and execution patterns. This data is critical for later optimizations.

🔹 TurboFan
TurboFan is the optimizing compiler. Based on feedback from Ignition, it recompiles hot code paths into highly optimized machine code. It applies aggressive optimizations, but can also deoptimize if its assumptions are broken. This is where most performance gains come from.

🔹 Garbage Collector (GC)
The Garbage Collector manages memory. It allocates objects, frees unused memory, and minimizes pauses during execution. V8 uses generational GC strategies to balance throughput and latency.

Together, these components form a continuous optimization loop: Parse → Interpret → Optimize → Collect → Repeat.

Understanding this pipeline helps explain:
• Why warm-up time exists
• Why some code is faster than others
• Why memory patterns affect performance

JavaScript is fast not by accident. It’s fast because V8 is designed as an adaptive execution engine.

#javascript #v8 #nodejs #runtime #performance #engineering #softwarearchitecture
-
✅ Solved LeetCode: Binary Tree Postorder Traversal (145)

Implemented an iterative postorder traversal in JavaScript using a single stack and a lastVisited pointer, following the sequence: Left → Right → Root.

The strategy works as follows:
- Traverse to the leftmost node, pushing nodes onto the stack.
- Peek at the top of the stack to decide the next move:
  - If the right child exists and hasn’t been visited, move to the right subtree.
  - Otherwise, process (visit) the node and mark it as last visited.
- Repeat until both the stack is empty and the current node is null.

This approach simulates recursion explicitly while avoiding multiple stacks.

⏱ Time Complexity: O(n) — each node is visited once
🧠 Space Complexity: O(h) — stack space based on tree height

A more optimized and elegant iterative way to achieve postorder traversal! 🌳
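A sketch of that single-stack approach, reconstructed from the steps above (not the post's exact code):

```javascript
// Iterative postorder with one stack and a lastVisited pointer.
function postorder(root) {
  const result = [];
  const stack = [];
  let current = root;
  let lastVisited = null;

  while (current !== null || stack.length > 0) {
    // Walk to the leftmost node, stacking ancestors along the way.
    while (current !== null) {
      stack.push(current);
      current = current.left;
    }
    const peek = stack[stack.length - 1];
    if (peek.right !== null && lastVisited !== peek.right) {
      // Right child exists and hasn't been processed yet: go right.
      current = peek.right;
    } else {
      // Otherwise visit the node and remember it as last visited.
      result.push(peek.val);
      lastVisited = stack.pop();
    }
  }
  return result;
}
```

The lastVisited pointer is what prevents re-descending into a right subtree that was already processed.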
-
✅ Solved LeetCode: Binary Tree Postorder Traversal (145)

Implemented an iterative postorder traversal in JavaScript using two stacks, following the sequence: Left → Right → Root.

The strategy works as follows:
- Use the first stack (s1) to process nodes in a modified preorder fashion (Root → Right → Left).
- Push each popped node onto a second stack (s2).
- After traversal, pop nodes from s2 to get the correct postorder sequence.

This approach effectively simulates recursion while maintaining the correct traversal order.

⏱ Time Complexity: O(n) — each node is processed once
🧠 Space Complexity: O(n) — due to the use of two stacks

A neat stack-based trick to achieve postorder without recursion! 🌳
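A sketch of the two-stack trick, reconstructed from the steps above (not the post's exact code):

```javascript
// Two-stack iterative postorder: s1 produces Root → Right → Left,
// and reversing that (via s2) yields Left → Right → Root.
function postorderTwoStacks(root) {
  if (root === null) return [];
  const s1 = [root];
  const s2 = [];
  while (s1.length > 0) {
    const node = s1.pop();
    s2.push(node);
    // Pushing left before right makes s1 pop in Root → Right → Left order.
    if (node.left !== null) s1.push(node.left);
    if (node.right !== null) s1.push(node.right);
  }
  return s2.reverse().map((n) => n.val);
}
```

Same output as the single-stack version, traded for O(n) extra space instead of pointer bookkeeping.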
-
How JavaScript Engines Actually Make Your Code Fast (JIT + Hidden Classes)

We often say “JavaScript is slow because it’s interpreted.” But modern JS engines are insanely smart. Here’s what’s really happening under the hood:

🧠 1️⃣ JIT (Just-In-Time) Compilation
Engines like V8 don’t just interpret your code. They:
• Parse it
• Convert it to bytecode
• Monitor which functions run frequently (“hot” code)
• Compile hot code into optimized machine code
If assumptions break (like types changing), the engine de-optimizes safely and re-adjusts.
👉 Your code is constantly being optimized at runtime.

🧱 2️⃣ Hidden Classes (Object Shapes)
JavaScript objects are dynamic. You can add properties anytime. But engines create internal “hidden classes” to track object structure. This allows:
- Super fast property access
- Better inline caching
- Stable optimizations
If property order changes dynamically, the engine can’t optimize as effectively.

⚡ 3️⃣ Inline Caching
When you access obj.name, the engine remembers:
- What shape the object had
- Where the property was stored
Next time? It skips the full lookup.

That’s why consistent object patterns matter more than we think.

💡 Takeaway:
- Keep object shapes consistent
- Avoid randomly adding properties later
- Write predictable code
- Don’t micro-optimise blindly — understand engine behaviour first

Understanding this changes how you think about performance. We write JavaScript. But the engine decides how fast it runs.

#JavaScript #Frontend #WebPerformance #V8 #SoftwareEngineering
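A quick illustration of the shape advice (the class and function names are made up; the performance effect lives inside the engine and isn't visible in the output, only the shapes are):

```javascript
// Consistent shape: every Point gets the same properties, in the same order,
// in the constructor — so all instances can share one hidden class.
class Point {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.label = null; // declare up front instead of bolting it on later
  }
}

// Shape-breaking pattern the post warns about: adding properties ad hoc
// creates a chain of hidden-class transitions, and conditional properties
// mean instances of the "same" type end up with different shapes.
function makeMessyPoint(x, y) {
  const p = { x };
  p.y = y;                    // transitions to a second shape
  if (x > 0) p.label = 'q1';  // some instances get a third shape, some don't
  return p;
}
```

With Point, every property access hits the same shape; with makeMessyPoint, the inline cache sees a mix of shapes and falls back to slower lookups.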
-
I know modern bundlers are getting smarter at tree-shaking barrels, but the circular-dependency risk alone makes them a 'No' for me. Who else has wasted hours debugging a circular import loop? 🙋‍♂️