Optimize JavaScript Performance with Sets and Array.includes()

💡 JavaScript Performance Tip: Most Devs Miss This

Ever used Array.includes() to check if a value exists? It works, but at scale there's a hidden cost.

What's happening under the hood:
- Array.includes() → checks items one by one (O(n))
- Set.has() → optimized for fast lookups (≈ O(1))

Why this matters: when your data grows or membership checks happen frequently (loops, filters, validations), that small includes() call can quietly become a performance bottleneck.

Rule of thumb:
- Small list, few checks → Array.includes() is fine.
- Large data, repeated checks → convert to a Set and use has().

Takeaway: performance optimization isn't about overengineering. It's about choosing the right data structure at the right time. Small change, big performance win.

#JavaScript #WebDevelopment #Programming #WebPerformance #CleanCode #DataStructures #Algorithms #React #NextJS #JS #ReactJS #Frontend #SoftwareEngineering #Coding #Tech #JavaScriptTips #WebDev
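
A minimal sketch of the two approaches (the array size, lookup count, and any resulting timings are illustrative assumptions; actual numbers vary by engine and machine):

```js
const SIZE = 100_000;
const data = Array.from({ length: SIZE }, (_, i) => i);
const lookups = Array.from({ length: 10_000 }, () =>
  Math.floor(Math.random() * SIZE * 2)
);

// O(n) per check: scans the array element by element.
console.time("Array.includes");
let hitsArray = 0;
for (const v of lookups) if (data.includes(v)) hitsArray++;
console.timeEnd("Array.includes");

// One-time O(n) conversion, then roughly O(1) per check via hashing.
console.time("Set.has (incl. conversion)");
const dataSet = new Set(data);
let hitsSet = 0;
for (const v of lookups) if (dataSet.has(v)) hitsSet++;
console.timeEnd("Set.has (incl. conversion)");

console.log(hitsArray === hitsSet); // both strategies agree on membership
```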

[Image: benchmark screenshot comparing Array.includes() and Set.has(); no alt text provided]

Abilash S: Are you sure it's ~250ms vs ~1ms? Even if Array.includes() is slower, the difference shouldn't be that big for only a few items to check. Set is definitely better for certain cases, but the one-time conversion cost means it only pays off if you keep reusing it.
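
One way to read the setup-cost point: building the Set is itself an O(n) pass, so it only wins once that cost is amortized over repeated lookups. A hypothetical helper sketching the pattern (makeMembershipCheck is an illustrative name, not from the post):

```js
// Hypothetical helper: pay the O(n) Set construction once, then reuse
// the closure for every subsequent membership check.
function makeMembershipCheck(values) {
  const set = new Set(values); // one-time setup cost
  return (v) => set.has(v);    // roughly O(1) per call afterwards
}

const isAllowedRole = makeMembershipCheck(["admin", "editor", "viewer"]);
console.log(isAllowedRole("editor")); // true
console.log(isAllowedRole("guest"));  // false
// For a single check on a small array, building the Set costs more
// than a plain ["admin", ...].includes("editor") would.
```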

I love this kind of performance tip: it's simple, practical, and makes a real difference. As a developer, I've definitely leaned on Array.includes() out of habit, but seeing the O(n) vs O(1) breakdown reminds me how crucial it is to choose the right data structure, especially when working with large datasets or frequent lookups. These small tweaks don't just improve speed; they reflect thoughtful engineering. Posts like this help us all write cleaner, faster, and more scalable code. Thanks for sharing this with the community!

For a small dataset it can work, since it also solves the sorting and duplication problems, but Set<T> is not a silver bullet. Most of the optimization process depends on what kind of data we have, how much of it, how it's sorted, and how much RAM we can sacrifice. In the given image sample you have a sorted 100k; try 10 billion.
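
Worth noting for the sorted case this comment mentions: a sorted array allows a middle ground, binary search, which is O(log n) per lookup with no extra memory beyond the array itself. A sketch assuming ascending numeric data (sortedIncludes is a hypothetical helper):

```js
// Binary search on a sorted array: O(log n) per lookup, no extra RAM
// beyond the array itself (assumes ascending numeric data).
function sortedIncludes(arr, target) {
  let lo = 0, hi = arr.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >>> 1;
    if (arr[mid] === target) return true;
    if (arr[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return false;
}

const sorted = [1, 3, 7, 12, 40, 100];
console.log(sortedIncludes(sorted, 12)); // true
console.log(sortedIncludes(sorted, 5));  // false
```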

Great, just a complementary perspective 👉 An O(1) lookup is a net loss if it's offset by an O(n) conversion and increased GC pressure. For small collections, L1/L2 cache locality often makes linear scans faster than the hashing overhead, and none of it matters if you're already inside a >50ms main-thread task.
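
A quick sketch of that net-loss case: rebuilding the Set for every check pays the O(n) conversion (plus allocation and GC pressure) each time, so a plain linear scan over a small, cache-resident array comes out ahead (the sizes and iteration counts here are illustrative):

```js
const small = Array.from({ length: 20 }, (_, i) => i);
const ITER = 100_000;

// Rebuilding the Set on every check pays the O(n) conversion each time
// and allocates garbage: exactly the net-loss case described above.
console.time("Set rebuilt per check");
for (let i = 0; i < ITER; i++) new Set(small).has(7);
console.timeEnd("Set rebuilt per check");

// A linear scan over 20 elements stays in cache and allocates nothing.
console.time("Array.includes");
for (let i = 0; i < ITER; i++) small.includes(7);
console.timeEnd("Array.includes");
```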
