THE FUTURE OF MEMORY IS ALREADY HERE

Rumor has it Bill Gates once said that 640KB is more memory than anyone could ever need.

He certainly wasn’t using a smartphone at the time to check Facebook or route around an accident on the freeway.

Fun fact: Google Maps holds over 20 petabytes of data — roughly 21 million gigabytes! I think you’re seeing the trend here: Big Data!

Today’s businesses have applications processing tens of terabytes of information, much of it in real-time. IT organizations are moving beyond memory constraints by scaling out their infrastructures, but due to the price of DRAM, scaling memory can become cost prohibitive. The demand for memory is outstripping supply, and it’s not slowing down. Organizations must find a more flexible and budget-conscious way to address memory requirements, today and tomorrow.

Over the past several years, vendors have promised to expand capacity, reduce costs, and optimize performance with Storage Class Memory. That sounds wonderful, but where is it and when will it actually become fully available?

Let’s take Intel’s 3D XPoint, for example. The technology was first announced in 2015, but after several delays it’s unlikely to arrive as enterprise-class memory before late 2018. So the wait continues. Clearly, there are some real challenges to address. After all, if it were easy, everyone would be doing it, right?

But business needs and IT requirements aren’t going to wait. Organizations must find a way to keep up with the ever-increasing need for bigger memory footprints while containing costs as they scale. Databases, analytics, data processing, and virtual machine hosts are all examples of applications that require massive amounts of memory that cannot be satisfied with DRAM alone – both from a capacity and cost standpoint.

Let’s take graph analysis, for example. Graph analysis is applied to Big Data everywhere, from social media networks, to flights between cities, to maps that automatically reroute your travel via the shortest path with the least traffic. These cases are all about analyzing the connections between many endpoints to detect meaningful relationships. Without fast and accurate analysis, Bill Gates could end up in South Carolina instead of NYC!

Graph datasets produce large amounts of interim or temporary data. It is common for a 200GB dataset to require 2 or 3TB of “in-memory” computational data, and larger datasets can use tens of terabytes. But moving all of that data between storage and memory is extremely inefficient. Think DRAM access over NUMA links versus block transfers over the PCIe bus. Bill needs updated routes quickly!
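To make that inefficiency concrete, here’s a back-of-envelope sketch in Python. The bandwidth figures are illustrative assumptions (a PCIe 3.0 x16 link versus aggregate per-socket DRAM bandwidth), not numbers from this article or from any vendor measurement:

```python
# Back-of-envelope: time for one full pass of a graph workload's
# interim working set over two different data paths.
# Bandwidth numbers below are rough, assumed figures for illustration.

PCIE3_X16_GBPS = 16    # ~16 GB/s practical for a PCIe 3.0 x16 link (assumed)
DDR4_SOCKET_GBPS = 75  # rough aggregate DRAM bandwidth per socket (assumed)

def transfer_seconds(working_set_gb: float, bandwidth_gbps: float) -> float:
    """Seconds to move the working set once at the given bandwidth."""
    return working_set_gb / bandwidth_gbps

# The article's example: a 200GB dataset ballooning to ~3TB of interim data.
working_set_gb = 3 * 1024

print(f"One pass over PCIe:  {transfer_seconds(working_set_gb, PCIE3_X16_GBPS):.0f} s")
print(f"One pass from DRAM:  {transfer_seconds(working_set_gb, DDR4_SOCKET_GBPS):.0f} s")
```

Even with generous assumptions, every full pass of the working set over the bus costs minutes, and iterative graph algorithms make many passes — which is why keeping the data byte-addressable in the memory tier matters.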

A hybrid approach using DRAM and Flash bridges the gap between memory and storage at a realistic price point. It can be implemented in such a way that the OS and applications access one large pool of byte-addressable system memory, with DRAM for performance and enterprise durability, and Flash for capacity and affordability.

In order for this to work, sophisticated software must understand how memory is being used, and manage the data placement accordingly. Metadata and application data must be positioned in the proper tier to ensure the highest efficiency and performance. Algorithms must be implemented to accurately analyze and predict usage patterns, pre-fetch data, and manage media writes for optimal performance and durability.
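As a rough illustration of the data-placement idea — a toy sketch, not Diablo’s actual DMX implementation — here is a minimal two-tier store in Python. It promotes recently touched pages into a small “DRAM” tier and demotes the least-recently-used page to a large “Flash” tier when DRAM fills:

```python
from collections import OrderedDict

class HybridMemory:
    """Toy two-tier page store: a small DRAM tier backed by a large
    Flash tier. Recently used pages live in DRAM; the least-recently-
    used page is demoted to Flash when DRAM is full. (Illustrative
    only -- real tiering software also predicts access patterns,
    pre-fetches data, and manages Flash write endurance.)"""

    def __init__(self, dram_pages: int):
        self.dram_pages = dram_pages
        self.dram = OrderedDict()   # page_id -> data, kept in LRU order
        self.flash = {}             # page_id -> data, the capacity tier

    def write(self, page_id, data):
        self._evict_if_needed()
        self.dram[page_id] = data          # new data lands in the fast tier
        self.dram.move_to_end(page_id)

    def read(self, page_id):
        if page_id in self.dram:           # hot hit: refresh LRU position
            self.dram.move_to_end(page_id)
            return self.dram[page_id]
        data = self.flash.pop(page_id)     # cold hit: promote to DRAM
        self._evict_if_needed()
        self.dram[page_id] = data
        return data

    def _evict_if_needed(self):
        while len(self.dram) >= self.dram_pages:
            victim, data = self.dram.popitem(last=False)
            self.flash[victim] = data      # demote the LRU page to Flash
```

The application sees one pool of pages; the policy layer decides which tier each page occupies — the same division of labor the software described above performs, just without the prediction and endurance management.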

All of this amounts to bigger memory footprints, enabling more work per node for databases, analytics, data processing, cloud, and other applications. But what about cost? More work per node lowers costs through consolidation of compute, network, power, cooling, rack space, and other resources. With physical memory abstracted to effectively manage hybrid memory technologies, memory resources can now scale intelligently. This hardware-agnostic approach uses lower-cost media today, while seamlessly incorporating new memory technologies and platforms going forward.

Storage Class Memory based on new media is coming. But Memory1 from Diablo Technologies delivers this hybrid memory approach today in a JEDEC-compliant form factor. And as new memory technologies emerge, Diablo’s DMX software can easily adapt to additional tiers and new types of media.

Why wait for the open-ended promises of future technologies, when there is a proven, affordable, and high-performing solution available now?

And in case you’re wondering, thanks to Memory1, Bill made it to New York, right on time.

Tim Sheets, Senior Director Product Marketing





