How to Select In-Memory Computing Technologies for Big Data

I recently published (with a little help from my colleagues Roxane Edjilali, Nick Heudecker, Keith Guttridge and Kurt Schlege. Thanks, guys!) a follow-up to the "Architecting for New Velocity Needs for In-memory Computing and Big Data" report, which discusses how IMC technologies can be applied to analytics/big data projects.

Below is a summary of the main findings. Enjoy!

How to Select In-Memory Computing Technologies for Big Data

To tackle digital business moments, organizations often must adopt in-memory computing capabilities to enable real-time analytics of ever-larger and faster-moving datasets. However, it is often unclear to information management leaders which IMC technologies best fit their business needs.

What's Going On?

  • Digital business enablers, such as social, mobile and the Internet of Things, give organizations an opportunity to make business decisions by analyzing ever-larger, but also faster-moving, datasets. This often challenges conventional information management infrastructures and technologies, including conventional big data platforms such as Hadoop.

  • In-memory computing technologies are designed to support ultrafast processing of datasets that are the size of tens of terabytes. However, due to a fragmented technology and vendor landscape, it's difficult for information management leaders to identify which IMC technology (or combination thereof) is the best fit for their big data analytics initiatives.

What Do You Need To Do?

  • Use high-performance messaging infrastructure for low-latency and high-scale transport of data on the move.

  • Adopt event stream processing for real-time pattern detection within data on the move.

  • Endorse in-memory data grids to efficiently absorb large volumes of data on the move.

  • Leverage in-memory DBMSs for faster and deeper analysis of data at rest.

  • Deploy in-memory analytics to enable self-service analytics of data at rest.

  • Consider the adoption of IMC platforms combining multiple capabilities.
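To make the event stream processing recommendation above more concrete, here is a minimal sketch of real-time pattern detection over data on the move. The `StreamWindow` class, the threshold rule, and all names are hypothetical illustrations, not part of the Gartner research or any specific ESP product; commercial ESP platforms offer far richer windowing and pattern languages.

```python
from collections import deque

class StreamWindow:
    """Sliding count-based window for simple pattern detection over an event stream."""

    def __init__(self, size=3, threshold=100):
        self.size = size
        self.threshold = threshold
        # deque with maxlen automatically evicts the oldest event
        self.events = deque(maxlen=size)

    def push(self, value):
        """Ingest one event and report whether the pattern fired."""
        self.events.append(value)
        return self.detect()

    def detect(self):
        # Hypothetical rule: alert when the window average exceeds the threshold.
        if len(self.events) < self.size:
            return False
        return sum(self.events) / self.size > self.threshold

# Feed a small stream of sensor-like readings through the window
window = StreamWindow(size=3, threshold=100)
alerts = [window.push(v) for v in [50, 80, 120, 150, 160]]
print(alerts)  # [False, False, False, True, True]
```

The same in-memory, incremental evaluation style (hold only a bounded window, react per event) is what lets ESP engines detect patterns with millisecond latency instead of batch-scanning data at rest.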

Find the full research here. (Warning: you must be a Gartner client to access the document.)

Nice summary. Special mention for the objective of big data: turning huge amounts of data into information through in-memory analytics.


Apache Spark is best for in-memory processing of Big Data

