The Final Fast Data Frontier

Fast data is the future of data management architectures. Once the decision to move towards fast data has been made, there is one more important step: choosing a fast data technology. While the choices made in all phases of the architecture are important, there must be special attention paid to the choices for the fast data portion of the system.

Three technology categories can be considered as the core components for the fast data portion of the enterprise data architecture. Although all are highly capable systems, there are pros and cons to each one, with some being better suited to meet the broad requirements of fast data. The categories are:

  • Fast OLAP Systems

New in-memory OLAP systems can drastically reduce reporting times and enable near real-time analysis of fast-arriving data, and some can also ingest data extremely quickly. However, OLAP solutions are designed to be analytic engines; they are not meant to make decisions on individual events entering the system. The ability to provide transactions at the point where data enters the architecture is the primary value delivered by the fast data portion of the architecture.

  • Stream Processing Systems

Stream processing approaches, including complex event processing (CEP), have proven valuable in industries where very specific patterns and timings must be identified, such as capital markets trading. Stream processing provides scalable message processing and coordination between systems, but it does not maintain data state. As a result, and much like OLAP, stream processing systems are limited in how they can interact with events entering the pipeline: because they were not designed to serve the needs of modern fast data applications, the context supplied by other data is lost. Writing additional code in an attempt to perform continuous computations (real-time analytics) only complicates the system, so stream processing tends to be a poor match.
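The statelessness described above can be sketched in a few lines. This is a hypothetical illustration, not any particular stream processor's API: the function name, event fields, and threshold are all invented for the example.

```python
# Hypothetical illustration of a stateless stream stage: each event is
# judged in isolation, with no memory of earlier events in the stream.

def flag_large_trades(events, threshold=10_000):
    """Stateless stage: emits each event, flagging any over the threshold."""
    for event in events:
        if event["amount"] > threshold:
            yield {**event, "flagged": True}
        else:
            yield event

trades = [
    {"id": 1, "amount": 500},
    {"id": 2, "amount": 25_000},
    {"id": 3, "amount": 500},
]

# The stage cannot answer questions like "has this account exceeded its
# daily limit?" -- that requires state held outside the stream processor.
processed = list(flag_large_trades(trades))
```

Per-event pattern matching like this works well, but any decision that depends on accumulated context forces the state into an external store, which is the complication the paragraph above describes.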

  • Operational Database Systems

By definition, operational database systems are designed to support per-event decision-making. Although operational databases are the standard for interactive applications, they have historically been unable to meet the performance demands of fast data use cases. The in-memory NewSQL systems now available can meet those performance demands while also delivering full-dataset analytics. Because these systems were designed with fast data in mind, integration with big data is typically already built in.
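Per-event decision-making can be sketched as follows. This is a minimal illustration using Python's built-in sqlite3 as a stand-in for an in-memory operational database; the schema, function, and amounts are invented for the example and do not reflect any specific NewSQL product.

```python
# Minimal sketch: each incoming event is decided inside its own
# transaction against current state, using sqlite3 as a stand-in
# for an in-memory operational database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")

def debit(account_id, amount):
    """Decide on a single event: approve the debit only if the
    current balance covers it, all within one transaction."""
    with conn:  # one transaction per event; commits or rolls back
        row = conn.execute(
            "SELECT balance FROM accounts WHERE id = ?", (account_id,)
        ).fetchone()
        if row is None or row[0] < amount:
            return False  # reject the event
        conn.execute(
            "UPDATE accounts SET balance = balance - ? WHERE id = ?",
            (amount, account_id),
        )
        return True

debit(1, 60)  # approved: balance drops from 100 to 40
debit(1, 60)  # rejected: balance of 40 cannot cover 60
```

The point of the sketch is the shape of the interaction: the decision consults and updates state at the moment the event arrives, which is exactly what stateless stream processors and batch-oriented OLAP engines cannot do.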


Application developers and technical managers building fast and big data applications have a number of technology alternatives to consider; the three categories above are the main architectural approaches to delivering fast data. All three are covered in greater detail in VoltDB’s eBook “Fast Data and the New Enterprise Data Architecture,” so take a look to learn more about the different fast data technology choices.

