Go into the (Data Center) light
We are arguably in the largest data center transition to occur since "data center" became a word. The changes can feel like a bright light with only fuzzy details of what is actually happening inside. Some people argue that all the cloud and new data center discussions are nothing more than the natural re-centralization of assets after the disaggregation of the old mainframe days. Others may simply believe a cloud is just a big data center somewhere else. While it is easy to see where both of these beliefs come from, both oversimplify the real changes we are seeing in the fundamentals of how data centers are built.
It seems as if every month a new technology company emerges from stealth mode and, even more surprising, we are seeing a significant number of recent IPO filings in the field. Why is this? Almost without exception, the companies that are succeeding are the ones that have found a way to radically simplify their chosen space, often by abstracting control and standardizing hardware. To create the hyper-scale infrastructure required to run a public cloud while keeping management within a reasonable budget, the cloud providers created strict hardware standards, often completely separate from any proprietary technologies. They largely adopted a standard building block that combines compute and storage using only commodity hardware. This was the stark opposite of what many large enterprises were doing at the time, and it planted the seed for "hyperconverged" companies like Nutanix and SimpliVity to bring the concept to traditional on-premises IT shops.
More than standard hardware is required to manage the zettabytes of information the public cloud vendors support. They needed to be able to quickly move and rebalance work, both to perform operational activities like infrastructure upgrades and to minimize the effect of the failures inherent in hardware at that scale. Luckily, Moore's law has essentially held, and standard x86 CPUs can now do what until recently could only be done in proprietary custom ASICs. Commodity hardware is now powerful enough to be abstracted to the point where the infrastructure appears generic and uniform to the software that performs most of the control.
Broadly, this transition is the "complete" abstraction of infrastructure control into software. In today's future-ready data centers, hardware is simply a tool that carries out the instructions of software. This is not a new idea, though. We have been implementing virtualization for specific functions for decades; what enables the broader change now is the ability to abstract or virtualize the controls of complete hardware solutions and every part within them. What does this mean to you? It likely means you have been inundated with marketing from established manufacturers and new startups alike, telling you they are the best technology since Ethernet and will save you 500% over the product's life cycle. While sometimes this is true, it is difficult to cut through the fear, uncertainty, and doubt.
In general, I suggest evaluating any data center technology against these four guidelines:
1. Does this significantly reduce my cost to do business?
2. How easily does the technology scale to twice what I think I will need?
3. Is this technology standards based?
4. Will this technology restrict other technology I may want to use?
Does the new technology landscape look like a blinding light with fuzzy details? That is understandable and common, but it is worth further investigation. Keep the above guidelines in mind and find a good partner who knows the world of these emerging data center technologies. Nearly every business can realize significant savings from them. Enjoy the trip through the light.