Go into the (Data Center) light

We are arguably in the largest data center transition since "data center" became a word. The change can feel like a bright light with only fuzzy details of what is actually happening inside it. Some people argue that all the cloud and data center discussions happening today describe nothing more than the natural re-centralization of assets after the disaggregation of the old mainframe days. Others simply believe a cloud is just a big data center somewhere else. While it is easy to see where both of these beliefs come from, both oversimplify the real changes we are seeing in the fundamentals of how data centers are built.

It seems as if every month a new technology company emerges from stealth mode, and even more surprising, we are seeing a significant number of recent IPO filings in the field. Why is this? Almost without exception, the companies that are succeeding have found a way to radically simplify their chosen space, often by abstracting control and standardizing hardware. To create the hyper-scale infrastructure required to run a public cloud while keeping management within a reasonable budget, these providers created strict hardware standards, often completely separate from any proprietary technologies. To do this, they largely adopted a standard building block of compute and storage together, using only commodity hardware. This was the stark opposite of what many large enterprises were doing at the time. This practice was the seed for "hyperconverged" companies like Nutanix and SimpliVity to begin bringing the concept to traditional on-premises IT shops.

More than standard hardware is required to manage the zettabytes of information the public cloud vendors support. They needed to be able to quickly move and rebalance workloads, both to perform operational activities like infrastructure upgrades and to minimize the impact of the hardware failures inherent at that scale. Luckily, Moore's law has essentially held, and standard x86 CPUs can now do what until recently could only be performed in proprietary custom ASICs. Commodity hardware is now powerful enough to support a level of abstraction that makes the infrastructure appear generic and standard to the software that now performs most of the control.

Generically, this transition is the "complete" abstraction of infrastructure control into software. In today's future-ready data centers, hardware is simply a tool that carries out the actions of software. This is not a new idea, though. We have been implementing virtualization for specific functions for decades; what enables the broader change now is the ability to abstract or virtualize the controls of complete hardware solutions and every part within them. What does this mean to you? It likely means you have been inundated with marketing from all the existing manufacturers and new startups, each telling you they are the best technology since Ethernet and will save you 500% over the product's life cycle. While sometimes this is true, it is difficult to decipher the fear, uncertainty, and doubt.

In general, I suggest that all data center technology be evaluated against these four guidelines:

1. Does this significantly reduce my cost to do business?

2. How easily does the technology scale to twice as far as I think I will need?

3. Is this technology standards-based?

4. Will this technology restrict other technology I may want to use?

Does the new technology landscape look like a blinding light with fuzzy details? That's understandable and common, but it is worth further investigation. Keep the above guidelines in mind and find a good partner who knows the world of these emerging data center technologies. There are significant savings to be realized in them for nearly every business. Enjoy the trip through the light.
