The Battle for Your Data Center
Daniel Ewing

By far, the most common discussion I am having with IT leaders is how to utilize "Clouds." This buzzword has been hot in the market for a few years, and with the recent advances in Private, Hybrid, Public, and Hyper-scale clouds, it can be a complicated discussion.

  • What are your data privacy requirements?
  • What workloads do you run, and at what scale?
  • How do you manage product life cycles?
  • What is your available intellectual capital?
  • Can you find the people with the needed intellectual capital?
  • Do you prefer capital or operational expense models?
  • What is the impact of a failure on your business?

These are some of the basic questions that can help a business decide which cloud model will work best for it. This is a valuable discussion, and one I recommend every company have. After weighing the pros and cons and consulting with a strong partner, your business decides on its cloud model (Private, Hybrid, Public, or Hyper-scale). Now we get to the fun part: how do we build a solution that meets our business, performance, and economic needs?

This is where the bulk of the data center war is occurring. It is widely accepted that the most likely cloud market outcome is a fluid, diverse ecosystem of entities leveraging Private, Hybrid, Public, and Hyper-scale clouds to best meet their needs. Few parties will debate this. The debate is over the best fundamental strategy to reach that end. Over the last 40-plus years, IT has focused on building bigger, better, and safer hardware. This has produced several amazing best-of-breed products that have kept pace with Moore's law not just in compute but across the entire data center, with amazing resiliency. In the last few years there has been a focused effort to consolidate these awesome beasts of equipment into converged stacks that greatly simplify their implementation and management.

A new idea has gained traction in the marketplace in the last five years or so. Although the idea is relatively old, the need and the technology have finally caught up. Spearheaded by universities and the Hyper-scale crowd, there is a move toward large numbers of easily replaceable, less expensive devices. Instead of keeping up with the front end of Moore's law's progression, these parties intentionally fall back on the curve to lower-performing, less reliable, less expensive equipment. They can do this because they deploy a high number of devices, each carrying a small portion of the total workload. They expect a shorter mean time between failures, but when failures occur, the impact is limited to a small percentage of the total workload. This approach allows companies like Amazon Web Services and Microsoft Azure to cheaply deploy vast pools of resources built from relatively cheap, low-power hardware. This "Scale-out" architecture relies on load-balancing mechanisms to intelligently and reliably distribute the work. When a node fails, its load is simply reallocated to other nodes, with minimal service disruption because each node is only a small fraction of the whole.
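To make the scale-out idea concrete, here is a toy sketch (all names hypothetical, not any vendor's actual scheduler) of work spread across many small nodes, where a single failure strands only a small slice that the surviving nodes absorb:

```python
import random

class ScaleOutPool:
    """Toy model of a scale-out pool: many small nodes, each holding
    a small slice of the total work, so one failure costs little."""

    def __init__(self, node_count, total_work_units):
        # Spread the work evenly across many inexpensive nodes.
        self.nodes = {f"node-{i}": [] for i in range(node_count)}
        for unit in range(total_work_units):
            self.nodes[f"node-{unit % node_count}"].append(unit)

    def fail_node(self, name):
        # On failure, only this node's small slice must move.
        orphaned = self.nodes.pop(name)
        survivors = list(self.nodes)
        for unit in orphaned:
            self.nodes[random.choice(survivors)].append(unit)
        return len(orphaned)

pool = ScaleOutPool(node_count=100, total_work_units=10_000)
moved = pool.fail_node("node-7")
# With 100 nodes, one failure disrupts only ~1% of the workload.
print(f"Reallocated {moved} of 10000 units ({moved / 10000:.0%})")
```

That arithmetic is the heart of the economics: at 100 nodes, losing one puts roughly 1% of the workload in motion, which is why cheaper, less reliable hardware becomes acceptable at scale.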

Many customers who want a Private Cloud read case studies from Facebook, Google, and Amazon, see that those companies build their clouds from large numbers of inexpensive devices, and assume this must be the best model. But companies like Facebook, Google, and Amazon have immense scale and armies of PhDs to squeeze every last drop out of their resources. Most customers cannot compete on either of those grounds. The large manufacturers and startups alike know this. There is not a significant IT manufacturer in the market today (or tomorrow) that is not trying to figure out how to take as much market share as possible in this new Cloud marketplace.

VMware is arguably the leader in the charge to bring customers the ability to use higher numbers of inexpensive devices to meet their needs. This is a natural extension of VMware's thinking and strategy. VMware helped revolutionize the computing market with its virtualization technologies, putting a thin layer of software on hardware that divides the physical resources among multiple virtual servers. VMware is now putting a thin layer of software over all data center hardware. With this design it can abstract control and management of the data center off of the physical hardware. Once management and control are off the hardware, the hardware becomes less important and customers can buy less expensive pieces. For many customers, this method makes a lot of sense. The customers I see successfully adopting VMware's cloud/virtualization strategy have sufficient scale to benefit from the increased ease of management and enough available overhead to absorb the inefficiencies introduced by abstracting management and control into software.

A not-so-obvious design consideration when deciding between powerful, resilient devices and a higher number of less resilient, more affordable devices is the availability and requirements of supporting services. Many small nodes create the opportunity to apply very specific services at a lower cost per instance. For example, VMware's NSX uses micro-segmentation to place a firewall on literally every virtual port if you want to (see the sketch below). In general, it is easier to insert services and actions when processes are divided into small chunks, but remember that every action added in software adds at least one more action in hardware. If the underlying hardware can serve the additional requests on the same host at bus speeds, the impact will be negligible. But depending on the physical design of the data center, needed services may be several physical hops away. The idea of a software-defined, highly mobile, and flexible data center is a great one and can benefit many customers. One cannot forget, however, that simply introducing an abstraction layer does not remove the underlying physical constraints. Customers must be aware of the impact potentially non-deterministic performance will have on their workloads and business success.
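As a minimal sketch of what per-port micro-segmentation means in practice, consider a default-deny rule set evaluated at every virtual port. This is purely illustrative: it assumes nothing about NSX's actual API, and every group and port name below is made up:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src_group: str   # logical group of the sender, e.g. "web"
    dst_group: str   # logical group of the receiver, e.g. "db"
    dst_port: int    # destination TCP port

# One allow list enforced at every virtual port: default deny.
ALLOW = {
    Rule("web", "app", 8080),   # web tier may reach the app tier
    Rule("app", "db", 5432),    # app tier may reach the database
}

def permitted(src_group: str, dst_group: str, dst_port: int) -> bool:
    """Evaluate the per-port firewall: deny unless explicitly allowed."""
    return Rule(src_group, dst_group, dst_port) in ALLOW

# A web VM cannot reach the database directly, even on the same subnet.
assert permitted("web", "app", 8080)
assert not permitted("web", "db", 5432)
```

Keep in mind that each of these software checks still costs real work somewhere in hardware, which is exactly the trade-off described above.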

The counterargument to a high number of affordable devices is powerful, resilient hardware deployed with more traditional IT strategies while still taking advantage of the centralized management that cloud offerings require. This side of the argument is largely championed by Cisco. The networking titan has essentially bet the farm on its version of the Virtual Data Center and Clouds. Cisco is one of the first to point out the inefficiencies and failure rate of the white-box, inexpensive-hardware, completely software-defined solution championed by VMware. Cisco's strategy is not to ignore the hardware and do everything in software, but to maintain its high hardware standards and performance while abstracting the management to a central controller. I like to use the metaphor of having a meal in the park, where the ground is the physical layer. VMware gives customers a nice blanket and says you can put it anywhere you want. Cisco makes the ground itself meet the customer's needs for the meal. You can put a blanket over a hole, a rock, or another flaw in the physical layer, but the results will not be the same as putting the blanket over solid ground.

Cisco and other traditional IT companies like Juniper, HP, and HDS see the transition to Cloud not as a largely disruptive change but as the natural progression of IT. They offer options that ease the mental transition to cloud. These offerings leverage the physical advantages that products like Cisco's Nexus and UCS stack have over lower-priced white-box products while adding much of what the cloud movement introduced. These companies are creating automation stacks that can compete with traditional IT in one configuration and, with a few changes, with the most "revolutionary" completely software-defined cloud. Cisco, for example, has introduced the Nexus 9000 series of switches, which leverage the industry-standard (non-proprietary) Trident II chipset to bring down cost while maintaining most of Cisco's technological advantages. With the new N9K, customers can deploy Cisco's software-defined networking offering, Application Centric Infrastructure (ACI). With ACI, customers can design workflows that automate management and operations in much the same way as other cloud offerings while ensuring optimal efficiency and reliability in the physical layers, as in the sketch below.
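The flavor of that model is declarative: you describe the object you want and the controller (APIC) reconciles the fabric to match. The rough REST shape below (log in, then post a JSON-modeled object) follows Cisco's published APIC API, but the hostname, credentials, and tenant name are placeholders, and the exact payloads should be verified against Cisco's documentation:

```python
import requests

APIC = "https://apic.example.com"  # placeholder controller address

def create_tenant(session: requests.Session, name: str) -> None:
    # ACI is declarative: post the desired object (here, a tenant)
    # and the controller drives the physical fabric to match it.
    payload = {"fvTenant": {"attributes": {"name": name}}}
    resp = session.post(f"{APIC}/api/mo/uni.json", json=payload, verify=False)
    resp.raise_for_status()

with requests.Session() as s:
    # Log in first; APIC returns a token cookie the session reuses.
    login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}}
    s.post(f"{APIC}/api/aaaLogin.json", json=login, verify=False).raise_for_status()
    create_tenant(s, "ExampleTenant")
```

The point is less the specific call than the division of labor: the intent lives in the central controller, while forwarding stays in purpose-built hardware.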

While it is relatively easy to see why many new companies want to say that cloud is disruptive and established companies want to say that cloud is the natural progression, neither is really wrong. There are resource-intensive workloads that require resource-heavy infrastructure. There are workloads that can be parallelized across a high number of machines. There are companies and industries with chain-of-custody and data-sovereignty requirements that benefit from centralized, deterministic data paths. There are companies that simply want to swipe a credit card and expand when needed. What is right for you? The journey to the cloud involves many decisions and is different for every organization. Be sure to partner with a team that not only knows the state of the industry but also takes the time to know your company and your needs.

Daniel Ewing - Daniel.Ewing@Insight.com - (425) 977-5395 - Field Account Executive
