Distributing to better scale

The Holy Grail for any CIO is to satisfy users and customers alike by deploying services and applications quickly. But this dream will remain out of reach for many as long as CIOs are hampered by legacy infrastructures burdened with heavy maintenance, which consume both human and financial resources and leave little room to develop, deploy and innovate. In a world of monolithic applications that evolved at best every two or three years, assembling the hardware and software resources to host a new application was not necessarily a problem. It was easy to get a few new machines, add an OS (old or new), a little network and a hint of storage... There was no need to worry, as time was on our side! The problem arose when success hit and the number of users suddenly exploded: everything then had to be thrown away to start again from scratch. Suddenly there was a need for bigger machines, bigger disks, bigger bandwidth... and more people to assemble it all into a coherent whole able to accommodate new developments.

Be more agile with Distributed Computing

It doesn't take a genius to figure out that this approach is unrealistic in today's environment: it is simply too expensive and lacks responsiveness. It can even be dangerous. Let me explain: ensuring the resilience of increasingly heavy and complex siloed environments has become so difficult that protecting them is a real challenge. Bigger is simply not better. By betting on ever-bigger machines you run the risk of not being able to restart in time after a failure. And while the servers are down, users cannot work and customers cannot place new orders.

As for the ability to scale up in a world where the Internet has turned everything upside down with its millions of users and its frantic pace... forget it! Today, thinking that all problems can be solved by ever-larger machines is an illusion, whether from a financial or a scalability perspective.

This is why it is essential to rethink the infrastructure from a new angle, one that brings more scalability and responsiveness while also simplifying maintenance and administration. The solution? A D-I-S-T-R-I-B-U-T-E-D architecture!

As the major cloud providers have demonstrated, a distributed approach is key. It is best to rely on a widely distributed infrastructure, with services built on distributed software deployed on standardized servers that are cheaper and can be stacked at will. When new needs arise (increased performance, a new application to deploy, and so on), they can be met by adding new blocks in a pure "scale-out" spirit. To do more, you simply add basic stackable elements that are instantly recognized and absorbed by the software layer that assembles the whole.
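To make the idea concrete, here is a minimal, purely illustrative sketch of the scale-out principle: a toy cluster grows simply by appending commodity nodes, and a hash-based placement function spreads data across whatever nodes are present. All names (Cluster, Node, add_node, place) are hypothetical and do not represent any vendor's actual API.

```python
# Illustrative sketch only: a toy cluster whose capacity grows by adding nodes.
# All names (Node, Cluster, add_node, place) are hypothetical, not a vendor API.
import hashlib
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    capacity_gb: int

@dataclass
class Cluster:
    nodes: List[Node] = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        # "Scale out": a new commodity node is simply appended; the placement
        # logic below starts using it on the next write, no redesign needed.
        self.nodes.append(node)

    def total_capacity_gb(self) -> int:
        return sum(n.capacity_gb for n in self.nodes)

    def place(self, object_key: str) -> Node:
        # Deterministic hash-based placement spreads objects across all nodes
        # currently in the cluster, including ones added after initial build.
        digest = int(hashlib.sha256(object_key.encode()).hexdigest(), 16)
        return self.nodes[digest % len(self.nodes)]

cluster = Cluster()
cluster.add_node(Node("node-1", 4000))
cluster.add_node(Node("node-2", 4000))
print(cluster.total_capacity_gb())        # 8000
cluster.add_node(Node("node-3", 4000))    # growing = adding another block
print(cluster.total_capacity_gb())        # 12000
print(cluster.place("vm-disk-42").name)   # one of the three nodes
```

The point of the sketch is that growth never requires re-architecting: capacity and placement both follow automatically from the list of nodes.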

Unleashing the full potential of scale-out

The first advantage of scaling out is the elimination of resilience and performance issues: a platform built on distributed services removes the old bottlenecks that could cause performance to drop, and it also makes the infrastructure natively tolerant to the failure of individual components. This is the basis of Nutanix's strength compared to its competitors: all the platform's services are designed to be distributed, whether core services, the Prism administration console, storage services, databases as a service, security, and so on. Freed from scalability and resilience issues, the IT team can quickly deliver the services needed for its company's digital transformation.
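As a rough illustration of how distribution removes single points of failure, the toy sketch below writes each value to two nodes so that a read still succeeds when one node goes down. It shows a generic replication idea under assumed names (ReplicatedStore, REPLICATION_FACTOR); it is not Nutanix's actual storage implementation.

```python
# Illustrative sketch only: generic replication so that losing one node does
# not lose data. Names (ReplicatedStore, REPLICATION_FACTOR) are assumptions,
# not Nutanix's actual storage layer.
from collections import defaultdict

REPLICATION_FACTOR = 2  # each value is written to this many distinct nodes

class ReplicatedStore:
    def __init__(self, node_names):
        self.nodes = list(node_names)
        self.data = defaultdict(dict)   # node name -> {key: value}
        self.failed = set()             # nodes currently considered down

    def put(self, key, value):
        # Write the value to REPLICATION_FACTOR consecutive nodes (toy policy).
        start = hash(key) % len(self.nodes)
        for i in range(REPLICATION_FACTOR):
            node = self.nodes[(start + i) % len(self.nodes)]
            self.data[node][key] = value

    def get(self, key):
        # Any surviving replica can serve the read.
        for node in self.nodes:
            if node not in self.failed and key in self.data[node]:
                return self.data[node][key]
        raise KeyError(key)

store = ReplicatedStore(["node-1", "node-2", "node-3"])
store.put("vm-config", {"cpus": 4})
store.failed.add("node-1")          # simulate a single node failure
print(store.get("vm-config"))       # still served by a surviving replica
```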

By the way, this distributed approach to hyperconvergence also gives CIOs new financial leeway to invest in new projects, by spreading the acquisition of systems and licenses over time, and significantly reducing the cost of operations and maintenance.
