7 Considerations for Building a Multi-Cloud Solution
The vast majority of enterprises in 2017 are employing public cloud services to some extent. However, they don’t settle for just one cloud provider, but rather choose to deploy their workloads on multiple clouds. Recent research shows that enterprises are running their applications on an average of two public clouds, and are experimenting with another two public cloud providers (as well as multiple private clouds, which are out of the scope of this article).
In this article, we explore seven key challenges and considerations to be taken into account when choosing a multi-cloud strategy, such as vendor lock-in, optimal workload performance, security, data transfer, and others.
1. Resilience to Outages
If a 5-nines SLA is part of your business model, you can't afford to be down, ever (five nines = ~5 minutes of downtime per year). While rare, cloud outages can and do occur, and they can be extremely harmful to your business. The failure of even a single service can create a massive domino effect, as demonstrated by the AWS S3 outage in early 2017, which killed or disrupted eight other AWS services and took down a plethora of additional services that depend on those eight. Having a multi-cloud strategy in place, with clearly defined DR and business continuity protocols that take advantage of the different cloud providers, can help mitigate, or completely dodge, the destructive effects of cloud outages on your business.
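The "nines" arithmetic above is easy to make concrete. A quick sketch of the annual downtime budget implied by common SLA levels:

```python
# Allowed annual downtime for common uptime SLAs.
# Five nines (99.999%) leaves roughly 5.26 minutes per year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_budget_minutes(sla_percent: float) -> float:
    """Minutes of downtime per year permitted by an uptime SLA."""
    return MINUTES_PER_YEAR * (1 - sla_percent / 100)

for nines, sla in [(3, 99.9), (4, 99.99), (5, 99.999)]:
    print(f"{nines} nines ({sla}%): "
          f"{downtime_budget_minutes(sla):.2f} min/year")
```

Three nines already allow almost nine hours of downtime per year; at five nines, a single multi-hour provider outage blows the budget for decades.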
2. Vendor Lock-In
In business, it is never recommended to rely on a sole provider for your operations. The same goes for your cloud provider: operating within a single cloud creates many business risks. These risks include dependence on the provider's pace of innovation, dependence on the provider's proprietary technologies and platforms, and asymmetric data transfer pricing, which incentivizes users to transfer their data into the cloud and discourages them from transferring it out.
This lock-in to a single cloud provider also weakens your position when negotiating prices and terms with the cloud provider. Multi-cloud architectures can make sure that you aren’t dependent on a single provider’s technology.
3. Cloud Agnosticism
In order to achieve a streamlined multi-cloud deployment, your applications need to be platform-agnostic. They can't rely on any one provider's proprietary service or technology, but rather on ubiquitous services that are available from any of the large cloud providers. This essentially means sticking to basic IaaS. For example, on AWS you will want to stick with the EC2 and S3 services, which are bare-bones infrastructure that can be ported to Azure VMs and Blob Storage, but refrain from using services like AWS Lambda. Even though the latter has a parallel on Azure (Azure Functions), the two providers have different, proprietary implementations that may not port well.
This is easier said than done, though. Many services behave differently on different platforms, so you have to choose tools that are not dependent on higher-level, vendor-specific services. Sometimes companies decide, for performance reasons, not to use specialty services and to build their own solutions instead. The downside of this approach is that these companies will probably be building a lot of their own proprietary tools, and thus doing a lot of the firefighting normally left to the cloud vendor. The upside is that many services provided by the cloud vendors are based on open source projects, so you can maneuver with some degree of ease.
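One common way to keep application code portable is to hide provider services behind your own interface. A minimal sketch, assuming object storage is the service being abstracted; the class names and in-memory stand-in below are illustrative, not a real SDK:

```python
from abc import ABC, abstractmethod

# Application code depends only on this interface; provider-specific
# adapters (e.g., wrapping the AWS or Azure SDKs) live behind it, so
# swapping S3 for Azure Blob Storage touches one adapter, not the app.

class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Test double; in production you would plug in an S3- or
    Azure-backed implementation of the same interface."""
    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]
```

The trade-off is the same one described above: you maintain the adapters yourself, in exchange for application code that doesn't care which cloud it runs on.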
Another thing to consider is the difficulty of hiring multi-cloud personnel. Cloud architects and engineers are hard to find as it is, and they are usually proficient in only one cloud provider's services. Finding and recruiting personnel who know more than one provider can be both time- and budget-consuming.
4. Financial Complexity
One of the most complex issues when using a public cloud is billing. Monthly cloud bills are a virtually endless list of hard-to-comprehend line items, services, price rates, and tags. Determining the cost of an application, or the cost to be charged back from a business unit, is tedious and error-prone. This challenge becomes even more complex when multiple cloud bills arrive at the end of the month. You have to parse different line items from different bills, which are structured differently, have different pricing models, and varying price rates.
For example, Google Cloud Platform offers on-demand pricing but applies sustained use discounts automatically. AWS, on the other hand, has Reserved Instance pricing, which, if fully paid upfront, won't appear as a monthly charge at all. When looking at block storage (instance-attached volumes), AWS charges for I/O operations on its magnetic drives, while Google does not. The examples go on and on, and the complexity grows rapidly with the number of cloud providers you're using.
To avoid this chaos, it is imperative to use multi-cloud financial management tools when employing a multi-cloud strategy. These tools aim to “normalize” all the different cloud bills into a single presentation. In addition to clarifying pricing and chargeback, this allows apples-to-apples pricing comparison between cloud providers and lets you choose the best-priced cloud combination for your needs.
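The "normalization" these tools perform can be sketched in a few lines. The input field names below (`ProductCode`, `UnblendedCost`, `label_team`, etc.) are hypothetical stand-ins for whatever your billing exports actually contain, not the providers' real export schemas:

```python
# Map differently structured billing line items into one common shape,
# then aggregate costs per team tag across providers for chargeback.

def normalize_aws(row: dict) -> dict:
    return {"provider": "aws",
            "service": row["ProductCode"],
            "cost_usd": float(row["UnblendedCost"]),
            "team": row.get("user:team", "untagged")}

def normalize_gcp(row: dict) -> dict:
    return {"provider": "gcp",
            "service": row["service"],
            "cost_usd": float(row["cost"]),
            "team": row.get("label_team", "untagged")}

def chargeback(normalized_rows) -> dict:
    """Total cost per team, regardless of which cloud it ran on."""
    totals = {}
    for row in normalized_rows:
        totals[row["team"]] = totals.get(row["team"], 0.0) + row["cost_usd"]
    return totals
```

Once every line item shares one schema, per-team chargeback and provider-to-provider price comparisons become simple aggregations instead of spreadsheet archaeology.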
5. External Business Requirements
Deploying a multi-cloud architecture is not always an internal business decision, but rather a requirement brought in by customers. Some customers have a preference for one cloud provider or another, and, depending on their size and significance to your business, can force you to deploy a dedicated copy of your application on the cloud of their choice.
Walmart, for example, sees Amazon.com as its main competitor in the retail industry. Out of concern for business espionage by Amazon if its data is stored on AWS, it requires all of its software/SaaS suppliers operating on AWS to deploy another copy of their product on a different cloud provider.
6. Connectivity
When operating within a single cloud provider, intra-cloud data transfer costs are usually very low (and very often free). This is not the case for data transfer outside the realms of a cloud provider. These transfers carry a significant price tag, and, when working with multiple clouds, the amount of inter-cloud data transfer grows and makes for a hefty sum at the end of the month.
As an example, AWS will charge $0.06 – $0.09 per GB of outbound data (i.e., data transferred from your cloud infrastructure onto the Internet), whereas Microsoft Azure will charge between $0.09 and $0.13 per GB of outbound traffic. Depending on the use case, you could have a large flow of data between the clouds (e.g., replicating databases), which you would consider intra-cloud within your multi-cloud. However, the two providers consider each other to be “something residing on the Internet” and will charge for outbound traffic on each of their respective sides.
What you might consider intra-cloud traffic within your multi-cloud is actually heaps of outbound data transfer from both clouds to the Internet.
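The double-billing effect is worth working through with numbers. A rough sketch, using the per-GB rates quoted above (the Azure rate here is the low end of its quoted range, chosen for illustration):

```python
# Each provider bills its own outbound side of an inter-cloud sync,
# so two-way replication pays egress twice, once per direction.

AWS_EGRESS_PER_GB = 0.09    # upper end of the AWS range quoted above
AZURE_EGRESS_PER_GB = 0.09  # low end of the Azure range quoted above

def monthly_egress_cost(gb_aws_to_azure: float,
                        gb_azure_to_aws: float) -> float:
    """Combined monthly egress bill for a two-way inter-cloud sync."""
    return (gb_aws_to_azure * AWS_EGRESS_PER_GB
            + gb_azure_to_aws * AZURE_EGRESS_PER_GB)

# Replicating 1 TiB each way per month:
print(f"${monthly_egress_cost(1024, 1024):.2f}")  # ~$184 per month
```

Traffic that would have been free (or nearly free) inside a single provider turns into a recurring line item on two separate bills.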
Another connectivity consideration is the case of a cloud outage. Even during huge outages, traffic inside the cloud keeps working, i.e., your AWS virtual machines can talk to each other. But that is not the case for inter-cloud connectivity. Be wary of interconnectivity issues and how they can affect your infrastructure: If you do have to sync services between clouds, use asynchronous systems as much as you can, pay for better connectivity when possible, and don't let any cloud-specific service become a single point of failure.
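"Use asynchronous systems" means, in practice, never making the write path wait on the inter-cloud link. A minimal sketch, where `send_to_other_cloud` is a hypothetical stand-in for your replication call:

```python
import time
from collections import deque

# Writes are queued locally and flushed with retries, so a broken
# inter-cloud link delays replication instead of failing the write.

class AsyncReplicator:
    def __init__(self, send_to_other_cloud, max_retries: int = 5):
        self.send = send_to_other_cloud
        self.max_retries = max_retries
        self.queue = deque()

    def enqueue(self, record) -> None:
        """Local write path: always succeeds, link up or down."""
        self.queue.append(record)

    def flush(self) -> None:
        """Drain the queue; give up gracefully if the link stays down."""
        while self.queue:
            record = self.queue[0]
            for attempt in range(self.max_retries):
                try:
                    self.send(record)
                    break
                except ConnectionError:
                    time.sleep(2 ** attempt)  # exponential backoff
            else:
                return  # link still down; keep record queued for later
            self.queue.popleft()
```

The queue absorbs outages; a background job calling `flush()` catches replication up once connectivity returns, which is exactly the behavior you want when the other cloud is temporarily "the Internet gone dark."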
7. Failover
Building reliable systems is a huge, cross-industry challenge today. Although books could be written on this challenge, here are a few rules of thumb:
- Active-active. Keep all of your clouds running, even at minimal capacity, all of the time, ensuring integrations and data pipelines are in place. Not all your clouds need to run at full scale all the time, but there is a huge difference between scaling up and starting up.
- Practice failing. Periodically force one cloud to take over for all the rest. This practice will let you face, in a controlled setting, the problems you unknowingly created. You can do it during off-peak hours, when all the relevant personnel are available. Some companies even make cool events out of it. As a favorite saying goes: “Backup always works; it is usually the restore part that doesn't.”
- Move your users around between clouds at random to see what new issues they might encounter. It’s best that those issues happen when you have time to help them, and not when you’re dealing with a crisis.
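The last rule of thumb can start as something very small: route a configurable slice of sessions to the secondary cloud during normal operations. A minimal sketch, with hypothetical endpoint URLs:

```python
import random

# Send a small fraction of sessions to the secondary cloud so
# cross-cloud issues surface while engineers are around to fix them.

ENDPOINTS = {
    "primary": "https://app.primary-cloud.example",
    "secondary": "https://app.secondary-cloud.example",
}

def pick_endpoint(secondary_fraction: float = 0.05, rng=random) -> str:
    """Route roughly `secondary_fraction` of sessions to the other cloud."""
    name = "secondary" if rng.random() < secondary_fraction else "primary"
    return ENDPOINTS[name]
```

Dialing `secondary_fraction` up to 1.0 is the same mechanism as the "practice failing" drill above: a forced, controlled takeover rather than an emergency one.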
Summary
A recent survey shows that 20% of enterprises employ multiple public clouds in 2017, up from 16% in 2016 (a 25% growth rate). I hope you now understand why enterprises choose to pursue a public/public multi-cloud strategy: avoiding vendor lock-in, eliminating single points of failure in your architecture, adhering to customer business requirements, and more.
Choosing whether or not multi-cloud is right for you is not an easy decision, since it has repercussions for your entire IT infrastructure as well as personnel and budgets. Some of the points to consider carefully are the costs of inter-cloud data transfer, the re-architecting of applications, backup and failover procedures, and financial complexity.