Cloud Agnostic – The hard path to the Cloud

I worked with many groups at my last company, helping them get their projects into the cloud. I would review their existing architecture, then create a cloud-based architecture and migration plan for them. Then came the meeting: we would go through the new architecture and discuss the differences. Many times, the next thing I heard was “That’s great, but we want to be cloud agnostic”. So we moved forward, set up the infrastructure, and they forklifted their systems onto instances running in the cloud.

Invariably, a few months go by and I get a phone call from the project lead or the business manager for the group. The project is running fine, but it is costing a huge amount of money in engineering and cloud resources, and they need to cut back. I go back onto the program for a few weeks and get their cloud costs down to something more reasonable.

So what happened (or didn’t happen)? 

  1. AWS Well-Architected Framework best practices were not completed, and optimization was ignored. The engineers needed to get their systems working as quickly as possible. On small or medium machines things ran slowly, so their solution was to use the biggest, fastest instances and storage, and things ran great. No one went back in to optimize (until my phone started ringing because of the cost). In their defense, their motivation was to get things working quickly. It was the easiest path; in car terms: more horsepower, not less wind resistance.
  2. Focus on the wrong problem. This is pretty interesting when you think about it. If you haven’t done so, go read “Are Your Lights On?” by Gerald M. Weinberg, then come back and finish this article; it’s a great book and it won’t take long.

What was the problem that needed to be solved? It was an analytics problem, not an infrastructure problem. By forcing ‘Cloud Agnostic’ as a key quality attribute of the system, the team was pushed to become experts at deploying cloud-agnostic systems, yet their core competency was the analytics problem they were trying to solve. Running on multiple clouds was not an immediate requirement, meaning it might never happen. Still, up-front time, energy, and money were invested in creating and administering scalable infrastructure (message queues, databases, load balancers, and so on) that could be ‘forklifted’ to different clouds. Those resources could have been spent solving the analytics problem while leveraging the managed services of the cloud provider.

If you are going to roll your own rather than leverage what is tested and proven in the cloud, make sure that is the problem you want to solve. Otherwise, leverage the cloud: use as many of the provider’s managed services as you can up front and get the core problem solved. Then, in the future, if requirements change and running on another cloud becomes a necessity, move the analytic solutions and leverage that cloud’s services. Much less code to manage, much less complexity introduced, faster project completion, and most likely less money spent.
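One way to sketch this trade-off in code: keep the team’s analytic logic behind a minimal interface, and let whatever managed service the provider offers (SQS, Pub/Sub, Service Bus) implement that interface later. If a second cloud ever becomes a requirement, only the thin adapter changes, not the analytics. This is a hypothetical Python sketch; all names here are illustrative assumptions, not a real cloud SDK:

```python
# Hypothetical sketch: the analytics code depends only on a two-method
# queue interface, not on any particular cloud provider.
from typing import Optional, Protocol


class Queue(Protocol):
    def send(self, message: str) -> None: ...
    def receive(self) -> Optional[str]: ...


class InMemoryQueue:
    """Local stand-in for tests. In a real deployment, an adapter with the
    same two methods would wrap a managed service such as SQS or Pub/Sub."""

    def __init__(self) -> None:
        self._messages: list[str] = []

    def send(self, message: str) -> None:
        self._messages.append(message)

    def receive(self) -> Optional[str]:
        return self._messages.pop(0) if self._messages else None


def run_analytics(queue: Queue) -> list[str]:
    """The team's core competency lives here, unaware of the provider."""
    results = []
    while (msg := queue.receive()) is not None:
        results.append(msg.upper())  # placeholder for the real analytic work
    return results
```

Porting to another cloud then means writing one new adapter class, while `run_analytics` and everything built on it stays untouched.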

This isn’t to say there aren’t architectures that require multiple cloud providers for optimal results. But let your team solve the problem they excel at, and let the cloud provider handle elasticity, scalability, and managed infrastructure through the services in which they excel.
