Delivering in a fully cloud deployed environment

Delivering change in an all-Microsoft-Azure environment has highlighted the fundamental differences between being fully cloud deployed and running within our own data centres. In no particular order, here are some of the key differences I have observed.

Provisioning infrastructure is like a free bar (with the associated hangover)

The speed and simplicity of provisioning new assets on a cloud platform demand greater scrutiny and stronger governance processes than historical "on-premises/hosted" implementations ever did.

Historically, the procurement of physical servers acted as a natural check and balance against over-enthusiastic virtual machine (VM) provisioning: once the physical machine was at capacity, no more VMs could be created. In a cloud-deployed model, the all-you-can-eat buffet of processing capacity can give you serious financial indigestion if appropriate governance isn't applied and the provisioning of new services goes unchecked.

You need to change your thinking on virtual machine availability, workload and sizing

When moving virtual machines from legacy hosting environments to the cloud, legacy sizing metrics can be unreliable. Sizing needs to be reviewed post-migration from a compute, availability and workload perspective, with a particular focus on finding a cost-optimised balance.

The "on-consumption" charge models of cloud providers changes the economic dynamics of "always on" deployments. In the old world, having a batch server available 24/7 (for processing insurance renewals for example) was not financially an issue, even if the machine was only doing work for a couple of hours overnight. In the cloud world this is a very expensive way of operating, as you are paying for idle machine consumption.

The role of the infrastructure architect is changing

A move to the cloud reduces the focus on hardware domain knowledge within the organisation, and a consequence of this is that it blurs the distinction between the infrastructure, application and solution architect roles. Moving to the cloud has also led to an increased focus on networking (both physical and virtual), and has brought other factors, such as projected consumption and the associated run-cost calculations, into the domain of the infrastructure architect role.

The cloud changes the economics of change and run cost models within a corporate IT function

Traditionally, most projects delivered new infrastructure such as servers as a single upfront capex cost, with ongoing run costs such as power and security relatively minor once the infrastructure was deployed. Cloud consumption-based charging models change this: there is minimal initial capex outlay, but a higher ongoing operational run cost to be considered. This can be less attractive from a financial-treatment point of view, as it impacts depreciation and return-on-investment calculations.
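To make the financial-treatment point concrete, here is a minimal sketch of how the annual profit-and-loss impact shifts. All figures are invented for illustration, not quotes:

```python
# Invented figures for illustration only.
capex_server = 12_000.0   # upfront hardware purchase
depreciation_years = 4    # straight-line depreciation period
cloud_monthly = 400.0     # ongoing consumption charge for an equivalent workload

# On-premises: the cost hits the balance sheet up front, then flows to the
# P&L as depreciation spread over several years.
annual_depreciation = capex_server / depreciation_years

# Cloud: little upfront outlay, but the full run cost hits the P&L every year,
# for as long as the workload runs.
annual_cloud_opex = cloud_monthly * 12

print(f"On-prem annual P&L impact: {annual_depreciation:.2f}")
print(f"Cloud annual P&L impact:   {annual_cloud_opex:.2f}")
```

With these assumed numbers the cloud workload shows a higher annual charge against profit even though it required no capital outlay, which is exactly why depreciation and return-on-investment calculations need revisiting.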

The CIO function needs a cloud data analyst role

Because cloud charging is largely consumption based, a new role is emerging within the CIO function, one that combines data analytics and solution architecture. Excellent tools like Power BI, when combined with Azure Consumption Insights, give a wealth of information on consumption within the cloud. To effectively manage a large enterprise cloud deployment, the role needs to proactively analyse this data to understand how to optimise each instance from several perspectives:

  • Availability - is the resource available for the right processing windows? Does it really need to be available 24/7?
  • Performance - is the resource over- or under-specified for the workload being performed on it? (Over-specification was a recurring theme among resources we migrated from legacy data centres.)
  • Usage - is the resource being used? (Testing environments consume significant resources but are not permanently in use, and provide an opportunity to reduce consumption through better management.)
  • Cost - is the true cost of the resource understood by the business?
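A minimal sketch of the kind of review this role performs, assuming consumption data has already been exported. The record fields and thresholds here are illustrative assumptions, not the actual Azure export schema:

```python
from dataclasses import dataclass

@dataclass
class ResourceUsage:
    """Illustrative consumption record -- field names are invented, not Azure's schema."""
    name: str
    avg_cpu_pct: float    # average CPU utilisation over the review period
    hours_active: float   # hours with any workload during the period
    period_hours: float   # length of the review period in hours
    monthly_cost: float

def review(resources):
    """Flag candidates for scheduling or resizing, per the perspectives above."""
    findings = []
    for r in resources:
        # Availability: mostly idle -> candidate for scheduled shutdown.
        if r.hours_active / r.period_hours < 0.25:
            findings.append((r.name, "availability: consider a scheduled shutdown"))
        # Performance: very low CPU -> likely over-specified for the workload.
        if r.avg_cpu_pct < 10:
            findings.append((r.name, "performance: likely over-specified"))
    return findings

fleet = [
    ResourceUsage("batch-vm-01", avg_cpu_pct=5.0, hours_active=60,
                  period_hours=720, monthly_cost=350.0),
    ResourceUsage("web-vm-02", avg_cpu_pct=45.0, hours_active=720,
                  period_hours=720, monthly_cost=280.0),
]
for name, finding in review(fleet):
    print(name, "->", finding)
```

In a real deployment the input would come from the billing and metrics exports rather than hand-built records, but the analysis pattern is the same: join consumption with utilisation and flag the outliers.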

Tagging resources in the cloud

Key to effective management of resources in the cloud is a defined and consistently applied tagging strategy. This is particularly important for supporting business-unit chargebacks for consumption. Common challenges include:

  • Mistakenly aligning tags to the current organisational structure - financial services companies restructure and realign business divisions all the time; a tagging structure needs to be able to support these events without triggering large wholesale retagging activity.
  • Losing the business context of the service - commercial enterprise package applications consist of multiple technical components. For example, a policy administration system might consist of a document production system, a rating engine, multiple databases and so on. The tagging approach needs to reflect not only each component, but the larger part that component plays in a business context.
  • Grouping too much into single resource groups - from a management perspective it makes sense to minimise the number of resource groups, but from a business chargeback perspective this complicates matters. Given the granularity of cost and usage information now available, I have seen business users (who ultimately pay the bill) demanding greater insight into the true cost of doing business.
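One way to avoid tying tags to the org chart is to tag resources with a stable, service-level identifier and keep the service-to-division mapping as data that can be updated when a restructure happens. A sketch of the idea, in which every tag key, service name and division is an invented example:

```python
# Tags on each resource carry stable, service-level business context only.
resource_tags = {
    "vm-policy-rating-01": {"service": "policy-admin", "component": "rating-engine", "env": "prod"},
    "db-policy-docs-01":   {"service": "policy-admin", "component": "document-store", "env": "prod"},
}

# The volatile org-chart mapping lives outside the tags: a divisional
# restructure means updating this one table, not retagging every resource.
service_to_division = {
    "policy-admin": "Retail Insurance",
}

def chargeback_division(resource: str) -> str:
    """Resolve which division pays for a resource via its stable service tag."""
    service = resource_tags[resource]["service"]
    return service_to_division[service]

print(chargeback_division("vm-policy-rating-01"))
```

The `component` tag preserves the business context of each technical piece (the rating engine, the document store) while `service` ties them back to the larger application they belong to.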

Package vendors are unclear on their cloud strategy

Commercial package vendors are moving to software-as-a-service offerings. This doesn't necessarily align with wider corporate strategies of migrating to the cloud. It is a challenge for businesses in the financial services industry, where core systems are rarely used in isolation, and access to, and enrichment of, data from multiple systems is key to competitive advantage. A case in point: one of our core policy administration systems is provided as a hosted software-as-a-service model. We pay a licence to use the software (and the vendor's data centre for storage), pay to download our data back into our own cloud so we can use it with other corporate applications that we do host, and pay to store that data ourselves. On top of this, the vendor's uptime and other SLAs are substantially poorer than Microsoft's. Not great economics.

In addition, many vendors offering "host in your own cloud" solutions are effectively repackaging traditional data centre deployments - i.e. "stand up 10 VMs of this specification to run our software". This is not necessarily an optimised deployment model from a cost and run perspective. The few vendors providing truly architected-for-the-cloud, serverless solutions don't yet seem to have a clear deployment model for how to host (and provide maintenance in) a customer tenant. The level of admin/privileged access required and adherence to customer change-control processes are common blockers.

Architected for cloud solutions are the future

Whilst there is much talk of the benefits of the cloud, it's unclear how much of its full potential is currently being realised. Recently, we deployed our first serverless solution, using several "only in the cloud" technologies - Azure Search, Cosmos DB, ASP.NET Core and Blob Storage - to create a cost-effective legacy data archive solution. This has delivered significant financial benefits and provided a faster, more user-friendly solution than retaining data in a myriad of legacy systems. More significantly, it has demonstrated the true opportunity the cloud presents when you design and build a solution for "on demand" usage from scratch.

"is the true cost of the resource understood by the business?" Hit the nail on the head!

To view or add a comment, sign in

Others also viewed

Explore content categories