As I have been getting this question more frequently in recent weeks, let me document in one place why it is useful to have two orchestration layers in Deployment Automation, the roles they play, and typical technologies to use for each. This applies to Ops2Dev, which I am pushing for Infrastructure / Platform IT, but also to the wider domains of Business IT.
There are two typical problems to address when developing and deploying Platforms automatically, with even more emphasis when your organization acts as a Service Provider, delivering managed platforms to entities in your Enterprise or to clients, as Ops2Dev aims to do:
- Integration of provisioned resources and deployed components into Enterprise Management = it is not sufficient to create and install; that is maybe only 20% of the task. We have to provide Managed Services on top of them, and so connect or integrate them with Service Management, System Management, and the processes for delivering the services (compliance, health checks, monitoring, patching, backup, user ID management, authorization/roles, routing management, security flows, etc.)
- Reusability of building blocks (Patterns) and composition sequences (Application parts) = you want maximum reuse of the technical components and of the technical solutions built on them: industrialized installations aligned with your best practices, optimized Managed Services quality and efficiency, faster Application development and build, and savings on build and run costs. However, you face significant variability in the Enterprise Management tools and processes to integrate with (bullet #1) across the entities or clients you serve.
You need something that can be stable, and thus largely reusable, on the technical side, while adapting to the context and to the specifics of the entity or client for which it is provided and managed. Said otherwise: you solve the same technical problems and provide the same functions across those deployment and service contexts, so you do not want to redo / re-code each time the context changes, and you cannot plan and parameterize in advance for all the richness and diversity out there in those contexts.
A good answer is to split the Deployment Automation orchestration layer in two, following the classic Architecture practice of decoupling. That has worked well for me and others in a significant number of Cloud solutions by now, and it is bound to have even more value in the coming times, as we raise the level of automation to Platforms and to interfaces with Applications and DevOps, where variations grow exponentially:
- Enterprise orchestration, based on a BPM (Business Process Manager) or an ITSM Service Catalog type of orchestration, for integrating deployed contents and resources with the varying processes, service management, and system management specifics of each context.
- Technical orchestration, for the reusable patterns themselves, based on abstract technical sequencing and composition engines like VMware vRA/vRO or IBM UrbanCode Deploy, to cite only two that I know. They need to be abstract enough not to be tied to a particular deployment technology (Puppet, Chef, Ansible, Terraform, OpenStack ..), nor to particular targets (private cloud, public cloud, KVM, vSphere, Citrix ...).
In short, the first adapts to the wide external variations of clients and entities, and the second provides the functions and components to deploy, independent of the locations where they are deployed.
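To make the split concrete, here is a minimal sketch of the decoupling, assuming hand-written Python stand-ins for both layers. All class, method, and hook names are hypothetical illustrations of mine; a real solution would use a BPM/ITSM engine for the first layer and a tool like vRO or UrbanCode Deploy for the second.

```python
# Illustrative sketch of the two-layer split. All names are hypothetical;
# real solutions use a BPM/ITSM engine and an abstract technical
# orchestrator (vRO, UrbanCode Deploy, ...) instead of plain Python.

class TechnicalOrchestrator:
    """Stable, reusable layer: knows HOW to deploy a pattern,
    independent of the client or entity it is deployed for."""

    def deploy_pattern(self, pattern: str, params: dict) -> dict:
        # In reality: sequence the pattern's technical steps, delegating
        # to Puppet / Ansible / Terraform and to the target cloud.
        return {"pattern": pattern, "status": "deployed", **params}


class EnterpriseOrchestrator:
    """Varying layer: knows the client-specific processes the
    deployment must be integrated with (CMDB, change mgt, ...)."""

    def __init__(self, technical: TechnicalOrchestrator, client_hooks: list):
        self.technical = technical
        self.client_hooks = client_hooks  # vary per entity or client

    def request_service(self, pattern: str, params: dict) -> dict:
        resource = self.technical.deploy_pattern(pattern, params)  # stable part
        for hook in self.client_hooks:                             # variable part
            hook(resource)  # e.g. register in CMDB, open a change record
        return resource


# Usage: the same technical pattern serves two clients; only the
# enterprise-side hooks differ, the technical layer is untouched.
registered = []
client_a = EnterpriseOrchestrator(
    TechnicalOrchestrator(),
    client_hooks=[lambda r: registered.append(("cmdb_A", r["pattern"]))],
)
client_a.request_service("web-server", {"env": "prod"})
```

The design point is that changing clients means swapping `client_hooks`, never re-coding the technical deployment logic.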
Typical characteristics of each are:
Enterprise Orchestration (BPM / ITSM Service Catalog)
- Process and Service view
- Integrate with specific Enterprise Management systems = CMDB, Change Management, triggering of parallel Run processes on new resources = health check, batch monitoring, capacity management ..
- Point where underlying Service Providers hook in their own Delivery processes & systems for Ops of the deployed patterns (there can be several SPs in a Cloud solution and service, and requiring them all to align with your processes and tools would be wrong and inefficient)
- Audit point, Approvals, management of quotas
- Integration point of manual and automated parts, allowing transparent and progressive automation (this one is very important: you never automate 100% on day 1!)
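That last point, progressive automation, can be sketched as follows. This is a toy illustration with names of my own (`Step`, the `automated` flag), not a feature of any specific BPM product: the process definition stays stable while individual steps flip from manual to automated over time.

```python
# Toy sketch of progressive automation: the process definition is
# stable, while individual steps flip from manual to automated one
# by one. All names are illustrative, not from any BPM product.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Step:
    name: str
    automated: bool = False
    action: Optional[Callable[[], str]] = None  # set once automated

def run_process(steps: list) -> list:
    log = []
    for step in steps:
        if step.automated and step.action:
            log.append(f"{step.name}: {step.action()}")
        else:
            # In a real BPM engine, this would create a task for a human.
            log.append(f"{step.name}: assigned to operator (manual)")
    return log

# Day 1: mostly manual. Later: automate steps one at a time,
# without touching the overall process definition.
steps = [
    Step("approval"),                                       # stays manual
    Step("provision", automated=True, action=lambda: "done"),
    Step("cmdb_registration"),                              # not automated yet
]
```

The value of modeling it this way is exactly the transparency mentioned above: the audit trail, approvals, and process sequence do not change when a step gets automated.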
Technical Orchestration (abstract technical sequencing and pattern composition engine)
- “Technical glue” representing the stable technical knowledge and best practices to deploy and manage a Service or an Application.
- Compose and sequence the different Technical Patterns to deploy a Service or Application, with variability in topologies by environment type, in scaling policies ..
- Versioning: guarantee the relations between Pattern versions, Application sequences, and deployment environments, e.g. that only validated versions are deployed to Production
- Modular to allow recursive composition and reusability: more complex patterns built from previously developed simpler patterns, which are themselves composed or atomic patterns = the units of reuse
- Rely on Broker (a separate function / component) to place workloads where they comply with policies while optimizing costs
- Idempotency: complement what already exists; provision new resources only if they do not already exist. I.e. avoid the syndrome of deploying a new VM each time a new Application has to be deployed!! 80% of the time, it should be just adding some components to existing Platform components and using them more, without creating a VM .. so obvious in the world of containers, but apparently not for people wired to think in VMs 😔.
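Two of the characteristics above, recursive pattern composition and idempotency, can be illustrated together in a short sketch. The names are hypothetical and the inventory is a plain set; real engines implement these ideas with their own resource models.

```python
# Illustrative sketch (hypothetical names): atomic patterns as leaves,
# composite patterns built recursively from simpler ones, and an
# idempotent apply() that only provisions what does not exist yet.

class AtomicPattern:
    def __init__(self, name: str):
        self.name = name

    def apply(self, inventory: set) -> list:
        if self.name in inventory:      # idempotency: already there,
            return []                   # complement it, do not recreate
        inventory.add(self.name)
        return [f"provisioned {self.name}"]

class CompositePattern:
    """A more complex pattern composed of simpler (atomic or
    composite) patterns = the units of reuse."""

    def __init__(self, name: str, parts: list):
        self.name = name
        self.parts = parts

    def apply(self, inventory: set) -> list:
        actions = []
        for part in self.parts:         # recursive composition
            actions += part.apply(inventory)
        return actions

# A "web_app" pattern reuses a "platform" pattern, itself composed.
platform = CompositePattern("platform", [AtomicPattern("vm"), AtomicPattern("db")])
web_app = CompositePattern("web_app", [platform, AtomicPattern("app_code")])

inventory = {"vm"}                      # the VM already exists
actions = web_app.apply(inventory)
# Only the missing pieces get provisioned; the existing VM is reused.
```

Deploying `web_app` on an inventory that already contains the VM provisions only the database and the application code, which is exactly the "complement what exists" behavior described above.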
As one can see, the two layers have different technical characteristics and purposes, and solving both domains with a single technology is rarely efficient: each is not adapted to handle the other.
Let me know if you have additions to this picture, or see other aspects to consider ⌨.