(5) Software Defined Everything: Orchestrating Microservices into Full-Scale Government Solutions
So far we have learned what the role of microservices is and why they bring a new, more resilient, and more adaptable way of building solutions. We are creating self-sustainable, business-encapsulated, loosely coupled units of work that are connected to other microservices through well-defined interfaces. But many questions remain. How do we make them all work together? How do we connect and orchestrate them into bigger Services that are more meaningful to the end user?
Developing individual microservices should not be hard. But what happens when we need to develop an end-user service that spans the boundaries of individual microservices, where the logic of the orchestrated interactions is more complex, and where the business process we are solving is not limited to a single microservice in one organization but crosses several? We rarely have a microservice that fully executes the end user's intention, and if we do, we should probably reconsider splitting it into more services - most likely we have just wrapped legacy thinking in a microservices approach and developed a next-gen monolithic application.
Why is this so important?
Government processes are, by definition, very complex and span multiple organizations and systems. Connecting their execution into a single flow is always very difficult - and the level of difficulty grows as we try to break down the complexity by using microservices. Instead of a few connection points, we suddenly have many, along with micro-orchestrations that need to be executed in the proper order to achieve the end goal.
How to orchestrate complex Services?
To "orchestrate" a workflow between microservices, we have several different approaches:
- Orchestrating microservices: a centralized authority drives the process flow and orchestrates it from beginning to end (successful or not)
- Choreographing microservices: we distribute work to the microservices and let them work out how to complete and return their part of the work
- Remoting microservices: an approach that is sometimes described as a services gateway
Orchestrating Microservices the "Old Way"
Using this approach, we usually have some kind of "master" orchestrator that holds a description of the flow and directs specific microservices to perform specific functions. This is a very common approach for complex Government orchestration, where we keep the flow description (the business process) written in a specific language and stored in a process repository. The process is often described with multiple asynchronous and synchronous calls to the microservices, and the master orchestrator keeps track of where the customer request is in the process flow. Usually, parts of the orchestration are configurable, so that at run time we can partially configure how the execution will look and, most importantly, influence the thresholds or parameters of specific microservice executions using something like a rule engine.
We commonly find this approach in microservices that deal with service parameters (for example, if number_of_years is greater than 67, we can reroute the request to a specific service that deals with senior citizens). They are part of one or several orchestration monoliths that exist in shared services centers, and they orchestrate business processes across a single (rarely multiple) Government entity. At the same time, they are mostly underutilized, given that business processes and their mapping to software services are poorly described and still kept in the silos of specific Government organizations. Building shared orchestrations means that your IT systems must be developed in a specifically resilient way, and if just one participant in the orchestrated chain is not ready to support the orchestration in that way, the whole concept falls apart.
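The run-time configurable routing mentioned above can be sketched in a few lines. This is a minimal illustration, not a real rule engine: the threshold, the parameter name, and the service names are assumptions taken from the age example in the text.

```python
# A configurable parameter, in practice loaded at run time from a rule
# repository or rule engine rather than hard-coded.
SENIOR_AGE_THRESHOLD = 67

def route_request(number_of_years: int) -> str:
    """Pick the target microservice based on a configurable rule.

    Service names are hypothetical; a real orchestrator would resolve
    them through a registry or process repository.
    """
    if number_of_years > SENIOR_AGE_THRESHOLD:
        return "senior-citizens-service"
    return "general-service"
```

The point is that the flow itself stays fixed while the threshold can be changed centrally, without redeploying the participating microservices.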
For that approach, the wise thing would be to look at tools with well-developed Business Process modeling software - bringing some benefits (like central orchestration, if we want that, and probably an easier understanding of the current state of a specific process instance). It usually works well for synchronous orchestration, but it has many disadvantages if you are trying to make asynchronous point-to-point calls between services. Most of the issues are not related to successful execution - the major issues arise when you need to properly handle failed transactions.
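The failed-transaction problem mentioned above is usually solved with compensating actions: the orchestrator remembers which steps succeeded and undoes them in reverse order when a later step fails. Below is a sketch under the assumption that every step ships with its own compensation; the step names are hypothetical.

```python
def run_process(steps):
    """Drive steps in order; on failure, compensate completed steps.

    steps: list of (name, action, compensation) tuples, where action
    and compensation are callables. This is a simplified, in-memory
    illustration of the pattern, not a production saga engine.
    """
    completed = []
    for name, action, compensation in steps:
        try:
            action()
            completed.append((name, compensation))
        except Exception:
            # Undo everything that already succeeded, newest first.
            for _done_name, undo_action in reversed(completed):
                undo_action()
            return f"failed at {name}, compensated {len(completed)} step(s)"
    return "completed"
```

A real orchestrator would also persist this state, so a crashed process instance can resume or compensate after a restart.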
example: Orchestrating the Uber-style services (from the NGINX.COM web site)
Choreographing Microservices as Dancers
Instead of doing everything in a fully orchestrated way, we could design the system in a more asynchronous manner: a master service (not really a master orchestrator) would emit an event that starts the process, and all microservices subscribed to that event would execute and respond. Obviously, this approach is more aligned with the basic principles of microservices, since it is very decoupled. Systems like this are more flexible and adaptable, but you need to build some kind of monitoring system that can track what is happening in a specific process instance - not really orchestrating, but controlling and monitoring. In many ways, this is just like the good old Event-Driven Architectures, but with more control and monitoring, given the nature of microservices.
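The choreography described above can be sketched with an in-memory event broker: participants subscribe to events and emit the next event themselves, while a monitoring subscriber only records where each process instance is. The broker, the event names, and the services are illustrative assumptions, standing in for real messaging infrastructure.

```python
from collections import defaultdict

class Broker:
    """A toy publish/subscribe broker; a real system would use a
    message bus (and durable delivery) instead."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event, handler):
        self.subscribers[event].append(handler)

    def publish(self, event, payload):
        for handler in self.subscribers[event]:
            handler(payload)

broker = Broker()
status = {}  # monitoring view: process instance id -> last known step

# The monitor tracks progress but never drives the flow.
broker.subscribe("request.received", lambda p: status.update({p["id"]: "received"}))
broker.subscribe("documents.verified", lambda p: status.update({p["id"]: "verified"}))

# A participant microservice reacts to one event and emits the next -
# nobody tells it to; it decides on its own.
def document_service(payload):
    # ... verify the citizen's documents here ...
    broker.publish("documents.verified", payload)

broker.subscribe("request.received", document_service)

broker.publish("request.received", {"id": "case-42"})
```

Note that there is no central flow description: the "process" emerges from which services listen to which events, which is exactly why the separate monitoring view matters.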
From the Government perspective, this is, at least at first glance, not really a natural way to do things. In Government, processes are usually well orchestrated and conducted (I am not discussing the quality of the process itself), and stakeholders want to understand immediately where the process stands and what its status is. In practice, we don't see this often, given that we have so many manual or manual-copied-to-digital processes that don't really introduce any type of modern architecture.
example: Choreographing the Uber-style services (from the NGINX.COM web site)
Remoting Microservices: The API Way
Working with multiple microservices is not easy, not just from the complexity perspective but also from the performance perspective. Calling many services as part of a single task or orchestration can be complex - for example, one web page could initiate calls to more than 100 different microservices, and the performance of those calls depends on many factors (like bandwidth or latency), resulting in a poor end-customer experience. People expect that today everything works smoothly, without really caring about the cost or complexity you need to manage to provide the services they expect.
This problem is usually solved by creating some kind of "proxy" or "gateway" that sits between "the requestor" and the microservices and translates one simple request into multiple requests toward the microservices - basically organizing a large number of services (let's say 100) into smaller batches (let's say 5 at a time). Each batch basically depends on the availability or grouping of microservices close to the proxy - it makes sense to group services that are usually used together or that are geographically close. Then, the initial request only calls a limited set of services, and they "remote" the rest to others that will execute them as part of the "bigger" request.
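The fan-out-and-batch behavior of such a gateway can be sketched as follows. This is a deliberately simplified, sequential simulation: the batch size, the service list, and the response shape are assumptions, and a real gateway would dispatch each batch concurrently and route some batches to remote proxies.

```python
def gateway(request_id, services, batch_size=5):
    """Translate one incoming request into batched calls to many
    backend microservices, merging all responses into one answer.

    services: list of callables, each standing in for one microservice.
    """
    responses = {}
    for i in range(0, len(services), batch_size):
        batch = services[i:i + batch_size]
        # In a real deployment this batch would be dispatched in
        # parallel, possibly "remoted" to a proxy closer to the
        # services; here we just call them in turn.
        for call in batch:
            responses.update(call(request_id))
    return responses

# Twelve stand-in microservices, each returning one piece of the answer.
backends = [lambda rid, n=n: {f"svc{n}": f"ok:{rid}"} for n in range(12)]
result = gateway("case-1", backends)
```

With a batch size of 5, the twelve backends above are called in three groups, but the requestor still sees a single merged response.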
This is very interesting for Governments where we have multiple service (or shared service) providers. One of them can initiate multiple requests toward other providers, and each executes its part of the request, orchestrating multiple microservices to perform a specific job. For example, one of the providers can be the Ministry of Interior, which will perform a data request toward multiple databases (systems), collecting the data and forming the response. Thus, the main requestor can ask for services just by passing a specific identifier (for example, a licence plate or a personal ID) and receive a wealth of information presented in a uniform way.
Orchestrating is THE Future
Whatever we end up with as the final solution, we know that proper orchestration of microservices is the future of building services (here: Government services). First, we need to decouple the services and minimize their footprint, making them resilient, self-sustainable units of work. Then we need to think about how we group them back into meaningful processes, which sometimes span multiple agencies or entities. Finally, we need to think about how we are going to orchestrate them, creating an ecosystem of business processes, their mapping into execution flows, monitoring and control of those flows, and making sure that the flows are - flawless :).