Incremental implementation of Microservices
This article was written after reading an excellent one: Why Not Start Modular Then Go Micro by Richard Fisher. The referenced article recommends not implementing a full microservices infrastructure from the very beginning, but rather starting with services encapsulated as local components and then transitioning to a distributed architecture.
The proposal is reasonable and matches my own way of working, which keeps an agile approach in mind: we only build the infrastructure we need for each sprint, so deliverables are released in shorter development cycles.
Scalability and high performance are certainly not the best candidates for the first development stages. This does not contradict the fact that we should consider them from the initial design stages.
This article describes an incremental way to design and implement a microservices-based system and explains the direct and collateral benefits of this approach.
Starting with a minimal SOA architecture
Traditional client-server and Service-Oriented Architecture (SOA) systems will have, at minimum, a service consumer and a service provider, as depicted below. All the internal details of the service provider are intentionally hidden in the following diagram. The only explicit components are the REST API on the server and the HTTP interfaces on both sides, as they will be used throughout the article. However, the same analysis can be applied to other distributed implementations like SOAP, CORBA, or DCOM.
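This minimal arrangement can be sketched in a few lines of Python. The route `/api/status` and the response payload are illustrative assumptions, not part of the original article; the point is that the consumer only sees the URL, verb, and HTTP interface, never the provider's internals.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ProviderHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The REST API is the only explicit contract between both sides
        if self.path == "/api/status":
            payload = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(payload)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the example quiet
        pass

# Service provider: internal details stay hidden behind the HTTP interface
server = HTTPServer(("127.0.0.1", 0), ProviderHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Service consumer: knows only the URL and the verb
url = f"http://127.0.0.1:{server.server_port}/api/status"
with urllib.request.urlopen(url) as resp:
    result = json.load(resp)
server.shutdown()
print(result)  # {'status': 'ok'}
```

The same consumer code would work unchanged whether the provider is a monolith or, later, a set of microservices behind a gateway, which is what makes the incremental path possible.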
Transition to a microservices architecture
A microservices architecture emerges when the service provider implementation is divided into smaller, loosely coupled components, each exposing an API and deployed separately, among other characteristics.
As transforming a traditional monolithic architecture into a microservices architecture (MSA) may not be a trivial task, we can move progressively toward full compliance by dividing responsibilities into isolated components that look like microservices thanks to a RESTful API, as shown below:
This design already achieves the isolation of services and can arguably be called microservices, although they are not deployed separately but together on a single web server. However, we can find several benefits in it:
- Faster to deploy for certain pre-production purposes, like functional testing, where performance, scalability, and deployability are not a concern.
- Easier to debug locally, directly from the developer's IDE.
- Deployable to an on-premises server, for either standalone or hybrid usage.
- Wrappable as a desktop or mobile application, reusing server components in a monolithic fashion.
There is no limitation on adding more microservices to the single server for the purposes mentioned above, as in this expanded case:
Infrastructure growth
As more microservices are added, the infrastructure starts to look like a real microservices-based architecture. Even then, there may be benefits in keeping two or more microservices together while other services are promoted to fully isolated components, as presented in the following diagram:
In the example above, microservice "D" is replicated in two instances to show additional benefits of the MSA for supporting highly scalable scenarios.
For cloud systems, it is recommended to expose the whole infrastructure through a centralized gateway or load balancer, or a combination of both. Current examples are Azure Application Gateway and AWS Elastic Load Balancing.
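The balancing idea behind the replicated microservice "D" can be sketched as a simple round-robin dispatcher. The instance addresses are made up, and real managed balancers like the ones named above also factor in health checks, latency, and connection counts, but the rotation principle is the same.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Spreads requests across replicated instances of one microservice."""
    def __init__(self, instance_urls):
        self._instances = cycle(instance_urls)

    def pick(self):
        # Strict rotation: each call returns the next instance in the list
        return next(self._instances)

balancer = RoundRobinBalancer([
    "http://10.0.0.1:8080",  # microservice "D", instance 1 (made-up address)
    "http://10.0.0.2:8080",  # microservice "D", instance 2 (made-up address)
])
picks = [balancer.pick() for _ in range(4)]
print(picks)
```

Because the balancer owns the instance list, instances of "D" can be added or removed without any change to the client or to the other microservices.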
Traffic considerations
In the last diagram, microservices "A" and "B" are deployed into a single server instance. This scenario is beneficial for two services with high affinity and heavy message traffic, in addition to its economic and administrative benefits.
In a single-server implementation, the microservices talk to each other through the communication layer (labeled HTTP Server) using the REST APIs. This layer shall be able to handle inter-process calls containing the same information as an HTTP call:
- URL
- Verb
- Message headers
- Message body
For a multiple-instance infrastructure, the microservices communicate via standard HTTP messages containing the same information as above.
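A communication layer with that property can be sketched as a local message bus whose requests carry exactly the four elements listed above. The class names, route prefixes, and registration API here are illustrative assumptions; the point is that a co-located call and an HTTP call share the same shape, so promoting a service to its own instance only swaps the transport.

```python
import json

class Request:
    """Carries the same four elements as an HTTP call."""
    def __init__(self, url, verb, headers=None, body=None):
        self.url = url          # URL
        self.verb = verb        # Verb
        self.headers = headers or {}  # Message headers
        self.body = body        # Message body

class LocalBus:
    """Routes requests between co-located services, no network involved."""
    def __init__(self):
        self._routes = {}

    def register(self, prefix, handler):
        self._routes[prefix] = handler

    def send(self, request):
        for prefix, handler in self._routes.items():
            if request.url.startswith(prefix):
                return handler(request)
        return 404, None

# Microservice "B" registers its endpoint on the shared bus
bus = LocalBus()
bus.register("/b/", lambda req: (200, {"echo": json.loads(req.body)}))

# Microservice "A" calls "B" exactly as it would over HTTP
status, body = bus.send(
    Request("/b/items", "POST", {"Content-Type": "application/json"}, '{"id": 1}')
)
print(status, body)  # 200 {'echo': {'id': 1}}
```

In the multiple-instance case, `send` would serialize the same `Request` into a real HTTP message instead of an in-memory dispatch, leaving both services' code untouched.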
A gateway instance may be useful for isolating communication inside the cloud (inter-service communication) and giving it more privileges than client-server communication. It is not unusual to have some microservices intended only for protected "internal" use, and this requirement can be enforced by the gateway.
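That gateway rule can be expressed as a small access check. The `/internal/` prefix convention, the private network range, and the addresses are assumptions made up for illustration; the logic simply denies external callers on routes marked as internal.

```python
import ipaddress

# Assumed conventions: internal routes share a prefix, and cloud-internal
# traffic originates from a known private network
INTERNAL_PREFIXES = ("/internal/",)
CLOUD_NETWORK = ipaddress.ip_network("10.0.0.0/8")

def gateway_allows(path, caller_ip):
    """Allow external callers only on public routes."""
    if any(path.startswith(p) for p in INTERNAL_PREFIXES):
        return ipaddress.ip_address(caller_ip) in CLOUD_NETWORK
    return True

allowed_internal = gateway_allows("/internal/metrics", "10.0.1.5")    # service-to-service
denied_external = gateway_allows("/internal/metrics", "203.0.113.9")  # external client
allowed_public = gateway_allows("/orders", "203.0.113.9")             # public API
print(allowed_internal, denied_external, allowed_public)  # True False True
```

In production this check is normally configured declaratively in the gateway product rather than coded by hand, but the effect is the same: internal APIs are invisible to clients.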
The diagram below depicts the two service-to-service communication scenarios just described, where the client is not involved:
Implementation path
Based on the different alternatives described above, we can propose an implementation path starting from a monolithic server application and ending with a fully replicated and auto-balanced cloud farm, as depicted in the following chart:
Using this chart as a roadmap, it is important to move to the next step at the proper time: jumping too early creates financial stress, while jumping too late causes operational issues.
The development team and associated technical teams, like QA and Operations, need to navigate this roadmap in a synchronized manner, learning from field experience and gradually building up the skills to handle more complex scenarios.