Incremental implementation of Microservices

This article was written after reading an excellent one: Why Not Start Modular Then Go Micro by Richard Fisher. The referenced article recommends avoiding a full microservices infrastructure from the very beginning, starting instead with services encapsulated as local components and transitioning later to a distributed architecture.

The proposal is reasonable and matches my own agile way of working: we only build the infrastructure each sprint needs, so deliverables are released in shorter development cycles.

Scalability and high performance are certainly not the best candidates for the first development stages. This does not contradict the need to consider them from the initial design stages.

This article describes an incremental way to design and implement a microservices-based system and explains the direct and collateral benefits of this approach.

Starting with a minimal SOA architecture

Traditional client-server and Service-Oriented Architecture (SOA) systems have, at minimum, a service consumer and a service provider, as depicted below. All the internal detail of the service provider is intentionally hidden in the diagram. The only explicit components are the REST API at the server and the HTTP interfaces on both sides, as they will be used throughout the article. However, the same analysis applies to other distributed implementations like SOAP, CORBA, or DCOM.

[Figure: minimal architecture with a service consumer, a service provider, HTTP interfaces on both sides, and a REST API at the server]
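As a minimal sketch of this arrangement (the endpoint name and port are illustrative, not from the article), here is a service provider exposing a REST API over HTTP and a consumer that knows nothing but that interface, using only the Python standard library:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Service provider: internal detail stays hidden behind the REST API.
class ServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/status":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the example quiet
        pass

server = HTTPServer(("127.0.0.1", 8731), ServiceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Service consumer: all it sees is the HTTP interface and the REST contract.
with urlopen("http://127.0.0.1:8731/api/status") as resp:
    print(json.loads(resp.read()))  # {'status': 'ok'}

server.shutdown()
```

The same consumer code works unchanged whether the provider is a monolith or, later, a farm of microservices behind a gateway.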

Transition to a microservices architecture

A microservices architecture emerges when the service provider implementation is divided into smaller, loosely coupled components, each exposing its own API and deployed separately, among other characteristics.

As transforming a traditional monolithic architecture into a microservices architecture (MSA) may not be a trivial task, we can move progressively toward full compliance by dividing responsibilities into isolated components that look like microservices behind a RESTful API, as shown below:

[Figure: service provider divided into isolated components, each exposing a RESTful API, deployed together on a single web server]

This design already achieves the isolation of services and can arguably be called microservices, although the services are deployed together on a single web server rather than separately. Still, it offers several benefits:

  • Faster to deploy for certain pre-production purposes, such as functional testing, where performance, scalability, and deployability are not a concern.
  • Easier to debug locally, directly from the developer's IDE.
  • Deployable to an on-premises server, for either standalone or hybrid usage.
  • Reusable, in a monolithic fashion, inside a desktop or mobile application that wraps the server components.
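One way to sketch this single-server arrangement (the service names and routing table are illustrative, not from the article) is to register each isolated component under its own path prefix on one HTTP server, so promoting a service to its own instance later changes only the routing, not the service code:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Each "microservice" is an isolated component: its own handler,
# its own state, no direct references to the other services.
def orders_service(path):
    return {"service": "orders", "path": path}

def billing_service(path):
    return {"service": "billing", "path": path}

# Routing table: path prefix -> service. Deployed together on one web
# server, but each entry could later move to its own host unchanged.
ROUTES = {"/orders": orders_service, "/billing": billing_service}

class CompositeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        for prefix, service in ROUTES.items():
            if self.path.startswith(prefix):
                body = json.dumps(service(self.path)).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_response(404)
        self.end_headers()

    def log_message(self, *args):  # keep the example quiet
        pass
```

Running `HTTPServer(("", 8080), CompositeHandler).serve_forever()` serves both components from one process; splitting `billing_service` out later requires no change to the service function itself.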

Nothing prevents adding more microservices to the single server for the purposes mentioned above, as in this expanded case:

[Figure: expanded case with additional microservices deployed on the single server]

Infrastructure growth

As more microservices are added, the infrastructure starts looking like a real microservices-based architecture. Even then, there may be benefits in keeping two or more microservices together while other services are promoted to fully isolated components, as presented in the following diagram:

[Figure: microservices "A" and "B" deployed together on one server instance, other services fully isolated, and microservice "D" replicated in two instances]

In the example above, microservice "D" is replicated in two instances to show an additional benefit of the MSA: support for highly scalable scenarios.

For cloud systems, it is recommended to expose the whole infrastructure through a centralized gateway, a load balancer, or a combination of both. Current examples are Azure Application Gateway and AWS Elastic Load Balancing.

Traffic considerations

In the last diagram, microservices "A" and "B" are deployed into a single server instance. This scenario is beneficial when two services have high affinity and heavy message traffic between them, in addition to its economic and administrative advantages.

In a single-server implementation, the microservices talk to each other through the communication layer (labeled HTTP Server) using the REST APIs. This layer must be able to handle in-process calls containing the same information as an HTTP call:

  • URL
  • Verb
  • Message headers
  • Message body

For a multiple-instance infrastructure, the microservices communicate via standard HTTP messages containing the same information as above.
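This idea can be sketched with a single call shape carrying the four elements above, plus two interchangeable transports: an in-process one for co-located services and an HTTP one for separate instances (all class and field names here are hypothetical, not from the article):

```python
import json
from dataclasses import dataclass, field
from urllib.request import Request, urlopen

@dataclass
class ServiceCall:
    # The same four elements an HTTP call carries.
    url: str
    verb: str = "GET"
    headers: dict = field(default_factory=dict)
    body: bytes = b""

class InProcessTransport:
    """Dispatches calls locally when both services share one server."""
    def __init__(self, handlers):
        self.handlers = handlers  # path -> callable(ServiceCall) -> dict

    def send(self, call: ServiceCall) -> dict:
        return self.handlers[call.url](call)

class HttpTransport:
    """Sends the same information as a standard HTTP message."""
    def __init__(self, base_url):
        self.base_url = base_url

    def send(self, call: ServiceCall) -> dict:
        req = Request(self.base_url + call.url, data=call.body or None,
                      headers=call.headers, method=call.verb)
        with urlopen(req) as resp:
            return json.loads(resp.read())

# A service picks its transport by deployment, not by code:
local = InProcessTransport({"/inventory": lambda call: {"stock": 12}})
print(local.send(ServiceCall("/inventory")))  # {'stock': 12}
```

Because both transports accept the same `ServiceCall`, moving two co-located services onto separate instances means swapping the transport, not rewriting the calls.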

A gateway instance may be useful for isolating communication inside the cloud (inter-service communication) and giving it more privileges than client-server communication. It is not unusual to have microservices intended only for protected "internal" use, and this requirement can be enforced by the gateway.
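The internal-use restriction can be sketched as a simple gateway rule (the route names and function below are illustrative assumptions): requests arriving from outside the cloud are rejected for internal-only routes, while inter-service traffic inside the cloud passes through:

```python
# Hypothetical policy: which route prefixes are for internal use only.
INTERNAL_ONLY = {"/audit", "/user-sync"}

def gateway_allows(path: str, from_inside_cloud: bool) -> bool:
    """Public routes are open to clients; internal routes require an
    inter-service origin inside the cloud."""
    prefix = "/" + path.lstrip("/").split("/", 1)[0]
    if prefix in INTERNAL_ONLY:
        return from_inside_cloud
    return True

print(gateway_allows("/audit/logs", from_inside_cloud=False))  # False
print(gateway_allows("/audit/logs", from_inside_cloud=True))   # True
print(gateway_allows("/orders/42", from_inside_cloud=False))   # True
```

Real gateways express this same policy declaratively (routing rules, private backends) rather than in code, but the decision they make per request is the one shown here.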

The following diagram depicts the two service-to-service communication scenarios just described, where the client is not involved:

[Figure: the two service-to-service communication scenarios, in-process and over HTTP, with no client involvement]

Implementation path

Based on the alternatives described above, we can propose an implementation path that starts from a monolithic server application and ends with a fully replicated, auto-balanced cloud farm, as depicted in the following chart:

[Figure: implementation path from a monolithic server application to a fully replicated, auto-balanced cloud farm]

Using this chart as a roadmap, it is important to jump to the next step at the proper time. If too early, it will create financial stress. If too late, it will cause operational issues.

The development team and other associated technical teams like QA and Operations need to navigate this roadmap in a synchronized manner, learning from field experience and self-educating gradually on how to handle more complex scenarios.
