Micro-services architecture
Each chef is in charge of a single task.

What are micro-services?

According to Wikipedia,

Micro-services is a software development technique that structures an application as a collection of loosely coupled services. The benefit of decomposing an application into smaller services is that it improves modularity. This makes the application easier to understand, develop and test, and makes it more resilient to architecture erosion.
It parallelizes development by enabling small autonomous teams to develop, deploy and scale their respective services independently.

So a micro-service is a small, fine-grained RESTful web service that adopts the single responsibility principle. This architecture is based on modular programming, which makes it an alternative to monolithic architecture.

How?

A monolithic application puts all the business logic in a single module, which forces team members to stick to one technology; moreover, the whole application must be re-deployed whenever one of its parts is modified.

Micro-services offer an alternative: the application is divided into separate modules, each of which can use a different technology, and only the module containing the modified part needs to be re-deployed, while the other APIs remain available and responsive.

Micro-services architecture:

The architecture consists of a set of REST APIs, each associated with its own database and independent of the others.

How does communication between them happen?

To enable interaction between micro-services, or between them and a client, it is essential to have a reference that holds the addresses of the running micro-services.

This way, whenever a request comes in, the reference is checked to find the appropriate instance to which the request should be routed.

Hard-coding all the addresses isn't efficient. Hence the discovery service, which takes charge of this task.

Discovery service:

The discovery service is built around a service registry: a database holding information about the available instances, e.g. their connection status and their IP addresses.

How are these data stored?

When a micro-service is bootstrapped, its address and metadata are automatically registered with the service registry via one of two patterns: the self-registration pattern or the third-party registration pattern.

In the self-registration pattern, the client registers itself and then periodically sends a heartbeat to inform the service registry that it is still up and running. Deregistration happens either explicitly, by sending a request for that purpose, or automatically when the timeout expires without a heartbeat being received.
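The self-registration flow above can be sketched with a minimal in-memory registry; the class and method names here are illustrative, not a real discovery library's API:

```python
import time

class ServiceRegistry:
    """Minimal in-memory service registry with heartbeat-based expiry."""

    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self.instances = {}  # service name -> (address, last heartbeat time)

    def register(self, name, address):
        # Self-registration: the instance announces itself on bootstrap.
        self.instances[name] = (address, time.monotonic())

    def heartbeat(self, name):
        # Refresh the "still alive" timestamp for an instance.
        address, _ = self.instances[name]
        self.instances[name] = (address, time.monotonic())

    def deregister(self, name):
        # Explicit deregistration on shutdown.
        self.instances.pop(name, None)

    def available(self):
        """Evict instances whose heartbeat timed out; return the live ones."""
        now = time.monotonic()
        expired = [n for n, (_, t) in self.instances.items()
                   if now - t > self.timeout]
        for n in expired:
            del self.instances[n]
        return {n: addr for n, (addr, _) in self.instances.items()}
```

A real registry (e.g. Netflix Eureka) adds replication, leases and richer metadata, but the register / heartbeat / expire cycle is the same.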

In the third-party registration pattern, the micro-services themselves stick to the single responsibility principle. This pattern introduces a third-party server called the registrar, which is responsible for registering instances, deregistering them and performing health checks throughout their lifetime. A health check verifies the instance's status, e.g. its connection to infrastructure services.
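A registrar can be sketched as follows; the health check is abstracted as a callable, and all names are hypothetical, chosen only to mirror the pattern described above:

```python
class Registrar:
    """Third-party registration sketch: the registrar, not the services,
    registers instances and removes the ones that fail health checks."""

    def __init__(self, registry, health_check):
        self.registry = registry          # service name -> address
        self.health_check = health_check  # callable(address) -> bool

    def track(self, name, address):
        # The registrar registers the instance on the service's behalf.
        self.registry[name] = address

    def run_health_checks(self):
        """Deregister every instance whose health check fails."""
        for name, address in list(self.registry.items()):
            if not self.health_check(address):
                del self.registry[name]
```

In practice `run_health_checks` would run on a timer and the check would be an HTTP call to each instance's health endpoint.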

So once registration is done via one of these patterns, the registry becomes the reference checked whenever an incoming request needs to be routed.

But who performs the check?

This actually leads us to another element of the micro-services architecture: the API Gateway.

API Gateway:

Let's start first by defining it:

An API gateway is an API management tool that sits between a client and a collection of backend services.
An API gateway acts as a reverse proxy to accept all application programming interface (API) calls, aggregate the various services required to fulfill them, and return the appropriate result.

source.

A reverse proxy is a type of proxy server.

A proxy server is a machine that acts as an intermediary between a client and a backend server. It forwards incoming requests to the appropriate address and relays the results back. It is useful because it provides additional functionality, such as security.

A reverse proxy, in turn, is a proxy server that forwards clients' requests to a backend application composed of multiple servers. It is sometimes described as the network's traffic cop.

According to this definition, the API Gateway is the gateway through which all incoming HTTP requests reach the backend services. In other words, it sits on top of the architecture as the entry point of the micro-services system.

The API Gateway manages incoming requests either by routing them to the appropriate backend service or by fanning them out to multiple services and aggregating the results.
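The fan-out-and-aggregate behaviour can be sketched like this; the route table and the callables standing in for backend service calls are illustrative, not a real gateway framework's API:

```python
def gateway_handle(request, routes):
    """Fan a request out to every service registered for its path
    and aggregate the partial results into one response.

    `routes` maps a request path to a list of callables, each
    standing in for a call to one backend micro-service.
    """
    services = routes[request["path"]]
    # One response assembled from every backend's partial result.
    return {svc.__name__: svc(request) for svc in services}
```

A real gateway would issue these backend calls concurrently over HTTP; here plain function calls keep the aggregation logic visible.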

It also provides protocol translation. Since each REST API may use its own communication protocol, which may not be web-friendly, the API Gateway takes charge of communicating with each service via its specific protocol, while interacting with the web client via the client's web protocol.

Moreover, the API Gateway plays a major role as the entry point to backend services because it can provide clients with custom APIs tailored to their needs.

Besides, it has other responsibilities such as authentication, monitoring, load balancing, etc.

How does it work?

It uses one of two patterns: the server-side discovery pattern or the client-side discovery pattern.

In the server-side discovery pattern, the API Gateway takes charge of reaching the micro-service instances via an integrated load-balancing algorithm: it queries the service registry and chooses the instance to which it routes the client's request.

In the client-side discovery pattern, it is the client itself that queries the service registry and then applies a load-balancing algorithm to choose the appropriate instance able to process its request; the gateway simply forwards the request on.
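Client-side discovery can be sketched as a client that looks instances up in the registry and picks one itself, here with simple round-robin; the class is illustrative, not a real library such as Netflix Ribbon:

```python
class DiscoveryClient:
    """Client-side discovery sketch: the client queries the registry
    and load-balances across the returned instances itself."""

    def __init__(self, registry):
        self.registry = registry  # service name -> list of addresses
        self.counters = {}        # per-service round-robin position

    def choose(self, service):
        # "Query the registry" is a dict lookup in this sketch.
        instances = self.registry[service]
        i = self.counters.get(service, 0)
        self.counters[service] = i + 1
        return instances[i % len(instances)]
```

The design choice is where the selection logic lives: here it is inside the client, whereas in server-side discovery the same round-robin would sit behind the gateway.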

What is a load balancer?

Load balancer:

A load balancer can be a physical device or a virtualized instance; it sits between the front-end and the back-end applications.

It acts as a reverse proxy; its role is to evenly distribute incoming network traffic across multiple servers. In other words, it brings optimal and efficient utilization of resources by reducing the workload on individual servers.

Furthermore, it improves availability and scalability: it detects server crashes and forwards incoming requests to the servers that are still up, and when a new server is added it detects it automatically and starts routing new requests to it.
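Both behaviours can be sketched in a few lines; this is a toy round-robin balancer under the assumption that some external health check marks servers as down, not a production design:

```python
class LoadBalancer:
    """Round-robin load balancer sketch: skips servers marked as down
    and picks up newly added servers automatically."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.down = set()
        self._next = 0  # round-robin cursor

    def add_server(self, server):
        # Newly added servers join the rotation immediately.
        self.servers.append(server)

    def mark_down(self, server):
        # In practice a health check would call this on failure.
        self.down.add(server)

    def route(self, request):
        """Return (server, request) for the next healthy server."""
        for _ in range(len(self.servers)):
            server = self.servers[self._next % len(self.servers)]
            self._next += 1
            if server not in self.down:
                return server, request
        raise RuntimeError("no healthy servers")
```

Real load balancers offer more strategies (least connections, weighted, IP hash), but the skip-the-dead, include-the-new rotation is the core idea described above.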

When to use this architecture:

Micro-services is a relatively new architecture (the term was coined around 2011), and it emerged to keep up with the evolution of software applications: they are becoming increasingly complex and difficult to manage as users' needs evolve and users become more and more dependent on these digital solutions.

Netflix was among the first companies to transform their application's infrastructure from a monolith to micro-services. They started this transformation in 2009, announced their success in adopting the new architecture in 2011, and open-sourced their infrastructure services as Netflix OSS.

Since then, the IT industry has focused more and more on micro-services, as they represent an efficient solution that essentially guarantees application scalability, delivery speed and rapid testing.

However, this architecture is not a good fit for every company; it depends on the nature of their applications and on their clients' needs.

In fact, micro-services are a good fit when:

  • The application needs to be broken into independent API services, where each one may be implemented in a different language and use a different storage type.
  • The application is continuously growing. Over time it becomes complex, and its code becomes highly coupled and intertwined. Breaking it into small, independent pieces of code facilitates future changes, making them easy to test and quick to deliver.
