A Guide to Microservices using .Net 5
Microservices are a great architecture for modern applications that need to be scalable and resilient to failures in order to support millions of transactions.
To understand why the microservice architecture can be better than a monolithic one, we first need to understand the drawbacks of monolithic applications.
The drawbacks of monolithic applications
Monolithic applications tend to grow over time into complex applications that are harder to maintain, debug, and fix. Implementing new features becomes difficult and time-consuming.
They also make continuous deployment time-consuming because the whole application needs to be redeployed even when small changes are made.
Another problem with monolithic applications is that the entire application runs in the same process, so one bug in one of the modules may crash the whole application.
Microservices to the rescue
To understand how microservices overcome the monolithic drawbacks, we first need to understand their advantages.
Although microservices have many advantages over monolithic applications, they also bring a lot of new challenges that need to be addressed in order to make them work properly.
This guide shows what those challenges are and how they can be addressed using open-source tools.
The Microservice
Microservices are small units specialized in completing one process. Each microservice is connected to its own physical or logical database, and no microservice should have access to another microservice's database.
This ensures that the microservices are loosely coupled.
Modifying the database of one microservice does not affect other microservices, which lets developers work independently from other teams.
In cases where, for some reason, a service needs a special type of database, it can use the type of database that best suits its needs.
Microservice communication
Microservices can communicate over different protocols. The most common is HTTP because of its synchronous nature: the client asks for information and the server responds with the information asked for.
Communication over HTTP
HTTP communication is convenient because of its synchronous nature, but it can run into problems when calling other services for several reasons, such as a slow network connection or the service being called being temporarily unavailable.
For this reason, the system should implement some kind of retry before assuming that the service is unavailable; it is better to try a couple of times before having to roll back all the transactions.
A completely unavailable service causes latency for clients. For a better user experience, it is better to fail fast and show the client an error than to let the client time out.
Circuit Breaker
The circuit breaker pattern is used in microservices to handle calls from one service to another under the assumption that a failure may be caused by a slow network connection, a timeout, or temporary unavailability, so retrying the call may solve the issue.
If the service being called is down for a longer period of time, repeated calls result in low performance and a bad user experience and, depending on the scenario, may lead to cascading failures.
To solve this issue, the Circuit Breaker pattern can be used, by defining a retry policy and what happens after the policy is exhausted. The circuit breaker manages 3 states (Open, Closed, and Half-Open).
In the Open state, requests fail immediately because the policy has already failed.
In the Closed state, requests are sent as normal through the policy.
In the Half-Open state, some requests are allowed to pass, and depending on the response the breaker moves to the Open or Closed state.
Case example scenario:
Step 1: In this scenario, the user makes a Deposit. The Deposit Service then sends an event to the event bus indicating that a deposit has been made, which is consumed by the History Service.
Step 2: After sending the event, it calls the Account Service to add the money to the account, but the Account Service is unreachable. The Circuit Breaker pattern first tries to communicate with the service several times in case the Account Service comes back up.
If the Account Service can't be reached after the retry policy is exhausted, the Deposit Service has to roll back the changes. In our scenario, it has to send a new event to subtract the deposited money in the History Service.
In .Net 5 there is a library called Polly that lets us implement the Circuit Breaker pattern.
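As a minimal sketch, Polly can wrap a retry policy around a circuit breaker. The service URL, retry count, and timings below are illustrative assumptions, not values from this guide:

```csharp
// Sketch: combining a retry policy with a circuit breaker using Polly.
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;

public static class AccountClient
{
    // Retry up to 3 times, waiting 2 seconds between attempts.
    private static readonly IAsyncPolicy<HttpResponseMessage> Retry =
        Policy<HttpResponseMessage>
            .Handle<HttpRequestException>()
            .WaitAndRetryAsync(3, _ => TimeSpan.FromSeconds(2));

    // After 5 consecutive failures, open the circuit for 30 seconds:
    // calls fail immediately (Open), then one test call is let through (Half-Open).
    private static readonly IAsyncPolicy<HttpResponseMessage> Breaker =
        Policy<HttpResponseMessage>
            .Handle<HttpRequestException>()
            .CircuitBreakerAsync(5, TimeSpan.FromSeconds(30));

    public static Task<HttpResponseMessage> DepositAsync(HttpClient http) =>
        Retry.WrapAsync(Breaker)
             .ExecuteAsync(() => http.PostAsync("http://app-account/api/deposit", null));
}
```

Wrapping the breaker inside the retry means each retry attempt goes through the breaker, so once the circuit opens, the remaining retries fail immediately instead of hammering a dead service.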
Microservices complexity
As with any architecture, microservices have their own complexity.
In microservices, the major drawback is the complexity of having a distributed system.
In distributed systems like microservices, where each service has its own database, it is not possible to join tables in a conventional SQL query. Instead, the system needs to address this problem by creating individual services that query the different services and then join the information.
Another problem is testing. Testing microservices is much more complex because it involves the communication between multiple parts of the system.
But the major problem of distributed systems such as microservices is keeping data consistent between the different services. To address this problem, the SAGA pattern was introduced.
SAGA pattern
The SAGA pattern is a solution for distributed transactions that maintains data consistency.
Using the SAGA pattern with REST communication, which is synchronous, is challenging because of the temporal coupling between services, which may cause data inconsistency.
Imagine a scenario where service A calls service B and then service A crashes. Service B will complete the transaction but service A will not, causing data inconsistency between the services.
For that reason, it is better to use the SAGA pattern using asynchronous broker-based messaging.
Communication over Events (Event-Driven Architecture)
In an event-driven architecture, the system adds a new component called a Message Broker, also called an Event Bus.
In this architecture, the communication between the microservices is done by events using the Message Broker.
For example, when a microservice updates its own business entity it may send an event to the Message Broker, and the Message Broker will then deliver the event to other microservices.
The event-driven architecture enables the implementation of transactions that span multiple services and provide eventual consistency.
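As a sketch, publishing a "deposit made" event to RabbitMQ (the broker run later in this guide) could look like the following. The exchange name, host name, and event shape are assumptions for illustration:

```csharp
// Sketch: publishing a deposit event to RabbitMQ using the RabbitMQ.Client package.
using System.Text.Json;
using RabbitMQ.Client;

public static class DepositEventPublisher
{
    public static void Publish(string accountId, decimal amount)
    {
        // "service-events" matches the RabbitMQ container name used later in this guide.
        var factory = new ConnectionFactory { HostName = "service-events" };
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        // Fanout exchange: every bound consumer (e.g. the History Service) gets a copy.
        channel.ExchangeDeclare("deposits", ExchangeType.Fanout, durable: true);

        var body = JsonSerializer.SerializeToUtf8Bytes(new { accountId, amount });
        channel.BasicPublish(exchange: "deposits", routingKey: "",
                             basicProperties: null, body: body);
    }
}
```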
To know more about how to implement an Event-Driven architecture go to my other post.
SAGA using the message broker
Using a message broker ensures that a SAGA completes even when its participants are temporarily unavailable.
The broker keeps delivering the message to the service until it gets processed, which ensures data consistency.
SAGA Choreography
In choreography, each service is responsible for managing its event communication directly with the Message Broker. In this scenario, the saga logic is spread across all the services involved in the transaction: each service needs to know and implement how to handle the different scenarios of the saga.
This works well for small scenarios with fewer than about 10 services; beyond that, it becomes confusing and hard to manage transactions between the services.
SAGA Orchestration
In orchestration, a centralized part of the system coordinates the distribution of the events. The saga transaction logic lives in a single part of the system, which is good for maintainability but bad because it introduces a single point of failure.
Client APP
In our scenario so far, the client app communicates directly with each of the microservices. This has several drawbacks.
Clients need to know where each microservice is, and the horizontal scaling that each microservice offers becomes complicated to exploit because it is difficult for the client to discover new service instances.
Authentication and authorization are complicated to maintain because each microservice needs to perform the correct validations on its own.
Exposing all services to the internet means more services can be compromised by attackers.
API Gateway
The API Gateway is the entrance to the microservices.
Using an API Gateway, the system decouples the clients from the microservices. The Gateway centralizes cross-cutting responsibilities, such as routing, authentication, and authorization.
Ocelot API Gateway
In .Net 5 there is an open-source API Gateway called Ocelot.
The way Ocelot works is based on a configuration file that maps an external route to an internal route. The external route is exposed to the clients, and the internal route is the container address where Ocelot redirects the request.
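As a sketch, an ocelot.json route mapping an external route to an internal container address might look like this (the service name, port, and paths are illustrative assumptions):

```json
{
  "Routes": [
    {
      "UpstreamPathTemplate": "/api/account/{everything}",
      "UpstreamHttpMethod": [ "GET", "POST" ],
      "DownstreamPathTemplate": "/api/account/{everything}",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        { "Host": "app-account", "Port": 5002 }
      ]
    }
  ]
}
```

The upstream section is what the client sees; the downstream section is the container that Ocelot forwards the request to.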
The next problem the system faces is that Docker containers are constantly destroyed and recreated, and since they are stateless, each new container gets a new IP address. Another problem is that microservices scale horizontally: if for some reason one microservice needs to scale out, another container with the same microservice gets created.
The challenge is that the system needs a way to inform the Gateway when new containers are created.
This is when the service discovery pattern comes to the rescue.
Service Discovery
The service discovery pattern allows the discovery of new instances of our microservices. There are 2 types: client-side service discovery and server-side service discovery.
In client-side discovery, the client is in charge of querying the Service Registry for the routes and of load balancing.
In server-side discovery, the back end is in charge of looking up the routes in the Service Registry and of load balancing.
The Service Registry
The service registry is an application that allows new microservice instances to register themselves. The Service Registry saves the information of each new microservice instance.
It is also in charge of checking service health and removing instances that are no longer accessible.
Consul as Service Discovery
Consul is an open-source application with several functionalities. One of those functionalities is Service Discovery.
How it works
Once the microservice instances are created, they are in charge of calling the Consul server to register themselves.
Note: The SQL shown is just a logical representation of Consul storing the instances in some internal database; it doesn't mean that the user needs to prepare any database instance.
After that, Consul is in charge of calling a Health API to make sure the microservice instances are alive. If one instance doesn't respond, Consul deletes the instance from its database.
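The registration step can be sketched with the open-source Consul NuGet package. The service name, address, port, and health endpoint below are illustrative assumptions:

```csharp
// Sketch: a microservice registering itself with Consul on startup.
using System;
using System.Threading.Tasks;
using Consul;

public static class ConsulRegistration
{
    public static async Task RegisterAsync()
    {
        // "service-discovery:8500" matches the Consul container defined later in this guide.
        using var client = new ConsulClient(c => c.Address = new Uri("http://service-discovery:8500"));

        var registration = new AgentServiceRegistration
        {
            ID = $"app-account-{Guid.NewGuid()}",  // unique per container instance
            Name = "app-account",
            Address = "app-account",
            Port = 5002,
            Check = new AgentServiceCheck          // Consul polls this to verify the instance is alive
            {
                HTTP = "http://app-account:5002/health",
                Interval = TimeSpan.FromSeconds(10),
                DeregisterCriticalServiceAfter = TimeSpan.FromMinutes(1)
            }
        };

        await client.Agent.ServiceRegister(registration);
    }
}
```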
Problem using only Registry Discovery
Imagine a situation where you have multiple instances of the same microservice because of a huge amount of traffic.
The problem is that even if the Security Microservice is registered multiple times, the Gateway will always call the same instance and never use the others. To solve this issue, a Load Balancer is introduced into the system.
Load Balancer
A load balancer is a device that acts as a reverse proxy and distributes network or application traffic across a number of servers. Load balancers are used to increase capacity (concurrent users) and reliability of applications.
Fabio Load Balancer
Fabio is an Open Source Load Balancer that uses several algorithms to balance the traffic over the different microservice instances.
Fabio is built to work with Consul Service Discovery, which makes the integration easier.
Introducing the Fabio Load Balancer to the architecture, the system is now able to balance the traffic across all instances based on balancing algorithms.
1.- The first step is having Consul (the registry discovery) work with the microservices to track instance availability.
2.- Then, when the client application makes a request, it goes to the Gateway. (The Gateway also checks authorization and validates the route.)
3.- Then the Gateway will redirect the request to the Load Balancer.
4.- The Load Balancer will query the Registry Discovery for the routes available.
5.- Then the Load Balancer decides which service to call based on the instances registered in Consul and on the configured balancing algorithm.
External Configuration
In microservices, having a centralized configuration server is helpful because it lets administrators manage all configurations in one place instead of having to change the configuration file in each service.
One of the tools available for that is NACOS. NACOS is more than a configuration server, but in this scenario it will be used only as a configuration server.
Once the NACOS server is initialized, add the following configuration:
Then in order to connect the application with the NACOS server and use the centralized configuration follow the next steps:
1.- Add the nacos-sdk-csharp.Extensions.Configuration (1.1.0) NuGet package.
2.- Add the configuration to appsettings that will be used to connect the .NET application with the NACOS server.
3.- Load the NACOS configuration in the Startup.cs
4.- Now it is possible to use the configuration from the NACOS server as key-value pairs.
For example, sql:cn is the reference to the connection string that exists on the NACOS server.
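Once the NACOS configuration source is loaded in Startup.cs, the centralized values are read like any other configuration key. A sketch, assuming the sql:cn key from the example above (the class name is illustrative):

```csharp
// Sketch: reading a value stored centrally in NACOS through the standard
// IConfiguration abstraction.
using Microsoft.Extensions.Configuration;

public class AccountRepository
{
    private readonly string _connectionString;

    public AccountRepository(IConfiguration configuration)
    {
        // "sql:cn" resolves to the connection string kept on the NACOS server.
        _connectionString = configuration["sql:cn"];
    }
}
```

Because the value goes through IConfiguration, the services do not need NACOS-specific code outside of the startup wiring.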
Distributed Tracing
Distributed tracing is a method to track the requests as they propagate through distributed systems as microservices.
It is helpful to monitor and analyze distributed transactions.
Jaeger
Jaeger is an open-source, end-to-end distributed tracing monitor.
To start the Jaeger server run the next command
docker run -d --name service-tracer -e COLLECTOR_ZIPKIN_HTTP_PORT=9411 -p 55775:5775/udp -p 6831:6831/udp -p 6832:6832/udp -p 5778:5778 -p 16686:16686 -p 14268:14268 -p 9411:9411 --network micro jaegertracing/all-in-one:latest
In the microservices, it is necessary to add the Jaeger services and apply the configuration:
"jaeger": {
"enabled": true,
"serviceName": "app-account",
"udpHost": "localhost",
"udpPort": 6831,
"maxPacketSize": 0,
"sampler": "const"
},
After wiring up the services, Jaeger is able to trace the requests and give you metrics.
Jaeger also automatically generates the topology of your services.
Log Aggregation
Application logs are the most useful data available for detecting and solving a wide range of production issues and outages.
In distributed systems like microservices, it is very helpful to have a centralized application to handle the logs, have an aggregation service for unified log storage, and provide the ability to analyze the logs using a query language.
Seq as the log aggregation service
Seq is built for modern structured logging with message templates. Rather than waste time and effort trying to extract data from plain-text logs with fragile log parsing, the properties associated with each log event are captured and sent to Seq in a clean JSON format. Message templates are supported natively by ASP.NET Core, Serilog, NLog.
To install Seq using docker run the next command
docker run -d -e ACCEPT_EULA=Y --name service-log -p 5371:80 datalust/seq:2021.3
Create an API key
Configuring Seq
To use Seq inside our microservices, install the Seq extension and add it to our pipeline in the Startup class.
Add the URL of the Seq service and the API key.
Install the package Seq.Extensions.Logging.
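A sketch of the wiring with Seq.Extensions.Logging. The URL matches the Docker command above (port 5371); the API key is a placeholder you create in the Seq UI:

```csharp
// Sketch: adding Seq to the logging pipeline so every ILogger writes to Seq.
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;

public static class SeqLoggingExtensions
{
    public static IServiceCollection AddSeqLogging(this IServiceCollection services) =>
        services.AddLogging(builder =>
            builder.AddSeq(serverUrl: "http://localhost:5371",
                           apiKey: "<your-api-key>"));
}
```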
Once it is configured, you can inject the ILogger interface from Microsoft.Extensions.Logging and it will automatically send the logs to the Seq service, where you can perform simple and complex searches as shown in the image below.
Monitor and Metrics
In any system it is important to monitor and collect metrics; in the case of microservices it is even more important because metrics help us understand why the application behaves in a certain way.
For example, if one service fails it can create a cascade of failures, making other services start failing too. This ends up in a logging mess where a bunch of services crash at once and finding the root cause is difficult.
Monitoring the applications helps us understand their overall health, which makes it possible to anticipate microservice failures before they happen.
Monitoring and metrics can provide an early warning system for application deterioration or failure.
It also helps to isolate problems in the system; by detecting bottlenecks it is possible to analyze and optimize those issues and maximize the user experience.
Prometheus
Prometheus is an open-source system monitoring application.
Prometheus can collect and store thousands of types of metrics as time series data.
In order to have good monitoring, it is important to analyze and identify the metrics the system needs to collect. The system admin needs to identify the metrics relevant to the kind of system they are working on.
A common Prometheus architecture is the following:
Here it is possible to observe that Prometheus is a server that pulls metrics using jobs/exporters. It stores the information in local storage, and data can be visualized and exported through the built-in Prometheus web UI or using Grafana.
Prometheus uses a local configuration file (prometheus.yml).
The scrape_configs section indicates where Prometheus has to go to collect the information. Inside it there are job_name entries where we indicate the target endpoints of our microservices.
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'service-metrics'
    static_configs:
      - targets: ['service-metrics:9090']

  - job_name: 'app-gateway'
    scrape_interval: 5s
    static_configs:
      - targets: ['host.docker.internal:5000']

  - job_name: 'app-security'
    scrape_interval: 5s
    static_configs:
      - targets: ['host.docker.internal:5001']

  - job_name: 'app-account'
    scrape_interval: 5s
    static_configs:
      - targets: ['host.docker.internal:5002']

  - job_name: 'app-deposit'
    scrape_interval: 5s
    static_configs:
      - targets: ['host.docker.internal:5003']

  - job_name: 'app-withdrawal'
    scrape_interval: 5s
    static_configs:
      - targets: ['host.docker.internal:5004']

  - job_name: 'app-history'
    scrape_interval: 5s
    static_configs:
      - targets: ['host.docker.internal:5005']

  - job_name: 'app-notification'
    scrape_interval: 5s
    static_configs:
      - targets: ['host.docker.internal:5006']
To run Prometheus using Docker, run the next command, indicating the route of the configuration file:
docker run -d -p 9090:9090 --name service-metrics --network micro -v C:/docker/net5/prometeus/local/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus:v2.31.1
Inside the microservices you will need to add some dependencies that expose metrics for Prometheus to scrape, and add the configuration in the IWebHostBuilder.
<PackageReference Include="App.Metrics.AspNetCore" Version="4.1.0" />
<PackageReference Include="App.Metrics.AspNetCore.Health" Version="3.2.0" />
<PackageReference Include="App.Metrics.Formatters.Prometheus" Version="4.1.0" />
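A sketch of the host wiring with the App.Metrics packages referenced above. Details may vary by package version, and Startup is assumed to be the application's usual startup class:

```csharp
// Sketch: exposing metrics in Prometheus text format with App.Metrics,
// so the Prometheus server can scrape this service.
using App.Metrics;
using App.Metrics.Formatters.Prometheus;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

public static class Program
{
    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(web => web.UseStartup<Startup>())
            // Format collected metrics as Prometheus plain text.
            .ConfigureMetricsWithDefaults(builder =>
                builder.OutputMetrics.AsPrometheusPlainText())
            // Serve them on the default /metrics endpoint.
            .UseMetrics(options =>
                options.EndpointOptions = endpoints =>
                    endpoints.MetricsEndpointOutputFormatter =
                        new MetricsPrometheusTextOutputFormatter());
}
```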
After that, Prometheus will connect to the microservices and start scraping metrics.
Analytics with Grafana
Grafana is an open-source platform for monitoring.
Grafana allows you to query, visualize, alert on and understand your metrics no matter where they are stored.
Grafana allows you to create, explore, and share dashboards.
Grafana works with different data sources like Jaeger and Prometheus. It is very simple to configure the connection with the Prometheus server by adding a new data source and choosing the Prometheus plug-in.
Once you configure the connection to Prometheus, Grafana allows the creation of graphics using the metrics collected by Prometheus. Example of graphics using Grafana:
Running all the services needed
The next list includes the docker commands to run the extra services needed in a microservices architecture described above.
URLs
Consul: http://localhost:8500/ui/dc1/services
Fabio: http://localhost:9998/routes?filter=
Nacos: http://localhost:8848/nacos/#/configeditor?serverId=center&dataId=LOCAL&group=DEFAULT_GROUP&namespace=Aforo255&edasAppName=&edasAppId=&searchDataId=&searchGroup=&pageSize=10&pageNo=1
Jaeger: http://localhost:16686/dependencies
Grafana:http://localhost:3000/login (admin:admin1)
Prometheus: http://localhost:9090/targets (service-metrics) http://service-metrics:9090
LogsDistributed: http://localhost:5341/
Docker-compose-databases.yml
The next file contains the databases used in the microservice architecture, placed in a docker-compose file.
To run the databases, run the next command:
docker-compose -f docker-compose-databases.yml up
file:
version: "3.5"
services:
  mysql:
    image: mysql:8.0.26
    container_name: mysql-database
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=Aforo255#2019
    ports:
      - 3307:3306
      - 33061:33060
    networks:
      - myNetwork
  sql:
    image: mcr.microsoft.com/mssql/server:2019-GA-ubuntu-16.04
    container_name: sql-database
    restart: always
    ports:
      - 1434:1433
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Aforo255#2019
    networks:
      - myNetwork
  postgres:
    image: postgres:alpine3.14
    container_name: postgres-database
    environment:
      - POSTGRES_PASSWORD=davidblog#2019
    ports:
      - 5434:5432
    networks:
      - myNetwork
  mongo:
    image: mongo:5.0.2
    container_name: mongo-database
    environment:
      - MONGO_INITDB_ROOT_USERNAME=davidblog
      - MONGO_INITDB_ROOT_PASSWORD=davidblog#2019
    ports:
      - 27018:27017
    networks:
      - myNetwork
  mariadb:
    image: mariadb:10.2.36
    container_name: maria-database
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=davidblog#2019
    ports:
      - 3310:3306
      - 33070:33060
    networks:
      - myNetwork
networks:
  myNetwork:
    name: micro
Docker-compose-services.yml
The next file is used to run RabbitMq, Consul, Fabio, and Nacos services.
To run the services, run the next command:
docker-compose -f docker-compose-services.yml up
file:
services:
  rabbitmq:
    image: rabbitmq:3.8.13-management
    container_name: service-events
    environment:
      - RABBITMQ_DEFAULT_USER=davidmatablog
      - RABBITMQ_DEFAULT_PASS=davidmatablog
    ports:
      - 5672:5672
      - 15672:15672
    networks:
      - myNetwork
  consul:
    image: consul:1.9.10
    container_name: service-discovery
    restart: always
    ports:
      - 8500:8500
    networks:
      - myNetwork
  fabio:
    image: fabiolb/fabio:1.5.15-go1.15.5
    container_name: service-balancer
    environment:
      - FABIO_REGISTRY_CONSUL_ADDR=service-discovery:8500
    ports:
      - 9998:9998
      - 9999:9999
    networks:
      - myNetwork
    depends_on:
      - consul
  config:
    image: nacos/nacos-server:v2.0.3
    container_name: service-config
    environment:
      - MODE=standalone
    ports:
      - 8848:8848
    networks:
      - myNetwork
networks:
  myNetwork:
    name: micro
In this file, Fabio establishes communication with Consul through the FABIO_REGISTRY_CONSUL_ADDR environment variable.
Docker-compose-opensourceTools.yml
This file will run a Redis server, Jaeger server, Prometheus, Grafana and Seq
docker-compose -f docker-compose-opensourceTools.yml up
file:
version: '3'
services:
  database-redis:
    image: redis # the image name was garbled in the original file; a plain redis image is assumed
    container_name: database-redis
    command:
      - redis-server
      - --requirepass
      - "password"
    environment:
      REDIS_PASSWORD: davidpassword
    ports:
      - 6379:6379
  service-tracer:
    image: jaegertracing/all-in-one:latest
    container_name: service-tracer
    environment:
      COLLECTOR_ZIPKIN_HTTP_PORT: '9411'
    networks:
      - micro
    ports:
      - 55775:5775/udp
      - 6831:6831/udp
      - 6832:6832/udp
      - 5778:5778
      - 16686:16686
      - 14268:14268
      - 9411:9411
  service-metrics:
    container_name: service-metrics
    image: prom/prometheus:v2.31.1
    networks:
      - micro
    ports:
      - 9090:9090
    volumes:
      - C:/docker/net5/prometeus/local/prometheus.yml:/etc/prometheus/prometheus.yml
  service-analytics:
    image: grafana/grafana:8.2.3
    container_name: service-analytics
    networks:
      - micro
    ports:
      - 3000:3000
  service-log:
    container_name: service-log
    environment:
      ACCEPT_EULA: "Y"
    image: datalust/seq:2021.3
    ports:
      - 5371:80
networks:
  micro:
    external: true # the "micro" network is created by the other compose files
Containerize the applications
Containerizing an application is very important in a microservice architecture because of portability (write once, run anywhere): a container bundles all the dependencies needed to run the application, ensuring it behaves the same way in any environment. This also brings other benefits, such as enabling CI/CD and scaling services on demand.
The most common tool to containerize applications is Docker.
A complete guide on how to containerize each microservice using Docker and how to deploy applications using Kubernetes can be found in my article on deploying microservices using Docker and Kubernetes.
Conclusion:
Microservices are a great architecture and bring a lot of advantages over monolithic applications, but they also bring a ton of challenges that need to be addressed by adding more components to the system; on top of that, testing, debugging, and finding errors get more complicated.
For that reason, microservices should not be treated as a silver bullet. Architects need to consider whether the complexity of a microservice architecture is worth it for the problem they need to solve.
In the end, it is just a matter of choosing the right tool to solve the problem.