Distributed systems using microservices
The vast majority of managers have faced the classic corporate-systems dilemma: "build or buy". That is, finding the best option between building a system from scratch and buying a ready-made one. Roughly speaking, this discussion revolves around "cost" and "timing", the elements the manager uses to build his case and justify his decision. But at a later stage the discussion goes much further, extending into the complex issues of the application life cycle. That is when the headache starts.
With this article I hope to give senior managers and other interested parties an insight into how the new architectural paradigms, used correctly, can be very useful to corporations. Activities such as building, buying, renting, and replacing systems stop being a big deal and become as trivial as hiring another regular supplier.
Let's briefly review how IT worked 10 or 20 years ago: the supply of systems on the market was low. Systems were expensive, and building them in-house was very difficult. Companies would buy systems with specific functionality, integrate them with other systems within the possibilities of the time (databases, text files, etc.), and leave them running ad aeternum. When something new was needed, new acquisitions were made. When a company chose to build, it usually started from a purchased system that then received customizations. This scenario implies many systems with distinct roles built on various technologies, non-standardized integrations, and a large IT staff specialized in each of those technologies. Obsolete technology was common, and worse: entire teams stopped in time, becoming hostages of a system in order to keep their jobs, while the company became hostage of that team to maintain the system it so desperately needed, since professionals in such outdated technologies are scarce on the market. In older companies it is very common to find 100 or more systems in use, among which 2 or 3 are larger and "problematic" because of how hard they are to maintain. Many of these systems are quite similar to one another.
Today the supply of systems is much greater. We have a wide range of systems to buy or rent, and a wide range of technologies to build with if that is the chosen option. Customizable systems are easy to find, backed by a far larger pool of technical professionals than in the past, and the integration possibilities are endless, helped by robust, standardized patterns.
In both scenarios, running systems undergo many adjustments to better fit the company, and their functional coverage expands with business needs. And that is how it should be. Business units are becoming ever more sophisticated, learning to take advantage of technology and demanding "functional adjustments" that swell the system. And therein lies the danger: if changes are not made in a structured way, no matter how modern and robust the technology is, they can cause serious structural problems that are not easy to fix.
These "unstructured" changes can occur for a number of reasons, among them:
No system architecture discipline in the company: the IT team is made up of technicians, but there is no structured approach to systems development. Developers build solutions using only their technical knowledge of the technology employed, creating feature after feature, often without worrying about medium- to long-term structure. Over time the system gets slow and incidents become increasingly frequent ("go-and-return"). Maintaining the source code becomes more and more challenging.
Pressure for fast delivery: this meets the goodwill/naivety of developers, who want to respond quickly (often to demonstrate responsiveness) but ignore basics such as technical sustainability and security. When the team is made up of outdated professionals, the problem is even greater, because the technology is almost certainly not being used in a well-structured way.
Doing what users want: as Josh Holmes (then Microsoft Architecture Head) said well in a lecture I had the privilege of attending, IT needs to do what users need, not what they want. The IT staff must identify the user's "dream" and then provide an adequate solution. Users know what they want but do not know what they need. The anxiety to leave users happy can create serious functional gaps in the system, as well as "stock" screens and routines that will never be used (which goes against the Lean IT practices I will not address in this article).
That said, the bad news is that there is no ready-made recipe. Even though many famous architecture frameworks exist, such as the Zachman Framework, we still depend on people, ideas, and technologies. Every theory needs a "head" to interpret and plan it.
The good news is that the impact of system expansion can be minimized if the concept of distributed systems is applied well.
When we talk about distributed systems, the first idea that comes to mind is the location where the system runs: on the user's PC, in the cloud, etc. But the issue is broader. The distribution strategy I address here refers to the system's features. A system stops being a monolithic application (a single code base in a single technology using a single database) and becomes a set of different applications (various technologies and databases) sharing a single user access interface.
To explain better, I will take Gmail, Google's popular free email service, as an example. This webmail system consists of a web graphical user interface (presentation layer), built with AngularJS, which lets the user navigate quickly through the features (application layer). Once the user signs in, the browser loads all the screens, and as the user accesses each feature, a corresponding API is invoked. Sending an email is one API, listing emails is another, and so on. All the APIs are exposed over a web interface using a technology called REST. These APIs are called microservices.
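To make the idea concrete, here is a minimal sketch of what one such microservice could look like. This is not Gmail's real API; the routes, fields, and in-memory storage are all illustrative assumptions, using Python with Flask:

```python
# A minimal sketch of a REST microservice, assuming a hypothetical
# "emails" resource. Routes and fields are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store standing in for a real database.
EMAILS = [{"id": 1, "to": "ana@example.com", "subject": "Hello"}]

@app.route("/emails", methods=["GET"])
def list_emails():
    # "List the emails" is one API...
    return jsonify(EMAILS)

@app.route("/emails", methods=["POST"])
def send_email():
    # ...and "send an email" is another, independent one.
    email = request.get_json()
    email["id"] = len(EMAILS) + 1
    EMAILS.append(email)
    return jsonify(email), 201

if __name__ == "__main__":
    app.run(port=5000)
```

The point is that each route is an independent entry into the application layer: a frontend only needs to know the URL and the payload format, nothing about what runs behind it.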
The beauty of these microservices is that the interface is the only thing they have in common; nobody knows what runs "behind" them. In other words, the application technology is transparent to the user. In the webmail case above, Google could replace the entire application layer without the user noticing, because the presentation layer stays the same. Microservices isolate the application layer, so the system becomes technology-agnostic.
Another example: imagine a company that works with this microservices concept and is replacing an old CRM system with a modern one. The work basically consists of changing the application layer while preserving the structure of the existing APIs. If the new application turns out to be a bad choice, it is just another swap. But if the company does not use this microservices approach, I am sorry: you will need to train all users on the new system in a "big bang" migration. And good luck. If the chosen system is wrong, you repeat it all over again.
The microservices approach becomes very useful when the focus is unifying digital channels, because we get a great deal of application (or service) reuse. In this case the various channels use their own interfaces but the same backend. An IVR or a mobile app, for example, is nothing more than a frontend that uses the same set of microservices. And this extends to connected devices (Internet of Things, IoT).
Another great advantage of this approach is that things become much more organized. IT will have several work fronts, each responsible for one or more microservices. Because of this, it is easier to grow the teams, and IT delivery capacity increases. Teams become "clients" of one another, with no need to know what runs behind the APIs in the application layer. As a result, teams gain more autonomy.
Since the systems are now distributed applications, they can be installed in different data centers, whatever makes the most sense. For example, applications that require performance and low latency can be hosted in a local data center, while peripheral applications can be hosted in cheaper ones. It is also possible to "play" with application performance by allocating less important and less used applications to lower-performance servers.
Below I go into questions that should be explored when adopting microservices:
Granularity: when the IT staff decides to adopt this concept, the first step is to define the appropriate granularity of the microservices. In other words, define which microservices make sense to exist, considering technical and business aspects. Technical aspects relate to routines that can be reused, such as email sending and user authentication; business aspects relate to features, and only microservices that make sense for the business in question should be released. The most common approach here is domain-driven design (DDD). It is important that the team keeps all digital channels in focus and always thinks about reuse, avoiding duplicate applications. A rough sketch of such a split follows.
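As a purely illustrative sketch (the domain, service names, and routines below are all hypothetical), a first pass at granularity might separate business capabilities from reusable technical routines like this:

```python
# A sketch of granularity decisions for a hypothetical retail domain.
# Each entry maps a business capability (a DDD bounded context) to one
# candidate microservice; purely technical routines become shared services.
BUSINESS_SERVICES = {
    "catalog": ["list_products", "get_product"],
    "orders":  ["create_order", "get_order_status"],
    "billing": ["issue_invoice", "refund"],
}

TECHNICAL_SERVICES = {
    "mailer": ["send_email"],             # reused by orders and billing
    "auth":   ["login", "verify_token"],  # reused by every channel
}

for name, routines in {**BUSINESS_SERVICES, **TECHNICAL_SERVICES}.items():
    print(f"service '{name}' exposes: {', '.join(routines)}")
```

Each key would become a candidate microservice with its own team; the technical services exist precisely because several business services and channels reuse them.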
Coupling: once the microservices are defined, they will interact with one another, and this also requires attention. The way this interaction occurs is called coupling. Routines that need an immediate response with guaranteed execution, such as stock transactions, should be invoked synchronously. Routines that do not require an immediate response, or that simply trigger an action such as sending an email, should be invoked asynchronously through message exchange. The more decoupled (asynchronous), the better: if a single service fails, it need not affect the execution of the rest of the system. The sketch below contrasts the two styles.
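A minimal sketch of the two coupling styles, assuming a hypothetical stock-service URL and using a local queue as a stand-in for a real message broker (RabbitMQ, Kafka, etc.):

```python
# Sketch: synchronous vs. asynchronous coupling between services.
import queue
import threading
import urllib.request

def reserve_stock_sync(order_id: int) -> bytes:
    # A stock transaction needs an immediate, guaranteed answer,
    # so we call the stock service synchronously and wait.
    # (The URL is hypothetical.)
    with urllib.request.urlopen(f"http://stock-service/reserve/{order_id}") as resp:
        return resp.read()

# A local queue standing in for a real message broker.
email_queue: "queue.Queue[dict]" = queue.Queue()

def send_email_async(message: dict) -> None:
    # Sending an email just triggers an action: publish and move on.
    email_queue.put(message)

def email_worker() -> None:
    # A consumer in the email service drains the queue independently;
    # if it fails, the rest of the system keeps running.
    while True:
        msg = email_queue.get()
        print("sending", msg)
        email_queue.task_done()

threading.Thread(target=email_worker, daemon=True).start()
send_email_async({"to": "ana@example.com", "subject": "Order confirmed"})
email_queue.join()
```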
Monitoring: the possibility of failure will always exist. When we are talking about dozens or even hundreds of applications, monitoring deserves real attention. It should cover not only the errors generated, but also execution duration and statistics. Automated alerts need to be configured and sent to the responsible team so it can act immediately; the sketch below shows the idea.
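A minimal sketch of per-routine monitoring, assuming an illustrative latency threshold; a real setup would ship these measurements to a monitoring system and page the responsible team instead of just logging:

```python
# Sketch: capture duration, success/failure, and an alert hook per routine.
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("monitoring")

ALERT_THRESHOLD_SECONDS = 2.0  # illustrative value

def monitored(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        except Exception:
            # Error monitoring: an alerting system would notify
            # the responsible team here.
            log.exception("%s failed", func.__name__)
            raise
        finally:
            # Duration statistics for every call, success or failure.
            duration = time.perf_counter() - start
            log.info("%s took %.3fs", func.__name__, duration)
            if duration > ALERT_THRESHOLD_SECONDS:
                log.warning("%s exceeded latency threshold", func.__name__)
    return wrapper

@monitored
def list_emails():
    time.sleep(0.1)  # simulated work
    return []

list_emails()
```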
Security: because the APIs are made available over HTTP (the web), security must be a major concern. Companies can choose to expose only part of the microservices on the internet, leaving the others restricted to VPN access, for example. A good practice is adopting a bus layer to act as a proxy for requests: the bus takes charge of all the APIs, routing each user to the desired application, and can offer useful features such as user authentication control, throttling, and monitoring. The bus also avoids exposing applications directly to the user: it can sit in a demilitarized zone (DMZ) while the applications remain in restricted areas. It is important to be clear about the two key concepts of user access: authentication and authorization. Authentication is the process that ensures the user is who he claims to be, whether by password, security keys, certificates, etc. Authorization must be present in each microservice to ensure that the user in question can access a given resource, or access it with limitations. A gateway sketch follows.
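A toy sketch of such a bus layer, with hypothetical tokens, roles, and internal addresses. Note that the coarse role check here does not replace the fine-grained authorization each microservice must still perform:

```python
# Sketch: a single entry point that authenticates the caller, then
# routes to internal services. Tokens and hosts are illustrative;
# a real gateway would use JWT/OAuth and actually proxy the request.
from flask import Flask, jsonify, request

app = Flask(__name__)

API_TOKENS = {"secret-token-123": {"user": "ana", "roles": ["emails:read"]}}
INTERNAL_SERVICES = {"emails": "http://10.0.0.5:5000"}  # restricted network

@app.route("/api/<service>/<path:resource>")
def gateway(service, resource):
    # Authentication: is the caller who they claim to be?
    caller = API_TOKENS.get(request.headers.get("X-Api-Token", ""))
    if caller is None:
        return jsonify({"error": "unauthenticated"}), 401
    # Coarse authorization: may this caller reach this service at all?
    if f"{service}:read" not in caller["roles"]:
        return jsonify({"error": "forbidden"}), 403
    upstream = INTERNAL_SERVICES.get(service)
    if upstream is None:
        return jsonify({"error": "unknown service"}), 404
    # A real bus would forward the request to `upstream` here, applying
    # throttling and monitoring before returning the response.
    return jsonify({"forwarded_to": f"{upstream}/{resource}"})
```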
Versioning: another important point is version control of the microservices. The teams maintaining the APIs must take care not to make changes that "break" clients, since a service may have dozens or even hundreds of consumers. A simple and useful convention is semantic versioning, which identifies the current version of each microservice with three numeric sequences in the X.Y.Z format: X changes when the API call changes (clients must be updated), Y changes when an evolution does not require client changes, and Z changes for bug fixes. A new microservice starts at version 1.0.0. After a bug fix, it becomes 1.0.1. After an improvement that requires no change from API consumers (clients), it becomes 1.1.0. When a release changes the required requests (clients calling the old way will no longer work), the version becomes 2.0.0. Side-by-side versions can keep old clients working, as sketched below.
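A sketch of how breaking changes can be exposed side by side, so version 1 clients keep working while version 2 clients migrate (routes and fields are illustrative):

```python
# Sketch: two API versions coexisting during a breaking change.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/v1/emails", methods=["POST"])
def send_email_v1():
    # The 1.y.z contract: a single "recipient" field.
    body = request.get_json()
    return jsonify({"sent_to": [body["recipient"]]}), 201

@app.route("/v2/emails", methods=["POST"])
def send_email_v2():
    # 2.0.0 changed the call (X bumped): "recipients" is now a list,
    # so unmodified v1 clients would fail if pointed here.
    body = request.get_json()
    return jsonify({"sent_to": body["recipients"]}), 201
```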
Other topics, such as continuous integration and automated testing, should also be considered. The number of things to control grows considerably, and automated processes should be planned from the start. Documentation is also key to project success. Below is a taste of the kind of automated check a pipeline would run.
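For the automated-testing side, here is a self-contained sketch of the kind of check a continuous-integration pipeline could run on every commit (the endpoint is, again, illustrative):

```python
# Sketch: an automated test of a microservice endpoint, runnable in CI.
# The app is inlined to keep the example self-contained.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/emails")
def list_emails():
    return jsonify([])

def test_list_emails_returns_empty_list():
    # Flask's built-in test client exercises the route without a server.
    client = app.test_client()
    response = client.get("/emails")
    assert response.status_code == 200
    assert response.get_json() == []

if __name__ == "__main__":
    test_list_emails_returns_empty_list()
    print("ok")
```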
Finally, microservices are an excellent system architecture approach, but one with many details that require attention. Whenever possible, an architect should be involved to contribute to the decisions. There is also a need to raise the maturity of the IT teams: as teams become more autonomous, they must be better prepared, ideally with a DevOps bias. The support of senior management is important as well; after all, no one should be more interested in a sustainable IT over the medium to long term than they are.