Microservices Integration - Part 2

This is the second part of an article I wrote about Microservices Integration.

In the previous post I raised some concerns about using Domain Events as a means of integrating different Microservices, and how this practice may eventually spoil an architecture by becoming a barrier to the scalability and evolution of the overall system.

Introduction

We live in a world where Data is Business and for many organizations Data is the Business.

Microservices are a very nice concept: they enable scalable and evolutionary systems, they fully leverage the underlying distributed ecosystem, and they give software developers the freedom to choose their own stack, tools and language. Being polyglot is definitely an advantage; however, freedom in big and complex environments often leads to anarchy. Changing the language, design and/or implementation of a module is relatively "easy"; changing the architecture of a system is not, especially in a distributed environment. The architecture is an organizational asset. Data and Data Flows impact architectures more than we may think.

Microservices need to communicate and collaborate with one another in a Distributed Environment. The bigger picture presents a scenario where different applications, from different domains, need to be integrated together. For instance, a CRM, an Accountancy System and an E-Commerce Platform need to exchange data with one another. Furthermore, application domains also need to integrate with the organizational intelligence: applications need to connect with Big Data Analytics platforms too. However, between the different applications and Big Data Analytics there usually lies the vast ETL ocean.

It is great to see new emerging trends like No ETL solutions, APIs vs ETLs, or End To End Visual Pipeline tools. Still, the problems are further down the line. The problem lies in how we structure, design, organize, store, orchestrate, secure and make available our data and data flows.

In an era of Data Liquidity, Data Transparency (intended as a sort of Location Transparency applied to data) is becoming a must. Data is therefore a first class citizen when designing and implementing software systems. This implies not only a technical but also a cultural shift in how we design and implement such systems.

The need for Collaboration

In Microservices based applications, collaboration typically implies exchanging messages over a Message Bus or a Message Broker.

Event Collaboration enables different services to communicate and collaborate with one another in a very loosely coupled fashion. New services can be transparently added, listening and reacting to published events; existing services don't need to know about the newcomers. Events mainly serve two purposes: 

  • Facts: They act as an audit trail for error recovery and troubleshooting;
  • Triggers: They act as an input request for triggering processing logic on other components.

The flow of a system is determined by events: facts that occurred on one service implicitly trigger actions on other services. Events are immutable; they never change over time. Newly generated events change the state of their own service before being emitted.

You may want to learn more about Event Collaboration and related patterns. Martin Fowler has written a great article "Focusing on Events" from which I stole some wordings and definitions.

In current Microservices implementations, Events are very often not just an input for request triggering in a context of Service Collaboration. Events in fact try to act as Service Integration objects, holding all the data necessary for nearby services to fulfill their business requirements. The existence of Fat Events is the exception that confirms the rule. 

To reconcile with Part 1 of the article, Microservices Integration - Part 1: events are typically used to interact with other parties, serving mainly two different purposes: 

  • Service Collaboration: notifying interested parties that something has happened;
  • Service Integration: telling interested parties what exactly happened.

If you agree with the content of these articles, namely that Domain Events are not and should not be used as Integration Events, then you would also agree that we need to look for other tools in our handbag in order to fulfill the need for Service Integration.

The two faces of Integration

Although there are different ways to achieve Microservices Integration, DDD oriented Microservice applications are often implemented using two different patterns which work well together: Event Sourcing (ES) and Command Query Responsibility Segregation (CQRS).

CQRS has historically been inspired by Command Query Separation (CQS), introduced by Bertrand Meyer in his book "Object Oriented Software Construction". The main idea behind CQS, citing Wikipedia, is that "A method should either change state of an object, or return a result, but not both. In other words, asking the question should not change the answer. More formally, methods should return a value only if they are referentially transparent and hence possess no side effects".

According to CQS there are two kinds of methods:

  • Commands: they change the state of an object.
  • Queries: they return the state of an object without modifying it.
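To make the distinction concrete, here is a minimal sketch of CQS on a hypothetical ShoppingCart class (the class and method names are illustrative, not taken from the article):

```python
class ShoppingCart:
    """Illustrates CQS: commands mutate state, queries only read it."""

    def __init__(self):
        self._items = []

    # Command: changes the state of the object and returns nothing
    def add_item(self, product_id, quantity):
        self._items.append((product_id, quantity))

    # Query: returns a result without modifying the state
    def total_quantity(self):
        return sum(quantity for _, quantity in self._items)
```

Asking the question twice yields the same answer, because the query has no side effects.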

Greg Young and Udi Dahan further extended CQS into the CQRS pattern: the former relates to a Class while the latter relates to a Bounded Context.

CQRS consists of two different stores for writes and reads: a store for Commands and a store for Queries, as shown in the diagram below:

CQRS presents different advantages. Some of them are:

  • It decouples the Writing Model from the Reading Model, as they may be different.
  • It enables using different, polyglot technologies between the Writing Side and the Reading Side.
  • It fits very well with Event Driven based applications.

Most importantly, I introduced CQRS, independently from the underlying architecture (Monoliths, SOAs or Microservices), because software developers always have to deal with the two sides of Integration: the Writing Side and the Reading Side.

Although CQRS is not a must in your architecture, it makes this distinction even more explicit: in CQRS, Commands are the Writing Side while Queries are the Reading Side of Integration.

The Writing Side of Integration

The Writing Side is the heart of the system: this is where the Domain Model resides and business logic takes place. The Writing Side holds the State of a service, which is indeed the Source of Truth.

Event Sourcing consists of an Event Store where all historical events, which determine the service's State, are persisted in sequential order. Compared to more traditional implementations, where the Mutating State of a service is persisted, in Event Sourcing all immutable events are persisted and the Mutating State typically resides in memory. From time to time, depending on your business logic, Snapshot Events are issued in order to perform some Log/Event Compaction. The State of a service can be reconstructed at any point in time by replaying previously persisted events.
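The mechanics above can be sketched in a few lines: an append-only log of immutable events, plus a replay function that folds them into the current State. This is a minimal illustration using hypothetical event names from the checkout example, not a production Event Store:

```python
def apply(state, event):
    """Fold a single immutable event into the current State."""
    kind, payload = event
    if kind == "CheckoutRequested":
        return {**state, **payload, "status": "Requested"}
    if kind == "CheckoutConfirmed":
        return {**state, "status": "Confirmed"}
    return state

def replay(events):
    """Reconstruct the State at any point in time from past events."""
    state = {}
    for event in events:
        state = apply(state, event)
    return state

event_store = []                                     # the append-only Event Store
event_store.append(("CheckoutRequested", {"cartId": "c-1"}))
event_store.append(("CheckoutConfirmed", {}))
```

Replaying a prefix of the log yields the State as it was at that earlier point in time.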

If we consider a relatively simple customer checkout scenario of a typical e-commerce application, we may end-up with a workflow similar to the one shown in the diagram below:

In an Event Collaboration fashion, each service reacts to some of the events published by other services. For instance, the Order Service reacts to the CheckoutRequested event published by the Checkout Service and updates its own internal Order status to Created. After updating its internal status, the Order Service publishes an OrderCreated event for other services to consume.

Following is a diagram showing how Events and State relate to each other, throughout time, in the Checkout Service:

Each time a service fails, for whatever reason, it restarts by replaying all past events up to the latest one and re-creates its current internal State.

Sometimes it is convenient, for performance reasons, to perform some Log/Event Compaction. A service State may in fact be made up of many events, not just three events as in our Checkout Service example. Furthermore, a State represents an Entity, and depending on how you design your service you may end up managing all your checkout entities in one single service; this translates into re-processing thousands of entities multiplied by the number of events that occurred. Restarting a service may take a while until the States are re-established, and this may badly impact Availability.

Compaction is performed by emitting a special Snapshot Event on the Event Store. Snapshot Events are internal and they never get published outside the Bounded Context. Following is a diagram describing how Snapshot Events work:

The CheckoutSnapshot event is saved into the Event Store soon after the CheckoutConfirmed event is persisted. Any event before the Snapshot Event is logically deleted from the Event Store. Upon restart, the service re-creates the State by processing the latest available Snapshot and any event after it. Snapshot Events help reduce space utilization and the time services need to restart. They increase the overall system Availability, as fewer messages need to be processed in order to re-establish the current service's State.
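Recovery with snapshots can be sketched as follows: on restart the service looks for the latest Snapshot Event and replays only the events recorded after it. The event names and the simplistic merge-based event application are illustrative assumptions:

```python
def recover(event_store):
    """Rebuild the State from the latest Snapshot plus subsequent events."""
    snapshot_index = None
    for i, (kind, _) in enumerate(event_store):
        if kind == "Snapshot":
            snapshot_index = i                 # remember the most recent snapshot
    if snapshot_index is None:
        state, tail = {}, event_store          # no snapshot: full replay
    else:
        state = dict(event_store[snapshot_index][1])    # snapshot carries full state
        tail = event_store[snapshot_index + 1:]
    for _, payload in tail:
        state = {**state, **payload}           # simplistic event application
    return state

log = [
    ("ItemAdded", {"items": 1}),
    ("ItemAdded", {"items": 2}),
    ("Snapshot", {"items": 2, "status": "Open"}),       # earlier events logically deleted
    ("CheckoutConfirmed", {"status": "Confirmed"}),
]
```

Only two log entries need processing on restart, instead of the whole history.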

Due to the distributed nature of Data Management in Microservices based applications, Availability is a fundamental property that needs to be preserved in such environments. According to the CAP theorem it is impossible for a Distributed System to simultaneously provide more than two out of the following three guarantees: Consistency, Availability, Partition Tolerance:

  • (C)onsistency is the property of a Distributed System whereby every read receives the most recent write or an error. 
  • (A)vailability is the property of a Distributed System whereby every request receives a (non-error) response, without the guarantee that it contains the most recent write. 
  • (P)artition Tolerance is the property of a Distributed System whereby the system continues to operate in the face of a partition (node) / communication failure. 

Without going into too many details (I may write another article about it in the future), in Microservices based applications Consistency is usually traded in favor of Availability and Partition Tolerance.

Due to the distributed nature of Microservices, maintaining Strong Consistency is extremely difficult; Eventual Consistency is instead the norm. Different nodes may have a different view of the world at a given moment, but all nodes will eventually converge to a consistent view of the overall system state.

A slightly more complex checkout scenario may require Transactional Consistency across Bounded Contexts.

Let's assume that by business requirement a checkout is considered to be successful if:

  • An order is successfully created;
  • All order items (products and quantities) exist in the Inventory and they are successfully reserved from the Inventory;
  • The payment is processed successfully.

If any of the above conditions fails, the checkout needs to fail too. Furthermore, in case of failure, any changes to the Order Service, Reservation Service or Payment Service need to be rolled back and the services re-established to their original state.

Achieving Long-Lived Transactions or Distributed Transactions across locations and/or trusted boundaries using the classic ACID model with Two-Phase Commit is not easy.

In such cases the Saga pattern, a pattern for Failure Management, comes to the rescue. Saga advises splitting work into individual transactions that can be reversed after the work has been performed and committed.

Back to our refined example: the Order Service would have an activity that knows both how to create an order and how to cancel it, the Payment Service an activity that knows both how to process a payment and how to cancel it, and the Reservation Service an activity that knows both how to reserve items and how to cancel an item reservation.

According to Saga, only internal activities within a Bounded Context can be performed atomically; the overall Consistency across different Bounded Contexts is taken care of by the Saga itself. The Saga has the responsibility to either get the overall business transaction completed or to leave the system in a known State. In case of failure, a business rollback procedure (it may be the Saga Trip itself) is performed and some compensation steps, or activities in reverse order, are applied.

Following is a diagram illustrating a typical Saga workflow applied to the checkout showcase:

In DDD, Saga Orchestration is typically achieved by using a Process Manager; quite often Saga and Process Manager are used interchangeably. The Routing Slip is another way to implement a Saga, via Choreography. Choreography typically scales better than Orchestration.

Saga does not specify the sequence in which each transaction needs to occur. We can, in fact, perform the transactions in a concurrent fashion by making all requests at the same time: Order, Reservation and Payment requests in our case. Or else we could perform transactions in a Risk Centric Order, depending on our business case. For instance, we could call the Order Service and the Reservation Service concurrently, and if one of the two fails we can just go back and reject the checkout. The Payment Service, usually a third party service, presents a higher risk as it involves a charge back in case of failure. Therefore, following the Risk Centric Order pattern, we could perform the first two calls concurrently (Order and Reservation) and, if they succeed, then perform the third one (Payment). Running concurrent calls improves the overall system performance; however, they are also more difficult to trace and coordinate.

In Saga, some Retry Mechanisms may also exist. Services need to be designed to be Idempotent in order to avoid unwanted behavior like duplicating an order, reserving items twice or, even worse, double charging a customer.
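An idempotent consumer can be sketched by remembering the ids of already-processed messages, so that a retried delivery never charges the customer twice. The service and message shapes below are hypothetical:

```python
class PaymentHandler:
    """Idempotent consumer: a retried message id is recognized and skipped."""

    def __init__(self):
        self._processed_ids = set()
        self.charges = []

    def handle(self, message_id, amount):
        if message_id in self._processed_ids:
            return "duplicate-ignored"         # retry detected, nothing happens
        self._processed_ids.add(message_id)
        self.charges.append(amount)
        return "charged"
```

In a real service the processed-id set would live in durable storage, so idempotency survives restarts.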

Finally Saga presents the following Recovery Modes:

  • Backward: if any transaction fails, then we go back and undo every successful transaction.
  • Forward: we keep retrying failed transactions until they ultimately succeed. This mode obviously implies that each transaction can ultimately be successful.
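A Backward-recovery Saga can be sketched as a list of steps, each pairing a transaction with its compensating action; when a step fails, the completed steps are undone in reverse order. The checkout steps below are illustrative, not the article's exact service APIs:

```python
def run_saga(steps):
    """Execute steps in order; on failure, compensate in reverse (Backward recovery)."""
    completed = []
    for name, action, compensate in steps:
        try:
            action()
            completed.append((name, compensate))
        except Exception:
            for _, undo in reversed(completed):     # undo in reverse order
                undo()
            return "rolled-back"
    return "completed"

log = []

def failing_payment():
    raise RuntimeError("card declined")             # simulate a third party failure

checkout_saga = [
    ("order",       lambda: log.append("create-order"),  lambda: log.append("cancel-order")),
    ("reservation", lambda: log.append("reserve-items"), lambda: log.append("release-items")),
    ("payment",     failing_payment,                     lambda: log.append("refund")),
]
```

When the payment step fails, the reservation is released first and the order cancelled last, leaving the system in a known State.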

We are near the end of the Writing Side of Integration; however, for the sake of completeness, it may be useful to introduce another DDD pattern, named Aggregates, as part of this "chapter".

An Aggregate is a cluster of Domain Objects that can be treated as a unit for the purpose of data changes. It consists of one or more Entities and Value Objects that change together.

Each Aggregate has a root, the Aggregate Root, which is the parent Entity of all members of the Aggregate. The Aggregate Root controls Consistency rules over the child members. Deleting the Aggregate Root, for example, deletes the entire Aggregate. Aggregate Consistency needs to be considered as a whole before changes can be applied to the Aggregate itself.

Getting back to the checkout example, the Order Aggregate consists of an Order Entity, which is the Aggregate Root, one or more Item (ProductId, Quantity, etc.) Value Objects, along with other Value Objects such as PaymentInfo and DeliveryInfo.
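The Order Aggregate can be sketched as follows; note that the Items are immutable Value Objects and that other Aggregates are referenced by key only. The field names are assumptions based on the example:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class Item:                        # Value Object: immutable, no identity of its own
    product_id: str                # references the Product Aggregate by key
    quantity: int

@dataclass
class Order:                       # the Aggregate Root
    order_id: str
    customer_id: str               # references the Customer Aggregate by key
    items: List[Item] = field(default_factory=list)

    def add_item(self, item: Item):
        # the root enforces consistency rules for the whole Aggregate
        if item.quantity <= 0:
            raise ValueError("quantity must be positive")
        self.items.append(item)
```

All changes go through the root, so the Aggregate's invariants are checked as a whole.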

Following is a diagram representing the Order Aggregate:

CustomerId on the Order object and ProductId on the Item object reference the respective primary keys of the Customer Aggregate and the Product Aggregate.

Aggregates help split a Domain Model into smaller pieces that are easier to understand. They allow partitioning the object model across different services, guaranteeing loose coupling between different Aggregates, as they are referenced by primary key and not by object reference like in traditional Object Oriented implementations.

Finally, it may be worth underlining that the Writing Side of Integration corresponds to the C letter of CQRS.

The Reading Side of Integration

The Reading Side is the dark, dirty side of Microservices, mainly due to the distributed nature of Data Management. Microservices are "federated": they hold their own database, data and data model. Models are partitioned across different Microservices.

The priority of developers and data administrators, when designing software systems, is often focused on how the data is stored, as opposed to how it is read. On the other hand, developers often need to build services that query data from multiple sources in complex ways.

The Writing Side deals mostly with Collaboration and Transactional Consistency needs, while the Reading Side deals with:

  • Validation: a service needs to perform some data validations. For instance, a Shipping Service needs to contact the Customer Service in order to validate some customer data, such as the customer id or email.
  • Composition: a service requires additional information from other services in order to produce some operational reports or dashboards. For instance, the Order Service dashboard needs additional static data, such as customer region, country, etc., from the Customer Service.

Validation is relatively simple as it typically requires querying data by primary keys or unique keys. For instance by customer id or by customer email.

By using Projections, views supporting query use cases, Entities or Aggregates get materialized in Repositories. Event Sourcing implies replaying, asynchronously, the streams of events in order to derive the current State, as shown on the right side of the diagram below:

Projecting, the process of aggregating streams of events into Read Models, is decoupled from the process responsible for creating Write Models. Projecting is usually an asynchronous activity. The diagram clearly shows the time elapsing between Write Models (States) and Read Models (Projections).
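A projector of this kind can be sketched as a fold over the event stream into a queryable Read Model; the event names and view shape below are assumptions:

```python
class CustomerProjection:
    """Materializes customer events into a view queryable by id or by email."""

    def __init__(self):
        self.by_id = {}
        self.by_email = {}

    def project(self, event):
        kind, data = event
        if kind == "CustomerRegistered":
            customer = {"id": data["id"], "email": data["email"]}
            self.by_id[data["id"]] = customer
            self.by_email[data["email"]] = customer
        elif kind == "EmailChanged":
            customer = self.by_id[data["id"]]
            del self.by_email[customer["email"]]    # drop the stale index entry
            customer["email"] = data["email"]
            self.by_email[data["email"]] = customer
```

The projector runs asynchronously, so the view lags the Write Model slightly but serves queries without touching it.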

Microservices may have more than one Projection, depending on business needs. Typically Microservices have at least one, which I'll name the Snapshot Projection.

The Snapshot Projection is the representation of the Service's internal State for the external world.

Snapshot Projections are typically accessed by primary or unique keys and serve basic Validation and/or Integration needs. Projections are typically exposed via REST APIs or RPC. For instance, a Customer View may expose an API to find a customer by id (PK) and another one to find a customer by email (UK), like in the example shown below:

The Email Service is responsible for sending out emails to customers once shipments are confirmed and out for delivery, by reacting to ShipmentConfirmed events. The ShipmentConfirmed event contains, among other data, the id of the customer who will receive the shipment. The Email Service needs to retrieve the customer's email address in order to send the email to the correct customer. It calls a dedicated Customer Service API, in a request/response fashion, using the customer id found on the ShipmentConfirmed event, and retrieves the Customer object holding the customer's email address. The Email Service is now able to send the email to the correct recipient.

Exposing Projections via APIs is, in general, a good practice as it guarantees a certain degree of Encapsulation and Access Control to data.

However, things are not always as easy as making simple Lookup Calls. Sometimes, maybe more than sometimes, we need to deal with complex queries, for instance when we need to build views on the UI. Complex views require orchestrating multiple calls that span different Services.

Let's take as an example the search page of an e-commerce application. A search page is usually made of different components. To keep it simple, let's assume our page comprises only two components: search results and recommended products. Thanks to the API Gateway pattern we could coordinate queries to the Search Service and the Recommendation Service, passing a search string to their respective APIs, get the results back and display them on our page. Easy, isn't it? Unfortunately, in a real case scenario it's not as easy as it looks.

When displaying a search page, the business may want to add extra details for each line item. For instance, for each product we may want to show whether the product is in stock and whether it is on sale. The business may want to exclude out-of-stock products from the recommended products, or exclude recommended products from the search results. Finally, they may also want to personalize recommendations and search based on some user's or customer's properties, such as gender, age or others.

The following diagram illustrates all relations and dependencies between the above mentioned services taking into account the new business requirements:

Let's write some naive, non-optimized code for our SearchPageUI Microservice, expressed as Python-like pseudo code:

  customer = CustomerService.fetchCustomer(customerId)

  recommendedItems = RecommendationService.fetchRecommendedItems(searchString, customer)

  for recommendedItem in recommendedItems:
      # hide recommended products that are out of stock
      inventory = InventoryService.fetchInventory(recommendedItem.productId)
      recommendedItem.visible = inventory.quantity > 0
      # flag recommended products that are on sale
      promotion = PromotionService.fetchPromotion(recommendedItem.productId)
      recommendedItem.promotion = promotion is not None

  searchedItems = SearchService.fetchSearchedItems(searchString, customer)

  recommendedProductIds = {item.productId for item in recommendedItems}

  for searchedItem in searchedItems:
      inventory = InventoryService.fetchInventory(searchedItem.productId)
      searchedItem.inStock = inventory.quantity > 0
      promotion = PromotionService.fetchPromotion(searchedItem.productId)
      searchedItem.promotion = promotion is not None
      # exclude from search results any product that is already recommended
      searchedItem.visible = searchedItem.productId not in recommendedProductIds

  return [recommendedItems, searchedItems]

This slightly more complex scenario involves many joins, iterations and API calls. If we consider the case of a Product Page, which contains at least two or more different types of Recommendations (for instance most bought items, most viewed items, recently viewed items, items viewed together, etc.), we can really get an idea of how hard a business Composition is in Microservices based applications.

Nevertheless, we have some room left for optimization. One option is reducing the number of API calls by creating a new, custom, so-called Smart End-Point. This option consists in collapsing many API calls into one. For instance, we can introduce a getEntityByIds End-Point on the Inventory Service which takes as input a list of product ids rather than a single product id. The same technique might be applied to the Promotion and Search Services.
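The batching idea can be sketched like this; getEntityByIds is the hypothetical End-Point named above, backed here by an in-memory fake store:

```python
INVENTORY = {"p-1": 5, "p-2": 0, "p-3": 12}        # fake Inventory data store

def get_entity_by_id(product_id):
    """Classic End-Point: one call (and one network round trip) per product."""
    return {"product_id": product_id, "quantity": INVENTORY.get(product_id, 0)}

def get_entity_by_ids(product_ids):
    """Smart End-Point: one call resolves a whole list of products."""
    return [get_entity_by_id(pid) for pid in product_ids]
```

The page loop above would then make one Inventory call per page instead of one per line item.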

Still, an e-commerce search page is far more sophisticated, especially if we talk about personalized search and/or personalized recommendations. By adding extra layers of complexity, such as Filtering and/or Faceting on categories, sub-categories, brands, sizes and colors, or adding extra display gadgets like ratings and/or reviews, we may end up with lots of entity relations, dependencies and nested API calls. The consequences are Performance Degradation, System Instability, Poor Maintainability and Poor Scalability.

Services depend on each other not only through the functionalities they expose but also through the underlying data they hold. This is especially evident when implementing operational dashboards requiring lots of data Integration and Composition.

The solution does not always lie in creating and orchestrating APIs, and unfortunately we don't always have a one-size-fits-all solution.

Another option we have is to perform data ingestion within the Microservice itself. This is how search or recommendation services are typically implemented: by moving On-Line Dependencies to Off-Line Dependencies. Moving Off-Line translates into removing runtime dependencies from services, as shown in the diagram below:

The above diagram shows how Domain Events help decouple Microservices from runtime dependencies. The Search Service is still coupled to the Recommendation Service APIs, for obvious reasons: the Recommendation Service changes its results based on search terms which are only known at runtime.

Although different alternatives exist, leveraging Domain Events is typically the primary option for exchanging/sharing data between Microservices.

Other viable solutions may be Extract Transform Load (ETL), Change Data Capture (CDC) or calling plain APIs. One problem posed by ETL and API calls is deciding about the Data Lifecycle: how often data needs to be refreshed. Furthermore, ETL adds an extra layer of technology complexity to the entire ecosystem. APIs need to be properly designed and must support caching and pagination. CDC refreshes data in a quasi real time manner; however, as Microservices are based on polyglot persistence (Document Oriented, Graph, Relational, etc.), CDC may not always be applicable.

It seems like the best option we have is using Domain Events. Indeed, message passing is simple, convenient and keeps the system current. Nevertheless, this type of integration via Domain Events may not be optimal. Information can in fact be spread across multiple Domain Events, even within the same Service. For instance, in our example, the Search Service and the Recommendation Service require some data from the Inventory Service. The data will probably be spread across different Domain Events: ProductCreated, QuantityUpdated, ColorAdded, etc. Each Consumer Service needs to know about the logic and the content of each Domain Event published by each Producer Service. Furthermore, as already outlined in the first article, Domain Events in the context of Service Integration may couple services, preventing the overall system from scaling and evolving.

Another possible solution would be implementing a Blind Logic like the one shown in the diagram below:

Consumer Services, Search and Recommendation, blindly consume any event published by the Inventory Service. All events must contain the product id, which is then used to access, via API calls, the Inventory Snapshots. Inventory Snapshots are Snapshot Projections which hold any possible information about a Product in the Inventory.

Thanks to the advent of streaming technologies such as Apache Kafka we can look at solving "traditional" problems in non-traditional ways. We can turn things upside down and see if we can find potentially "better" models. It's time to introduce the concept of Snapshot Propagation.

Snapshot Propagation

Current Microservices implementations suffer from what Ben Stopford defines as the Data Dichotomy: "Data Systems are about exposing data. Microservices are about hiding it. Reality is that business services heavily rely on one another's data and in a Microservice architecture, domain models, transactions and queries are difficult to decompose. On one side encapsulation encourages us to hide data, decoupling services from one another so they can continue to change and grow independently. On the other side data systems have little to do with encapsulation. In fact, databases do everything they can to expose the data they hold. As we evolve and grow service-based systems, we see the effects of this Data Dichotomy play out in a couple of different ways. Either a service interface will grow, exposing an increasing set of functions, to the point it starts to look like some form of kooky, homegrown database. Alternatively, frustration will kick in and we add some way of extracting and moving whole datasets, en masse, from service to service. The more shared data is hidden inside a service boundary, the more complex the interface will likely become, and the harder it will be to join datasets across different services. Data amplifies the service boundary problem."

Pat Helland, author of "Data on the Outside vs Data on the Inside" paper, tells us that we need to think very differently about the data encapsulated inside of a service, and the data that is exchanged between services. Data on the Inside refers to the encapsulated private data contained within a service while Data on the Outside refers to the information that flows between independent services. We need encapsulation so we don’t expose a service’s internal state. However we also need to make it easy for services to get access to Shared Data so they can get on and do their job.

In DDD an Entity represents a single instance of the Domain Object with its own identity. Citing Eric Evans: "Many objects are not fundamentally defined by their attributes, but rather by a thread of continuity and identity. Entities have some properties, and while these properties can change over time their identifier will always stay the same. An entity is therefore a state at one particular point in time and it’s materialized by a thread of contiguous events".

Entities hold information about a certain service State at one particular point in time. Typically most Entity properties (most probably all) are materialized, using Materialized Views, in a data store, following the initial thread of continuity and identity. Finally, Entity data is typically exposed via REST APIs, in a traditional Request/Response pattern, ready to be queried by other services.

Martin Kleppmann gave a very interesting talk "Turning the Database inside out with Apache Samza" where he states: "Databases are global, shared, mutable state. That’s the way it has been since the 1960s, and no amount of NoSQL has changed that. However, most self-respecting developers have got rid of mutable global variables in their code long ago. A more promising model, used in some systems, is to think of a database as an always-growing collection of immutable facts. You can query it at some point in time — but that’s still old, imperative style thinking. A more fruitful approach is to take the streams of facts as they come in, and functionally process them in real-time.".

The basic concept behind Snapshot Propagation is about liberating Snapshot Projections over a stream, in the exact same way Domain Events get published.

In other words, the idea is about turning the following, previously introduced, diagram:

Into the following one:

Snapshots are special "events": they are Integration Events. Although some may argue they are a special type of Domain Event, the reality is that they are not. Domain Events are immutable events which should not carry Entity or Value Object data and, most importantly, they are of interest to the business. Snapshots instead are State Transfer Representation events: they carry and encapsulate domain Entities.

Snapshot Propagation is a technique for distributing change (data) among services. Snapshot Propagation frees Microservices from Requirement/Data Coupling, as all the information about a particular service is pre-computed and available on the Snapshot itself.

Each time a Service State (Data on the Inside) changes due to some event occurring, we make it available via Snapshots (Data on the Outside) for nearby services to consume, typically over a Message Broker or an Event Bus (turning the database inside out).
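The mechanism can be sketched as follows: on every state change the service publishes its full, pre-computed snapshot to a stream, rather than a partial Domain Event. The in-memory stream and the names below are illustrative stand-ins for a broker such as Kafka:

```python
class InventoryService:
    """Publishes a full Snapshot (Data on the Outside) on every state change."""

    def __init__(self, stream):
        self._state = {}                       # Data on the Inside
        self._stream = stream                  # stand-in for a Message Broker topic

    def update_quantity(self, product_id, quantity):
        self._state[product_id] = quantity
        # publish the whole current snapshot, not just the delta
        self._stream.append(("InventorySnapshot", dict(self._state)))

stream = []
inventory = InventoryService(stream)
inventory.update_quantity("p-1", 5)
inventory.update_quantity("p-2", 3)
```

A consumer only needs the latest snapshot; it never has to reconstruct state from event-by-event logic it does not own.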

Snapshot Propagation is not free from criticism:

  • Snapshots tend to be bigger in size compared to Domain Events, and their Serialization / Deserialization may be expensive, especially if they are particularly big. The reality is that Snapshots are Serialized / Deserialized anyway via Projections. Furthermore, if Snapshots are particularly big, most of the time the Domain Model is wrong.
  • By spreading Snapshots around we can't guarantee Data Protection, Security & Access Control. You may use Access Control Lists in order to regulate Data Protection and Access Control. Furthermore, some Snapshot data may be encrypted using Symmetric or Asymmetric Encryption.
  • Domain Events become redundant, as Snapshots may substitute Domain Events for Service Collaboration. In reality, Domain Events may contain some extra attributes that are only functional to a specific service's business logic. These attributes may never be reflected on the Snapshot. Bottom line: Domain Events are for Service Collaboration, Snapshots are for Service Integration.

People may turn their nose up reading these few lines about Snapshot Propagation. Still, practice is often so much more complex than theory.

We have almost reached the end of Microservices Integration - Part 2. However, before wrapping up, I'd like to thank a few people:

  • Alberto Lobrano: for his trust by giving me the chance of running the Big Data Analytics in one of the most exciting projects on earth. For influencing my vision and letting me work with incredible people and technologies.
  • Lorenzo Sommaruga: for reviewing and influencing part of this material. For pushing me in writing these articles and making them simpler.
  • Giampaolo Trapasso: for spending days, nights and week-ends with me discussing these topics and hammering down code in order to make things work in past projects.
  • Vaughn Vernon: a DDD world champion who introduced me to DDD thanks to his simple, effective and pragmatic book: "DOMAIN-DRIVEN DESIGN DISTILLED"
  • Giuseppe Insalaco: for influencing the way I think: breaking things apart to understand the inside, turning things upside down to build a different view, turning complex problems into simple solutions and much more... are all things I learned from him.

I tried not to give you the usual TODO Microservice; I tried to give you as many details as possible. Probably things are not completely clear yet, although I really hope some are. Nevertheless, it's not over yet; I still have something cooking on the outside: The Anatomy of a Microservice.

Following is an anticipation of the next article where I'll also propose a Microservices Blueprint Architecture based on Vert.x & Kafka:

Please, apart from my English, forgive my crappy diagrams ;)


