Cloud Design Patterns. Design and implementation patterns: Part 2


Pipes and Filters

The Pipes and Filters pattern allows breaking down a complex processing task into a collection of smaller, independent, and reusable components (filters) interconnected by channels (pipes). Filters are typically stateless and execute specific processing tasks on the data, while pipes serve as communication channels connecting the output of one filter to the input of another.

The following diagram depicts an architectural scenario in which two data sources supply data to two distinct modules for processing. The processed data from these modules is directed to the sink endpoint. Although the modules share functionally similar tasks, they were independently designed, and the code implementing these tasks is tightly coupled within each module.


This can be decomposed into a set of distinct components, each carrying out a specific task. Subsequently, these filters can be assembled in a pipeline to leverage component reuse and avoid code duplication.

(Figure: Pipes and Filters)

Advantages:

  • Can improve system testability since filters can be tested independently.
  • The system can be divided into modular components (filters), which can be reused and independently scaled up if necessary.
  • Filters can be easily replaced or expanded. They can potentially be executed in parallel, reducing overall processing time.

Challenges:

  • In some cases, a long chain of filters may be required, which increases system complexity and makes end-to-end behavior harder to reason about.
  • If a filter updates or inserts data into persistent storage and then fails, its work may be reassigned to another instance, which can result in the data being written more than once.
  • Similarly, if a filter fails after passing data to the next filter, another instance of that filter may deliver the same data again, so downstream filters should be idempotent.
  • Passing data between filters usually requires serialization and deserialization, which can introduce overhead.
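As an illustration, the pattern can be sketched in a few lines of Python. The filter functions (parse, normalize, deduplicate) and the sample data are purely illustrative, not part of any specific framework:

```python
# Minimal Pipes and Filters sketch: each filter is a stateless function,
# and the "pipe" simply feeds one filter's output into the next.

def parse(lines):
    # Strip whitespace and drop empty lines.
    return [line.strip() for line in lines if line.strip()]

def normalize(items):
    # Lowercase every item.
    return [item.lower() for item in items]

def deduplicate(items):
    # Remove duplicates while preserving order.
    seen, out = set(), []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

def pipeline(data, filters):
    # The pipe: chain each filter's output to the next filter's input.
    for f in filters:
        data = f(data)
    return data

result = pipeline(["  Foo ", "bar", "FOO", ""], [parse, normalize, deduplicate])
# result == ["foo", "bar"]
```

Because each filter is an ordinary function with a single input and output, any filter can be tested, replaced, or reused in another pipeline independently.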

Strangler Fig

Over time, as new functionality is added to an existing system, its architecture can become very complex and difficult to extend and maintain.

The main idea of the Strangler Fig pattern is to build up a new system by gradually migrating the functionality of the legacy system. It's named after a type of tree that grows on others and eventually overtakes and replaces them.

The pattern implementation typically involves the following steps:

  1. Analyze the legacy system and define clear boundaries for decoupling components.
  2. Add a facade that accepts requests intended for the legacy system and directs each request to either the legacy system or the newly implemented components.
  3. Gradually migrate the functionality of the legacy system to the new system.
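Step 2 can be sketched as a minimal routing facade. The route table, paths, and handler names below are hypothetical:

```python
# Strangler Fig facade sketch: requests for already-migrated paths go to
# the new components; everything else falls back to the legacy system.

def legacy_system(path, request):
    # Stub for the legacy system handling a request.
    return "legacy:" + path

def new_billing(request):
    # Stub for a newly migrated component.
    return "new:billing"

# Grows over time as functionality is migrated out of the legacy system.
MIGRATED = {"/billing": new_billing}

def facade(path, request=None):
    # Prefer the new implementation when the path has been migrated;
    # otherwise fall back to the legacy system.
    if path in MIGRATED:
        return MIGRATED[path](request)
    return legacy_system(path, request)
```

As migration progresses, entries move into the route table until the legacy fallback is no longer reached and can be retired.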


Advantages:

  • Minimizes the disruption of existing functionalities.
  • Ensures a gradual transition from a legacy system to the new implementation.
  • Provides improved scalability and maintainability.
  • Provides the flexibility to adopt new technologies and architectural solutions for the new system.

Challenges:

  • Running new and legacy components side by side increases the overall complexity of the system and can introduce additional overhead and potential performance impacts.
  • Integrating new components with an existing legacy system can be challenging.
  • Can result in increased development and maintenance overhead.
  • Full migration can take a significant amount of time.


Leader Election

The Leader Election pattern serves as a distributed computing strategy in which a group of instances elects one of them as the leader for managing and coordinating the actions of the group. This ensures that the instances work together smoothly, avoiding conflicts, resource disputes, and disruptions. Usually, if the leader fails, a new one is elected to take over this responsibility.

This pattern is fundamental in various distributed systems. For example, in a distributed database system, the leader may be responsible for managing distributed locks, coordinating transactions, and ensuring data consistency. The leader can prevent conflicts by deciding which nodes are allowed to make changes to the database at a given time.

Another example is load balancing on clustered web servers, where the leader is responsible for distributing incoming requests evenly among the servers.
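A minimal sketch of the election step, using the core idea of the classic Bully algorithm (the highest-ID reachable node wins). Production systems usually rely on coordination services such as ZooKeeper or etcd leases rather than hand-rolled election:

```python
# Leader election sketch: among the nodes that are currently reachable,
# the one with the highest ID becomes the leader (Bully-style election).

def elect_leader(node_ids, alive):
    """Return the highest reachable node ID, or None if none is reachable.

    alive is a predicate reporting whether a node currently responds;
    in a real cluster this would be a heartbeat or health check.
    """
    candidates = [n for n in node_ids if alive(n)]
    return max(candidates) if candidates else None

nodes = [1, 2, 3, 4, 5]
is_alive = lambda n: n != 5       # pretend node 5 (the old leader) failed
leader = elect_leader(nodes, is_alive)
# leader == 4: the next-highest reachable node takes over
```

When the failed node recovers, it would trigger a new election and reclaim leadership, which is exactly the re-election behavior described above.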

Advantages:

  • Guarantees consistency by ensuring that operations are performed atomically and in the correct order.
  • Provides a centralized mechanism of control in a distributed system.
  • Leader election can be used for load balancing the workload.
  • Prevents conflicts.
  • Provides fault tolerance: if the leader fails, the system will quickly elect a new leader.

Challenges:

  • A new leader election can be challenging, particularly if the former leader's state needs to be transferred to the new leader.
  • In large and dynamic distributed systems, implementing leader election algorithms can be intricate.
  • The leader node can become a single point of failure — if it fails, the entire system can be disrupted until a new leader is elected.
  • Nodes must consistently communicate with one another to participate in leader election, which can introduce additional overhead.


Static Content Hosting

The Static Content Hosting pattern separates static content (multimedia, HTML, CSS, downloadable documents, and other assets) from dynamic content and strategically distributes it across multiple geographic locations to ensure optimized and swift access.

In typical cloud hosting environments, an application's assets and static content can be easily persisted behind a storage service. The storage service handles requests for these resources and helps minimize content loading time. This can also reduce hosting expenses for websites and applications that serve the same static resources. One well-known example of implementing this pattern is the Content Delivery Network (CDN).
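A minimal sketch of separating static from dynamic content at the URL level; the CDN domain and path prefixes below are made up for illustration:

```python
# Static Content Hosting sketch: asset paths are rewritten to a CDN or
# storage origin, while dynamic routes stay on the application host.

STATIC_PREFIXES = ("/css/", "/js/", "/images/", "/downloads/")
CDN_ORIGIN = "https://cdn.example.com"   # hypothetical CDN endpoint

def asset_url(path):
    """Serve static assets from the CDN; everything else stays local."""
    # str.startswith accepts a tuple of prefixes.
    if path.startswith(STATIC_PREFIXES):
        return CDN_ORIGIN + path
    return path

print(asset_url("/css/site.css"))   # served from the CDN
print(asset_url("/api/users"))      # handled by the application
```

The application generates CDN URLs for static assets at render time, so the web servers only ever see requests for dynamic content.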

(Figure: Static Content Hosting)

Advantages:

  • Provides various levels of caching, for example on the client side and on intermediate servers such as a CDN.
  • Can be delivered directly to the user's browser without undergoing server-side processing, reducing latency and improving page load times.
  • Static content can be distributed via CDNs, where content is cached on servers across the globe. This reduces the physical distance between the user and the content.

Challenges:

  • Maintaining static content can be difficult when the content is large. Its modification may require manual intervention, making maintenance more labor-intensive.
  • Static content hosting might not be suitable for applications that rely heavily on user interaction (e.g., social networking platforms), since it cannot handle dynamic, interactive features on its own.
  • Each request for static content is a data transfer, which can potentially increase bandwidth usage, especially if the content is not cached effectively.


Gateway Aggregation

Frequent communication between the client and the backend can negatively impact the overall performance and scalability of the application. Microservice architectures exacerbate this problem because applications made up of many smaller services inherently require more inter-service calls.


The gateway aggregation pattern utilizes a gateway to combine multiple separate requests into a unified request when a client needs to initiate multiple calls to different backend systems to perform a specific operation.

(Figure: The Gateway Aggregation pattern)

The gateway serves as a single entry point for the application to interact with the backend services. It acts as a reverse proxy, receiving incoming requests from the application and forwarding them to the appropriate backend services. It combines the data and responses from these backend services into a single response and sends it back to the application. This aggregation may include combining data from services, filtering, or composing responses to fulfill the client's request.
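The aggregation step can be sketched as follows. The three backend services are stubbed here as plain functions, and a real gateway would typically issue these calls concurrently over the network:

```python
# Gateway Aggregation sketch: one client call fans out to several backend
# services, and the gateway composes their responses into one reply.

def user_service(user_id):
    return {"name": "Ada"}

def orders_service(user_id):
    return {"orders": [101, 102]}

def reviews_service(user_id):
    return {"reviews": 3}

def gateway_dashboard(user_id):
    # Single entry point: fan out to each backend, then merge the
    # partial responses into a single payload for the client.
    response = {"user_id": user_id}
    for call in (user_service, orders_service, reviews_service):
        response.update(call(user_id))
    return response
```

The client makes one request and receives one combined response, instead of three round trips to three separate services.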

Advantages:

  • Reduces the number of requests between the application and backend services.
  • Simplifies client-side development.
  • Can provide centralized authentication and authorization.
  • Can incorporate load balancing mechanisms to evenly distribute incoming requests across multiple backend services.

Challenges:

  • The pattern can increase the complexity of the system.
  • If one or more backend calls take a long time, the aggregated request may time out or the response to the application may be delayed.
  • The gateway may introduce a bottleneck, a single point of failure or add a service coupling between the backend services.


Gateway Routing

The Gateway Routing pattern allows directing requests to multiple services or instances through a single endpoint. It can help in scenarios where clients need to interact with multiple services or multiple instances of a service, or multiple versions of the same service. Frequent changes to any of these will correspondingly result in frequent client updates.

Implementing a gateway in front of applications, services, or deployments that utilizes application-layer routing helps direct requests to the appropriate instances. Clients then communicate with a single endpoint, which simplifies client applications and mitigates the need for constant updates when changes occur.
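A minimal sketch of application-layer routing behind a single endpoint; the route table and service names are illustrative:

```python
# Gateway Routing sketch: a single endpoint dispatches requests to
# different services (or service versions) based on the path prefix.

ROUTES = {
    "/v1/orders": "orders-service-v1",
    "/v2/orders": "orders-service-v2",
    "/users":     "user-service",
}

def route(path):
    """Return the backend for the longest matching route prefix."""
    # Check longer prefixes first so "/v2/orders" wins over "/v2".
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if path.startswith(prefix):
            return ROUTES[prefix]
    return "default-service"
```

Because the version lives in the route table rather than in the client, the gateway can shift traffic between `orders-service-v1` and `orders-service-v2` without any client change, which is what enables the blue-green and A/B scenarios mentioned below.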

(Figure: Gateway Routing)


Advantages:

  • Provides a unified entry point for clients to access multiple services without knowing the specific details of each service.
  • Provides dynamic request routing based on different criteria such as request type, parameters or headers, supporting features such as load balancing, versioning and A/B testing.
  • Provides the ability to deploy multiple versions of a service in parallel, which can be useful for blue-green deployment scenarios.
  • Can optimize latency and improve availability by routing client requests to services in multiple regions for improved performance and resiliency.

Challenges:

  • It can introduce additional complexity to the architecture and can become a single point of failure.
  • Can introduce a bottleneck to the system.
  • Implementing a gateway can slow down the system performance due to additional routing and processing logic, potentially impacting response time and throughput.


External Configuration Store

Many application runtimes read their settings from local configuration files. Any configuration change required after deployment then means redeploying the application, which can result in downtime and operational overhead.

The External Configuration Store pattern is used to manage application configuration settings outside of application code in distributed systems where applications are often deployed across multiple environments and need to be configured dynamically without redeployment.
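A minimal sketch of reading settings from an external store, with a locally cached copy as a fallback. The store is stubbed as a dictionary here; real systems would use a service such as AWS Parameter Store, Azure App Configuration, or Consul:

```python
# External Configuration Store sketch: configuration lives outside the
# deployed application and can change without redeployment; a cached
# copy keeps the application working if the store is unreachable.

REMOTE_STORE = {"feature_x_enabled": True, "timeout_seconds": 30}

class ConfigClient:
    def __init__(self, store):
        self._store = store   # handle to the external store
        self._cache = {}      # last values seen, used as a fallback

    def get(self, key, default=None):
        try:
            value = self._store[key]   # fetch from the external store
            self._cache[key] = value   # remember it locally
            return value
        except Exception:
            # Store unavailable or key missing: fall back to the cached
            # value, then to the supplied default.
            return self._cache.get(key, default)

client = ConfigClient(REMOTE_STORE)
timeout = client.get("timeout_seconds", 10)
```

Updating `REMOTE_STORE` changes what `client.get` returns on the next read, which is the "dynamic reconfiguration without redeployment" property the pattern provides.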


(Figure: External Configuration Store)

Advantages:

  • Moves configuration settings outside of application code, allowing configuration values to be updated dynamically without downtime or redeployment.
  • Provides centralized management of configuration settings for the system and makes it easier to manage them across multiple instances or environments.
  • External configuration stores typically are platform-agnostic and support a wide range of programming languages and frameworks.

Challenges:

  • Implementing External Configuration Storage may require an additional interface for managing the store and a mechanism to ensure consistency between configuration data and applications' behavior.
  • System availability and performance may become dependent on the external module. If the configuration store requires downtime or experiences performance issues, the functionality of the entire system can be affected.


Gateway Offloading

Common cross-cutting concerns such as logging and monitoring, rate limiting, certificate management, and authentication/authorization can be challenging to implement and manage across large numbers of deployments.

The Gateway Offloading pattern offloads certain tasks or responsibilities from the core services to a specialized gateway proxy component.
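The offloading idea can be sketched as a gateway that handles authentication and rate limiting before forwarding to the core service; the token and limit values are illustrative:

```python
# Gateway Offloading sketch: cross-cutting concerns (auth, rate limiting)
# are handled once at the gateway, so the core service stays focused on
# business logic.

VALID_TOKENS = {"secret-token"}   # illustrative credential store
RATE_LIMIT = 2                    # max requests per client (illustrative)
_request_counts = {}

def core_service(request):
    # The core service contains only business logic.
    return "ok:" + request["path"]

def gateway(request):
    # Offloaded concern 1: authentication.
    if request.get("token") not in VALID_TOKENS:
        return "401 Unauthorized"
    # Offloaded concern 2: naive per-client rate limiting.
    client = request.get("client", "anon")
    _request_counts[client] = _request_counts.get(client, 0) + 1
    if _request_counts[client] > RATE_LIMIT:
        return "429 Too Many Requests"
    # Concerns satisfied: forward to the core service.
    return core_service(request)
```

Every service behind the gateway inherits these checks for free, which is why the pattern simplifies and standardizes service configuration.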

(Figure: Gateway Offloading pattern)

Advantages:

  • Streamlines service development, since cross-cutting concerns no longer need to be implemented and managed in every service.
  • Simplifies and standardizes service configuration.
  • Provides better control and visibility over the entire system by facilitating easier monitoring, auditing, and maintenance of security-related resources.

Challenges:

  • Because the gateway is a central component that performs various tasks, it can become a single point of failure.
  • Testing interactions between services and the gateway can be challenging, especially when the gateway performs transformations or request/response manipulation.
  • The gateway can be a performance bottleneck, especially if it is not designed to handle high loads efficiently.

#clouddesignpatterns #cloudcomputing #designpatterns #distributedsystems #computernetworking #networking #network #systemdesign #PipesAndFilters #StranglerFig #LeaderElection #StaticContentHosting #GatewayAggregation #GatewayRouting #ExternalConfigurationStore #GatewayOffloading

More articles by Arthur Sergeyan
