Persistent workflows on transient "serverless" functions

How to use Azure "serverless" Functions, CQRS and event sourcing to put together a persistent workflow on top of transient "serverless" Azure Functions.

Azure Functions are a "serverless" technology that lets you run small units of code without the cost and maintenance overhead of provisioning, spinning up and maintaining the servers on which that code runs. They are competitively priced on a purely pay-per-use basis, which makes them ideal for workloads with a highly variable demand profile where you don't want to pay an over-provisioning cost during slack periods. Each function is also bound to a specific trigger, which makes them a useful way of creating reactive or responsive systems.

That being said, most useful workloads have some stateful component and operate as a workflow, with step 1 followed by step 2 and so on. Microsoft's Durable Functions extension allows functions to be chained together, but another possibility is to keep the functions themselves stateless and have them interact with a storage account into which the workflow state is persisted in the form of an event stream.

Each function is then responsible for just one step in the processing of the command, and it has to be written in such a way as to guarantee idempotency: before it runs, it first checks that the underlying event stream for the command workflow is in the state it needs it to be in.

Event types in the command workflow

The following event types can occur during the life cycle of a command workflow instance.

  1. Command Created - The first step in the workflow. This event has the details of the command invocation, such as who invoked it.
  2. Parameter Set - A property value to be used when executing the command was supplied.
  3. Validation Error Occurred - There was some validation condition that prevents the command from continuing. For example, a parameter might be invalid or a user's permission level may not be sufficient to continue.
  4. Validation Succeeded - There were no issues with validation, so the command can be executed.
  5. Command Completed - The functional part of the command was performed.
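As a sketch, these five event types could be modelled as simple Python dataclasses. The names and fields here are illustrative assumptions, not taken from any SDK:

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class CommandCreated:
    """The first step in the workflow: the command and who invoked it."""
    command_name: str
    invoked_by: str


@dataclass
class ParameterSet:
    """A property value to be used when executing the command."""
    name: str
    value: Any


@dataclass
class ValidationErrorOccurred:
    """A validation condition that prevents the command from continuing."""
    message: str


@dataclass
class ValidationSucceeded:
    """There were no validation issues, so the command can be executed."""


@dataclass
class CommandCompleted:
    """The functional part of the command was performed."""
```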

You can use these event types to run a projection which tells you the current state of the command workflow at any given point in time. For more complex command workflows you might add further event types.
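Such a projection is just a fold over the event stream. A minimal sketch, assuming events are stored as dictionaries with a `type` field (the status names here are illustrative):

```python
def project_status(events):
    """Fold the event stream into the workflow's current status."""
    status = "unknown"
    for event in events:
        kind = event["type"]
        if kind == "CommandCreated":
            status = "created"
        elif kind == "ValidationErrorOccurred":
            status = "invalid"
        elif kind == "ValidationSucceeded":
            status = "validated"
        elif kind == "CommandCompleted":
            status = "completed"
    return status
```

Each step in the workflow can run this projection first to decide whether it is logically allowed to proceed.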

First log the command

When a new command is triggered the first job of our command workflow is to log it. This creates a new unique event stream for that command which we identify with a globally unique identifier.

In this system we receive all commands as messages on Azure Event Grid custom topics.

The contents of the event grid custom topic message are then turned into the command properties and parameters and are appended onto the command event stream.

Then we trigger the next step in the chain and pass it the unique identifier of the command instance only.
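The logging step might be sketched as follows, with an in-memory dict standing in for the storage account and all function names being illustrative assumptions:

```python
import uuid


def log_command(store, command_name, invoked_by, parameters):
    """Create a new event stream for the command and append its initial events."""
    # A globally unique identifier names the new event stream.
    command_id = str(uuid.uuid4())
    stream = store.setdefault(command_id, [])
    # The Event Grid message contents become the command properties...
    stream.append({
        "type": "CommandCreated",
        "command": command_name,
        "invoked_by": invoked_by,
    })
    # ...and its parameters, appended onto the command event stream.
    for name, value in parameters.items():
        stream.append({"type": "ParameterSet", "name": name, "value": value})
    # Only the unique identifier is passed to the next step in the chain.
    return command_id
```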

Validating the command

The next step is to validate the command. First we would run a projection over the command event stream to make sure that the command is in a state in which it makes logical sense to perform the validation (i.e. it has not already been completed etc.) and, if so, perform the validations that are required for the command.


If any validation errors are present, a Validation Error Occurred event is written for each one; if there are none, a Validation Succeeded event is written.

Then we trigger the next step in the chain and pass it the unique identifier of the command instance only.
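A sketch of this validation step, continuing the in-memory representation above. The guard at the top is the idempotency check: the function does nothing unless the stream is in a state where validation makes logical sense. Each validator here is assumed to return an error message or `None`:

```python
def validate_command(store, command_id, validators):
    """Idempotent validation step for the command workflow."""
    events = store.get(command_id, [])
    types = {e["type"] for e in events}
    # Idempotency guard: the command must exist and must not already
    # have been validated, failed validation, or completed.
    if "CommandCreated" not in types or types & {
        "ValidationErrorOccurred", "ValidationSucceeded", "CommandCompleted"
    }:
        return
    errors = []
    for check in validators:
        message = check(events)  # returns an error message, or None if OK
        if message:
            errors.append(message)
    if errors:
        # One Validation Error Occurred event per error...
        for message in errors:
            events.append({"type": "ValidationErrorOccurred", "message": message})
    else:
        # ...or a single Validation Succeeded event if there were none.
        events.append({"type": "ValidationSucceeded"})
```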

Performing the command

The next step is to perform the actual "work" of the command.  First we would run a projection over the command event stream to make sure that the command is in a state in which it makes logical sense to perform the command (i.e. the command has been validated but has not already been completed).

If it is then the action is performed and a Command Completed event is appended to the command workflow event stream.
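The final step follows the same pattern. A sketch, assuming the "work" of the command is passed in as a callable (an illustrative assumption, not the article's actual implementation):

```python
def perform_command(store, command_id, action):
    """Idempotent execution step: runs only if validated and not yet completed."""
    events = store.get(command_id, [])
    types = {e["type"] for e in events}
    # Idempotency guard: the command must have been validated
    # and must not already have been completed.
    if "ValidationSucceeded" not in types or "CommandCompleted" in types:
        return False
    action(events)  # perform the actual "work" of the command
    events.append({"type": "CommandCompleted"})
    return True
```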

Why do it this way?

By having each step of the workflow absolutely idempotent you can test them independently and run them over any command without worrying that they might cause the workflow to be put in an invalid state.

In addition you would only pay for the compute power used in processing the command workflow itself - you do not pay for any "listening" or "waiting" state processes. This makes it a very frugal way to engage in cloud computing.

The source code relating to this article can be found in this GitHub repository.
