Apply 12-Factor Principles on Application in Cloud Foundry

In this article we will discuss how you can adopt the 12-Factor principles for applications targeted for deployment on Cloud Foundry.

We will start by defining Cloud Foundry and the 12-Factor application, then discuss how to implement the 12-Factor principles in your application, and finally cover which factors are most important to follow.

1.     What is Cloud Foundry?

Cloud Foundry is an open-source platform as a service (PaaS). It provides a container-based platform for application deployment: developers only need to deploy their code using a simple push command on the command line, and the platform takes the code, builds it, and deploys it seamlessly on the underlying containers. The platform provides features such as building and deploying code, an application runtime, routing, load balancing, scaling, monitoring, log management, and backing services in the form of service brokers.

The following diagram shows the platform evolution from traditional IT to a PaaS platform.

Figure 1: Evolution of platforms. Source: Cloud Foundry (2021)


With a PaaS platform like Cloud Foundry, developers only need to worry about their data and application. The platform takes care of underlying concerns such as runtime, middleware, operating system, virtualisation, servers, storage, and networking.

Cloud Foundry builds your code using buildpacks. It has buildpack support for popular development languages such as Java, .NET, Node.js, Go, Python, PHP, and Ruby, and it also supports community-provided buildpacks.

Cloud Foundry provides backing services to applications deployed in the platform through service brokers. These backing services typically run outside of the Cloud Foundry platform. Developers could use backing services such as databases or caches, or use public cloud provider services like AWS S3 or RDS. Developers create service instances of those backing services and bind the instances to their application. The Cloud Foundry runtime provides the required service configuration to the application at runtime so it can interact with the backing service.

Cloud Foundry provides truly self-service agile infrastructure, which is a critical part of cloud-native architecture. Developers can spin up or tear down environments (Dev/Test) without involving platform teams. This massively speeds up development and time to market.

Due to these advantages and features, Cloud Foundry serves as an excellent platform for microservice deployment.


2.     Twelve-Factor (12-Factor) Application

The 12-Factor app (12 Factor 2017) is a collection of patterns for cloud-native application architectures, originally developed by engineers at Heroku. These patterns describe an application archetype that optimizes for the “why” of cloud-native application architectures.

They focus on speed, safety, and scale by emphasizing declarative configuration, stateless/shared-nothing processes that horizontally scale, and an overall loose coupling to the deployment environment.

Cloud-native platforms like Cloud Foundry, Kubernetes, Heroku, and AWS Elastic Beanstalk are optimized for deploying 12-Factor applications.

In the context of 12-Factor, an application refers to a single deployable unit. Organizations will often refer to multiple collaborating deployables as an application. In this context, however, we will refer to multiple collaborating deployables as a distributed system.

Let’s look at each of these factors and see their implementation in Cloud Foundry.

2.1     Codebase

Each deployable application is tracked as one codebase in version control. It may have many deployed instances across multiple environments.

There is always a one-to-one correlation between the codebase and the application:

  • If there are multiple codebases, it’s not an application – it’s a distributed system. Each component in a distributed system is an application, and each can individually comply with 12-Factor.
  • Multiple applications sharing the same code is a violation of 12-Factor. The solution is to factor shared code into libraries which can be included through the dependency manager.

You should have a separate repository for each application’s code. When you deploy your code to Cloud Foundry, you should use an automated CI/CD pipeline that executes on every code commit. The pipeline should build the code; run unit, functional, and integration tests and security scans; and finally deploy your code to Cloud Foundry by invoking the push command.

2.2     Dependencies

An application explicitly declares and isolates dependencies via appropriate tooling (e.g., Maven, NuGet, Bundler, NPM) rather than depending on implicitly realized dependencies in its deployment environment.

In a Spring Boot application, you should declare and isolate dependencies using Maven or Gradle. In a .NET application, you should use NuGet packages.

When Cloud Foundry builds your code, the buildpack injects those dependencies at build time. Cloud Foundry creates a deployment package of your code called a droplet and deploys it in one of the containers. These containers include all the dependencies required to run your application in an isolated environment.

2.3     Config

Configuration or anything that is likely to differ between deployment environments (e.g., development, staging, production) should be injected via operating system-level environment variables.

Applications sometimes store config in the code, which is a violation of 12-Factor. A good litmus test for whether an application has all config correctly factored out of the code is whether the codebase could be made open source at any moment without compromising any credentials.

Violations of this principle include:

  • Storing configuration as constants in the code
  • Configuration files stored alongside the code

In Cloud Foundry, you should store configuration in environment variables (set via the command line or the manifest file) and access those variables from your code. Cloud Foundry also passes credentials for services bound to your application in a special environment variable called VCAP_SERVICES.

Almost all programming languages support accessing environment variables from code.

You should secure sensitive information in the configuration, such as credentials. You could use external services such as CredHub or HashiCorp Vault, binding instances of those services to your application to protect the configuration stored in environment variables.
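As a minimal sketch (in Python, one of the languages Cloud Foundry provides a buildpack for), reading configuration from environment variables might look like the following. The variable names and defaults here are illustrative assumptions, not standard Cloud Foundry settings:

```python
import os

def load_config():
    """Read environment-specific settings from the OS environment.

    DATABASE_URL and CACHE_TTL_SECONDS are hypothetical variable names
    used for illustration; the defaults suit local development only.
    """
    return {
        "database_url": os.environ.get("DATABASE_URL", "postgres://localhost:5432/devdb"),
        "cache_ttl": int(os.environ.get("CACHE_TTL_SECONDS", "300")),
    }

if __name__ == "__main__":
    print(load_config())
```

Locally you would export these variables in your shell; on Cloud Foundry you would set them with `cf set-env` or in the application manifest, so the same code runs unchanged in every environment.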

2.4     Backing Services

Backing services, such as databases or message brokers, are treated as attached resources and consumed identically across all environments.

The code for a 12-Factor application makes no distinction between local and third-party services. To the application, both are attached resources, accessed via a URL or other locator/credentials stored in the config.

The 12-Factor application should be able to swap out a local MySQL database with one managed by a third party (such as Amazon RDS) without any change to the application code. Likewise, a local SMTP server could be swapped with a third-party SMTP service without code changes. In both cases, only the resource handle in the config needs to change.

Resources can be attached to and detached from an application at will. For example, if the application’s database is misbehaving due to a hardware issue, the application’s administrator might spin up a new database server restored from a recent backup. The current production database could be detached, and the new database attached – all without any code changes.

A benefit of backing services is that you do not need to change your code or config when you deploy across environments. For example, suppose you developed your code against MongoDB in the development environment: you simply bind your application to MongoDB in development using a backing service, and when you deploy to production, you bind your application to the production instance of MongoDB.

Various backing services are available in the Cloud Foundry marketplace, such as MySQL or Redis. These services are typically deployed outside of the Cloud Foundry platform. You should create a service instance of the backing service and bind it to your application.

Cloud Foundry provides the runtime configuration to your code in a special environment variable called VCAP_SERVICES. You simply need to read the configuration, such as connection strings, credentials, and URLs, from this environment variable and connect to the backing services.

You can attach or detach backing services via the command line or the application manifest file. You should configure and bind backing services from your CI/CD pipeline.
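As a sketch in Python, reading a bound service’s credentials out of VCAP_SERVICES might look like this. The structure shown (a JSON object keyed by service label, containing a list of bindings) follows the documented format, while the service instance name and credential fields in the usage example are assumptions:

```python
import json
import os

def get_credentials(service_name):
    """Return the credentials block for a bound service instance.

    VCAP_SERVICES is a JSON object keyed by service label; each value
    is a list of bindings with fields such as "name" and "credentials".
    """
    services = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    for bindings in services.values():
        for binding in bindings:
            if binding.get("name") == service_name:
                return binding.get("credentials", {})
    raise KeyError(f"no bound service instance named {service_name!r}")
```

With a hypothetical MySQL instance named `my-mysql` bound to the application, `get_credentials("my-mysql")` would return its credentials dictionary, from which the code can read the connection URI.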

2.5     Build, Release, Run

A codebase is transformed into a deploy through three stages:

  • Build – the build stage converts a code repo into an executable bundle known as a build. Using a version of the code at a commit specified by the deployment process, the build stage fetches dependencies and compiles binaries and assets. An example of a build is a JAR file in Java or a DLL in .NET.
  • Release – the release stage takes the build produced by the build stage and combines it with the deploy’s current config. The resulting release contains both the build and the config and is ready for immediate execution in the execution environment.
  • Run – the run stage runs the app in the execution environment, by launching some set of the app’s processes against a selected release.

The 12-Factor application uses strict separation between the build, release, and run stages. For example, it is impossible to make changes to the code at runtime, since there is no way to propagate those changes back to the build stage.

Cloud Foundry takes build, release, and run seriously and provides strict separation between these stages.

When you push the application (e.g., a JAR file using the push command), Cloud Foundry performs three stages:

  1. Build – invokes the build process by running the appropriate buildpack for your application (for example, the Java buildpack for a Java application). It fetches the dependencies and builds the application package, called a droplet.
  2. Release – deploys the droplet (deployment package) along with the config (environment variables) into the application container, ready to run.
  3. Run – starts the application and constantly performs health checks.

2.6     Processes

The app executes as one or more stateless processes (e.g., master/workers) that share nothing. Any necessary state is externalized to a backing service such as a database.

The 12-Factor application is assumed to be stateless, which means that anything cached in memory or on disk will not be available on a future request or job. Each request could be served by a different process or service instance of the application. Even when running only one process or service instance, a restart will usually wipe out all local (e.g., memory and filesystem) state.

Some web systems rely on “sticky sessions” – that is, caching user session data in the memory of the application’s process and expecting future requests from the same visitor to be routed to the same process. Sticky sessions are a violation of 12-Factor and should never be used or relied upon.

If an application needs to maintain state for a particular use case, it should maintain that state in a backing service such as Redis.

12-Factor applications should be designed with immutable infrastructure in mind. Service instances could be shut down or restarted on different nodes at any time by the platform. 

In Cloud Foundry, each application instance runs in its own container with an ephemeral filesystem, so anything written locally is lost when the instance restarts. If you need to work with files, use a backing service such as AWS S3, or use a volume service such as NFS.

When you deploy your application in Cloud Foundry, it runs as a separate process, but you need to ensure that the process is stateless and that any necessary state is maintained in backing services.

2.7     Port Binding

The 12-Factor app is completely self-contained and does not rely on runtime injection of a web server into the execution environment to create a web-facing service. The web app exports HTTP as a service by binding to a port and listening to requests coming in on that port.

Violations of this principle include:

  • An application that relies on the runtime injection of a web server into the execution environment to create a web-facing service. While the Java buildpack in Cloud Foundry does support the injection of Tomcat to run WAR files, you should not rely on it. Instead, use Spring Boot to package the web server with your app.
  • Applications that have hard-coded ports.

Cloud Foundry supports both HTTP and TCP port binding via the $PORT environment variable, which is passed to the application container at runtime. Your application should read $PORT at runtime and listen for incoming requests on that port.
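A self-contained web process that binds to $PORT can be sketched in Python with the standard library; the fallback port 8080 and the health-check handler are illustrative choices, not Cloud Foundry requirements:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Minimal handler that answers every GET with a 200 OK."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"OK")

def make_server():
    # Bind to whatever port the platform assigns via $PORT; fall back
    # to 8080 for local development. Never hard-code the port.
    port = int(os.environ.get("PORT", "8080"))
    return HTTPServer(("0.0.0.0", port), HealthHandler)
```

Calling `make_server().serve_forever()` starts the process; because the server is packaged with the app, no web server needs to be injected by the environment.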

2.8     Concurrency

In the 12-Factor application, processes are first-class citizens. Each application instance is a stateless process. To scale, we add processes or application instances.

Concurrency is usually accomplished by scaling out app processes horizontally.

The share-nothing, horizontally partitionable nature of 12-Factor application processes means that adding more concurrency is a simple and reliable operation.

Using this model, the developer can architect their distributed system to handle diverse loads by assigning each type of work to a process type.

12-Factor app processes should never daemonize or write PID files. Instead, rely on Cloud Foundry’s Diego to respond to crashed processes, the Cloud Controller to handle user-initiated restarts and shutdowns, and the Loggregator to manage output streams.

In Cloud Foundry, you can scale your application vertically or horizontally using the scale command. You scale vertically to increase capacity (e.g., memory or disk size) and horizontally to add instances and increase the load your application can sustain.

You can configure your application to auto-scale in Cloud Foundry based on metrics such as CPU or memory usage. For example, you could configure auto-scaling so that when CPU usage rises above 70%, the platform spins up 4 additional service instances, and when CPU usage drops below 30%, the service scales down to 2 instances.

2.9     Disposability

The 12-Factor application’s processes are disposable, meaning they can be started or stopped at a moment’s notice. This facilitates fast elastic scaling, rapid deployment of code or config changes, and robustness of production deploys. When Cloud Foundry’s Diego detects a failed instance, a replacement can be started quickly.

Applications should strive to minimize start-up time. A short start-up time provides more agility for the release process and scaling up; and it aids robustness because the Cloud Foundry’s Diego can more easily move processes to new physical machines when warranted.

At the platform level, Cloud Foundry minimizes start-up time by creating and caching droplets. At the application level, you should design your application for fast start-up and graceful shutdown.

An application should not execute bootstrap scripts (e.g., database setup/migration scripts) during start-up, as this violates the 12-Factor principle; run such tasks as one-off admin processes instead. The same applies to shutdown.

When you push an application to Cloud Foundry, the following timeouts apply:

  • CF_STAGING_TIMEOUT: Controls the maximum time that the cf CLI waits for an app to stage after Cloud Foundry successfully uploads and packages the application. A value set in minutes. The default value is 15 minutes. Note that this is cf CLI environment variable which is different from the application environment variable
  • CF_STARTUP_TIMEOUT: Controls the maximum time that the cf CLI waits for an app to start. A value set in minutes. The default value is 5 minutes. Note that this is cf CLI environment variable which is different from the application environment variable
  • cf push -t TIMEOUT: Controls the maximum time that Cloud Foundry allows to elapse between starting an app and the first healthy response from the app. When you use this flag, the cf CLI ignores any app start timeout value set in the manifest. A value set in seconds. The default is 60 sec. You can set this value either in the push command or in the application manifest file.

 When CF requests a shutdown of your app instance, either in response to the command cf scale APPNAME -i NUMBER-OF-INSTANCES or because of a system event, CF sends the app process in the container a SIGTERM. The process has ten seconds to shut down gracefully. If the process has not exited after ten seconds, CF sends a SIGKILL.

Applications must finish their in-flight jobs within ten seconds of receiving the SIGTERM, before CF terminates the application with a SIGKILL. For instance, a web application must finish processing existing requests and stop accepting new ones.

2.10  Dev / Prod Parity

The 12-Factor application is designed for continuous deployment by keeping the gap between development and production small.

In Cloud Foundry, this can be supported using Orgs and Spaces, which provide logical separation between environments.

Developers sometimes find great appeal in using a lightweight backing service in their local environments, while a more serious and robust backing service will be used in production. For example, using SQLite locally and PostgreSQL in production; or local process memory for caching in development and Memcached in production.

The 12-Factor application resists the urge to use different backing services between development and production, even when adapters theoretically abstract away any differences in backing services. Differences between backing services mean that tiny incompatibilities crop up, causing code that worked and passed tests in development or staging to fail in production. These types of errors create friction that disincentivizes continuous deployment. The cost of this friction and the subsequent dampening of continuous deployment is extremely high when considered in aggregate over the lifetime of an application.

In Cloud Foundry, this can be achieved by using the same types of backing services in both non-production and production.

Continuous delivery and deployment are enabled by keeping development, staging, and production environments as similar as possible.

2.11  Logs

Logs are the stream of aggregated, time-ordered events, collected from the output streams of all running processes and backing services.

A 12-Factor app never concerns itself with routing or storage of its output stream. It should not attempt to write or manage log files. Instead, each running process writes its event stream, unbuffered, to stdout.

Cloud Foundry’s Loggregator system supports this. From the application code, a developer should only write to stdout or stderr; the Cloud Foundry platform takes care of the logs, providing a mechanism to stream them to a third-party or centralized log aggregator like Splunk.

One violation of this principle is writing log messages to a local file or database.
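In Python, following this factor means configuring the logger to write to stdout rather than to a file; a minimal sketch (the logger name and format are illustrative assumptions):

```python
import logging
import sys

# Send the application's event stream to stdout; the platform's
# Loggregator collects it and can forward it to a configured drain.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s"))

logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order processed")  # visible via `cf logs`
```

Note there is no `FileHandler` anywhere: the application never routes or stores its own logs, which is exactly the violation called out above.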

2.12  Admin Processes

The process formation is the array of processes that are used to do the application’s regular business (such as handling web requests) as it runs. Separately, developers will often wish to do one-off administrative or maintenance tasks for the app, such as:

  • Running database migrations
  • Running a console to run arbitrary code or inspect the application’s models against the live database
  • Running one-time scripts committed into the application’s repo

One-off admin processes should be run in an identical environment as the regular long-running processes of the app. They run against a release, using the same codebase and config as any process run against that release. Admin code must ship with application code to avoid synchronization issues.

Cloud Foundry supports the execution of one-off processes via tasks. You should run admin processes as tasks from the command line. A task is typically an application that you execute using the task command in Cloud Foundry. Cloud Foundry executes the task for a finite amount of time and then stops it. Cloud Foundry runs each task in its own container, provides the minimum resources required to execute it, and destroys the container once the task is completed.

Tasks are a great way in Cloud Foundry to execute operations such as database migrations, sending emails, executing batch jobs, running database scripts, image processing, optimizing indexes, uploading data, backing up data, and downloading content. There are many more use cases for which you could execute a task in Cloud Foundry.

3.        Final Words

The 12-Factor principles are the backbone of cloud-native architecture. Failure to adopt them will prevent your application from being easily deployable, resilient, and scalable, and you will not get the benefits of self-service agile platforms like Cloud Foundry.

An application should aim to satisfy all twelve factors as discussed, but at a minimum it should follow these four factors:

  • Processes
  • Port Binding
  • Disposability
  • Logs


References: 

12 Factor 2017, ‘The Twelve-Factor App’, viewed March 2022, <https://12factor.net>

Cloud Foundry 2021, ‘Cloud Foundry Overview’, viewed March 2022, <https://docs.cloudfoundry.org/concepts/overview.html>


