What is best practice in web application deployment?

How many deployment environments do you think you need to properly test all the changes in your pipeline? Three, four... A dozen?

For at least the last decade, the best-practice process for getting web applications from the developer's environment to production has suffered from a fundamental resource limitation: a limited number of development environments constrains the ability to test and deploy. This applies to anything that lives on a server and talks to the Internet, including what we call websites, CMSs, and web applications. Continuous Integration and Deployment changes that, but not all CI/CD tools are created equal…

The Four-Tier deployment model

Every web developer should be familiar with the Four-Tier deployment model of Development, Testing, Staging and Production. In most places, this is the “standard” for building, testing, and serving web applications, and looks like the following:

  1. Development: This is where developers make changes to code, and is usually a local, single-tenant environment (e.g. a developer’s laptop).
  2. Testing: This is an integration environment where developers merge changes to test that they work together. It may also be a Quality Assurance or UAT environment.
  3. Staging: This is where tested changes are run against Production-equivalent infrastructure and data to ensure they will work properly when released.
  4. Production: This is the live production environment.
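The gating implied by the four tiers can be sketched as a simple promotion pipeline. This is an illustrative sketch only, not any particular CI tool's API; the tier names come from the list above, while the gate names (`unit_tests`, `qa_signoff`, and so on) are hypothetical placeholders for whatever checks your team runs at each tier.

```python
# A minimal sketch of promoting a build through the four tiers,
# gating each promotion on the previous tier's checks.

TIERS = ["development", "testing", "staging", "production"]

# Hypothetical gates: the checks a build must pass in each tier
# before it may be promoted onward. Production is the final tier.
GATES = {
    "development": ["unit_tests"],
    "testing": ["integration_tests", "qa_signoff"],
    "staging": ["smoke_tests_on_prod_like_data"],
    "production": [],
}

def run_check(build: str, check: str) -> bool:
    # Stand-in for a real CI job; always passes in this sketch.
    return True

def promote(build: str) -> str:
    """Walk a build through each tier, stopping at the first failed gate."""
    for tier in TIERS:
        for check in GATES[tier]:
            if not run_check(build, check):
                return f"{build} blocked in {tier} by {check}"
    return f"{build} released to production"

print(promote("build-123"))
```

The point of the sketch is that each tier is a serial bottleneck: a build cannot reach Production without occupying every shared environment along the way, which is exactly where the problems described below come from.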

This model has been around for a while and is often held up as a kind of best practice for deployment architectures. It has a number of problems, however...

The Four-Tier model arose from a particular historical confluence of increasing complexity in web application design, testing, and packaging, and physical constraints on computing infrastructure. As software increased in complexity, developers started using more complex packaging methods for deploying that software. This enabled us to start breaking down the deployment model into a series of steps that more closely matched the kinds of testing that were required for complex applications. These steps became our actual environments. We started moving code through these tiers, with each tier professing to offer some kind of guarantee as to the increasing consistency of the data and environment, and the quality of the code.

At the same time, however, the ability to manage the deployment of that same software was constrained by the cost and difficulty of acquiring and managing computing resources (i.e. servers) to serve environments. If you wanted a new environment to test code, you had to buy it, build it, maintain it, and find a way to deploy to it. As a result, most development teams maintained the absolute minimum number of environments or servers necessary to meet their own workflow requirements. In a lot of cases, this was actually less than four, and sometimes as little as two (or one, if you did your development directly onto your production server).

Obviously, cutting back on environments makes it very hard to know if you are testing and deploying code safely and reliably, but even with four environments there are going to be challenges:

  • Code merging must be done at the Development tier, which leads to conflicts through a lack of visibility.
  • Changes can’t be easily tested in isolation, which makes tracking down and verifying those changes harder. This becomes a problem as soon as you have more changes in flight than integration environments, so it usually starts at just two changes!
  • Unless those doing QA are technically adept, changes must undergo testing in shared environments. This can cause those environments to become blocked very easily and create issues with rework.
  • Broken test environments or changes take out testing for all changes, not just the one requiring rework.
  • Failure to keep environments up-to-date with Production leads to out-of-date testing data, incorrect operating system dependencies and other environmental factors, which can cause Production deployments to fail, requiring expensive and embarrassing rollbacks.
  • Version control offers us the ability to isolate changes; however, limited testing environments nullify that benefit and force us to merge changes together early, which causes conflicts and bottlenecks.

Ultimately, the principal problem is that when you have a limited number of servers to deploy to, the chance of any one of those becoming blocked with broken code goes up significantly.
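A back-of-envelope calculation shows why. If each change passing through a shared environment independently breaks it with some probability p, then the chance the environment gets blocked at least once while n changes pass through is 1 − (1 − p)^n, which climbs quickly with n. The numbers below are purely illustrative, not measurements.

```python
# Illustrative arithmetic: probability that a single shared test
# environment is blocked at least once while n changes pass through,
# assuming each change independently breaks it with probability p.

def p_blocked(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Even a modest 5% per-change breakage rate compounds fast.
for n in (1, 5, 10, 20):
    print(f"{n:2d} changes -> {p_blocked(0.05, n):.0%} chance of blockage")
```

With fewer environments, more changes funnel through each one, so n rises for every shared environment and the blockage probability rises with it.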

So how do we solve this problem? You can read the full blog at Platform.sh...


More articles by Christopher SKENE
