The importance of testing infrastructure

In the beginning

I've been developing software for 20 years. I remember that in my first "real" job, testing was something you did by hand while building the application. For example, you'd create a login screen, try a few combinations - a valid one, an incorrect one, missing fields - and that was it. There was no unit testing, integration testing or any other kind of testing.

"Blasphemy!!!" you must be thinking.

Truth be told, we had neither continuous integration nor continuous delivery. Changes were minimal and the testing was done by the users: whenever they found an error, we would fix it and deploy the fix to our customers.

And nowadays

Fast-forward to today: testing has become an essential part of software development, and nowadays it would be unacceptable to deploy a piece of software without properly testing it. Software testing lets us identify and fix bugs before the software goes into production, reducing the risk of failure. Among other things, it is what makes "Continuous Integration", "Continuous Deployment" and "Continuous Delivery" possible.

The rise of the cloud

In the mid-2000s, Amazon (re-)launched AWS (Amazon Web Services); later, in 2010, Microsoft launched Azure and, in 2012, Google launched GCE (Google Compute Engine). With them, a new concept was born: Infrastructure as Code (IaC). It allows DevOps engineers to create infrastructure by writing code, much as developers create applications. Several tools support this, for example Ansible, Chef, Puppet or Terraform, to name a few.

This means we can keep the infrastructure code in version control, integrate it into a CI/CD pipeline and, crucially, test it.
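To make that concrete, here is a minimal sketch of what testing IaC in a pipeline can look like. It assumes a hypothetical, heavily trimmed excerpt of a `terraform show -json` plan; the resource addresses, tag names and policy ("every bucket must have an owner tag and a private ACL") are illustrative assumptions, not a real deployment.

```python
import json

# Hypothetical, trimmed excerpt of a Terraform plan rendered as JSON.
# A real plan has many more fields; we keep only what the check needs.
PLAN = json.loads("""
{
  "resource_changes": [
    {"address": "aws_s3_bucket.logs",
     "change": {"after": {"tags": {"owner": "platform"}, "acl": "private"}}},
    {"address": "aws_s3_bucket.assets",
     "change": {"after": {"tags": {"owner": "web"}, "acl": "public-read"}}}
  ]
}
""")

def policy_violations(plan):
    """Return addresses of resources missing an owner tag or not private."""
    bad = []
    for rc in plan["resource_changes"]:
        after = rc["change"]["after"]
        if "owner" not in after.get("tags", {}) or after.get("acl") != "private":
            bad.append(rc["address"])
    return bad

print(policy_violations(PLAN))  # ['aws_s3_bucket.assets']
```

A check like this runs before `terraform apply`: if the list is non-empty, the pipeline fails and the non-compliant infrastructure is never created.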

Should we spend time testing infrastructure?

Short answer: yes.

Longer answer: definitely yes!

A slightly better answer: I believe that infrastructure testing will become one of the most important aspects of software development. Yes, that's correct: software development.

When we create a new API, we don't deploy it on an existing server in a "typical" data centre. We deploy it to a Docker cluster, behind a load balancer, with some form of data persistence. This means a new application gets new "hardware", and we need to guarantee that it is exactly what we need: that all necessary ports are open, the rest are closed, and the whole thing is secure.
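The "necessary ports open, the rest closed" requirement can itself be written as a tiny test. The port numbers and the allowed set below are illustrative assumptions, not taken from any real deployment:

```python
# Toy port-policy check: compare what the firewall actually exposes
# against what the security policy permits to be public.
OPEN_PORTS = {22, 80, 443, 5432}   # ports found open on the new deployment
ALLOWED = {80, 443}                # ports the policy allows to be public

def publicly_exposed(open_ports, allowed):
    """Return open ports the policy does not allow, sorted for stable output."""
    return sorted(open_ports - allowed)

print(publicly_exposed(OPEN_PORTS, ALLOWED))  # [22, 5432]
```

Here the check fails: SSH (22) and PostgreSQL (5432) are reachable from outside, so the pipeline would reject the deployment before anyone has to audit it by hand.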

Some companies have, as part of their CI/CD pipeline, procedures that determine whether what's being deployed complies with the company's security policies. This removes the need for someone to go over the infrastructure that was just deployed, because we know it has already been tested.

