Azure DevOps and Terraform... An Introduction

Times have changed greatly over the past few months, and while both my lifestyle and my workday look very different than they did in February, I am grateful that I work in IT and have been able to continue working without much disruption.

In the past six months I have been exposed to new technologies, or at least technologies that are new to me. An important part of my role as a DevOps consultant is to understand the different products, tools and providers on the market. While I spent my first two and a half years in Version 1 focusing on AWS, the principles of well-architected design are cloud-agnostic. I now split my time between AWS and Azure and find my experience in each transferable. Multi-cloud is now the preference for many organisations, so expertise in more than one cloud has become a necessary part of my role.

One "new" tool that I have been spending alot of time using of late is Azure DevOps. Previously known as Visual Studio Team Services (VSTS), Azure DevOps is Microsoft's SaaS suite of DevOps tools - including hosted git repositories, artifact stores and CI/CD pipelines. It operates in the same space as the AWS Code* range of products as well as open source combinations like BitBucket, Jenkins, Jira and others.

Having most recently used the AWS Code* tools in my previous projects, it took me a while to get used to Azure DevOps (AzDO). After the initial onboarding, however, I have found it very useful and have come across many pleasant surprises while learning to use it. On the Cloud Platform team in Version 1, Terraform is our primary tool of choice for Infrastructure as Code (IaC). The appeal of Terraform is that it is cloud-agnostic, it has an ever-growing user base and community, and it gains support for new services and features quickly (sometimes faster than the vendor-native tools). I am going to share some of the lessons that I've learned using AzDO with Terraform.

Open SaaS Product

Something which is obvious to me now is that AzDO is a completely separate product from the Azure Portal. The two play together nicely (more on that shortly), but it confused me at first because I was used to logging into a single AWS console for DevOps tools as well as Compute, Storage and other services. Instead of a single login, AzDO is accessed via dev.azure.com, while Azure is found at portal.azure.com.

AzDO is a Microsoft product which comes with the usual benefits of Azure Active Directory for single sign-on and RBAC. What impressed me most about AzDO, however, is how flexible it is. You can go 'all in' with AzDO and leverage its git hosting, pipelines, boards and artifact stores, or you can easily mix and match with your existing toolchain. AzDO is also both platform and cloud agnostic - Linux, Windows and Mac are all supported platforms, most major programming languages can be used, and continuous delivery is supported to AWS and GCP as well as to Azure.

Everything as code

As I mentioned already, Terraform is our tool of choice when it comes to codifying infrastructure. I was very pleased to learn that AzDO has a Terraform provider which allows us to write our DevOps tooling as code, in addition to our pipelines as code as well as infrastructure as code.

The main benefit of this is that everything is kept together and version controlled. In a single repo we can have everything related to the solution:

  • The code to build and update the repo itself
  • The CI and/or CD pipelines triggered by the repo
  • The documentation for the solution
  • The solution code itself

This is what a Terraform repo might look like:

.
├── .azurepipelines         # Yaml pipeline files
├── .azurerepo              # Terraform to create this repo and policies
├── README.md               # Descriptive readme
├── docs                    # Documentation for this project
├── main.tf                 # Terraform files
├── outputs.tf
└── variables.tf


3 directories, 4 files

Terraform Provider

Everything that's templated in Terraform can be easily re-used across solutions and environments. For example, the below snippet of Terraform code creates a git repository and a CI pipeline that will trigger on commits to master. By changing a single variable ('repo_name'), this code can be reused to apply a standard configuration across all projects.

resource "azuredevops_git_repository" "repository" {

  project_id = data.azuredevops_project.project.id
  name       = var.repo_name

}


resource "azuredevops_build_definition" "build" {

  project_id = data.azuredevops_project.project.id
  name       = format("%s-pipeline", var.repo_name)
  path       = "my-pipelines"


  repository {
    repo_type   = "TfsGit"
    repo_id     = azuredevops_git_repository.repository.id
    branch_name = "refs/heads/master"
    yml_path    = "azure-pipelines.yml"
  }


}
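
The snippets above assume the AzDO provider is already configured and that a `data.azuredevops_project` lookup exists. A minimal sketch of that boilerplate is below; the project name is hypothetical, and the provider reads its credentials from environment variables:

terraform {
  required_providers {
    azuredevops = {
      source = "microsoft/azuredevops"
    }
  }
}

# Assumes AZDO_ORG_SERVICE_URL and AZDO_PERSONAL_ACCESS_TOKEN are set in the
# environment; the provider picks both up by default
provider "azuredevops" {}

# Look up the existing AzDO project by name ("my-project" is a placeholder)
data "azuredevops_project" "project" {
  name = "my-project"
}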


Branch Policies

Another nice feature of AzDO is the ability to easily create policies to protect and validate git branches - this is not always straightforward, or even possible, with some other source control services on the market. I found this very useful for the following reasons:

  • Protecting direct commits to master branch
  • Automating pull request validation before merging
  • Maintaining consistency in your branch history by enforcing a merge strategy when a pull request completes

Again, the Terraform provider supports branch policies and build validations. The below example runs a validation pipeline for pull requests to the master branch.

resource "azuredevops_branch_policy_build_validation" "example" {

  project_id = data.azuredevops_project.project.id

  enabled  = true
  blocking = true #block direct commits to the branch


  settings {
    display_name        = "Trigger vailidation pipeline."
    build_definition_id = azuredevops_build_definition.build.id
    valid_duration      = 720


    scope {
      repository_id  = azuredevops_git_repository.repository.id
      repository_ref = "refs/heads/master"
      match_type     = "Exact"
    }
  }
}
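
The provider covers the other policy types too. As another hedged example, a minimum-reviewers policy along the lines of the sketch below would force every change to master through a reviewed pull request, which also protects the branch from direct commits:

resource "azuredevops_branch_policy_min_reviewers" "example" {
  project_id = data.azuredevops_project.project.id

  enabled  = true
  blocking = true

  settings {
    reviewer_count     = 2     # illustrative - require two approvals
    submitter_can_vote = false # authors cannot approve their own changes

    scope {
      repository_id  = azuredevops_git_repository.repository.id
      repository_ref = "refs/heads/master"
      match_type     = "Exact"
    }
  }
}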


Yaml Pipelines

The YAML format for pipelines in AzDO was introduced as a replacement for the "classic" UI Build and Release pipelines. Using this format allows users to leverage the same features as the visual designer, but with a YAML file that can be managed like any other code. This means we can avail of all the benefits of source control - including versioning and collaboration - for our pipeline definitions. The YAML pipeline file, 'azure-pipelines.yml' by default, is added to the root level of the repository.

The below snippet is a simple example of a "Hello World" YAML pipeline.

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- script: echo "Hello, world!"
  displayName: 'Run a one-line script'

- script: |
    echo "Hello World!"
    echo "I am a pipeline."
  displayName: 'Run a multi-line script'


Pre-Built Tasks

The Visual Studio Marketplace has a wealth of extensions available for AzDO, including pipeline tasks. These tasks are pre-built steps to include in your pipeline for actions such as managing Azure App Service (start, stop and deploy commands) and running bash or PowerShell scripts. There are a number of Terraform tasks available in the Marketplace which help manage some of the more advanced steps in configuring Terraform - such as remote backend management.

The below example leverages the Microsoft DevLabs Terraform tasks to install Terraform 0.12.26, initialise a remote backend in an Azure Storage Account and then run a `terraform apply` using a custom var file.

trigger:
  - master

pool:
  vmImage: 'ubuntu-latest'
      
variables:
  backend_key: 'example.tfstate'
  backend_rg: 'myresourcegroup'
  backend_storageaccount: 'mytfsa'
  backend_container: 'tfstatez'
  tf_dir: '.'
  tf_version: '0.12.26'
  # backend_sub and tf_sub (the service connection names used below) are
  # assumed to be defined as pipeline variables in the AzDO UI
  
steps:
  - task: TerraformInstaller@0
    displayName: Install Terraform
    inputs:
      terraformVersion: $(tf_version)

  - task: TerraformTaskV1@0
    displayName: Initialise Terraform and backend
    inputs:
      provider: 'azurerm'
      command: 'init'
      backendServiceArm: $(backend_sub)
      backendAzureRmResourceGroupName: $(backend_rg)
      backendAzureRmStorageAccountName: $(backend_storageaccount)
      backendAzureRmContainerName: $(backend_container)
      backendAzureRmKey: $(backend_key)
      workingDirectory: $(tf_dir)


  - task: TerraformTaskV1@0
    displayName: Validate Terraform
    inputs:
      provider: 'azurerm'
      command: 'validate'
      workingDirectory: $(tf_dir)


  - task: TerraformTaskV1@0
    displayName: Apply Terraform Plan
    continueOnError: true
    inputs:
      provider: 'azurerm'
      command: 'apply'
      commandOptions: '-var-file=example.tfvars'
      environmentServiceNameAzureRM: $(tf_sub)
      workingDirectory: $(tf_dir)

Tip: One thing I found tricky was finding the parameters required for some of the Marketplace tasks. The visual designer can be a great help for figuring that out, as it populates the YAML block for you.


Environments and Approvals

One of the common use cases for CI/CD pipelines is deploying an artifact or solution to multiple environments. The "classic" Azure Pipelines used a separate Release pipeline for deploying your code to multiple places. In the YAML format, the concepts of builds and releases can be combined in a single pipeline definition.
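
As a rough sketch (the stage and job names here are hypothetical), a single multi-stage YAML file can carry both the build and the release:

stages:
  - stage: Build
    jobs:
      - job: Plan
        steps:
          - script: echo "terraform plan would run here"

  - stage: Release
    dependsOn: Build
    jobs:
      - job: Apply
        steps:
          - script: echo "terraform apply would run here"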

YAML pipelines have a special "deployment" job type that can specify the environment it targets. It makes sense to attach an environment to the `terraform apply` step, which is when resources actually get built.

jobs:
  - deployment: Build_Environment
    displayName: Build Environment
    environment: 'sandbox'
    strategy:
      runOnce:
        deploy:
          steps:
            # Deployment jobs do not check out the repository by default,
            # so fetch the Terraform code explicitly
            - checkout: self

            - task: TerraformTaskV1@0
              displayName: Apply Terraform Plan
              continueOnError: true
              inputs:
                provider: 'azurerm'
                command: 'apply'
                commandOptions: '-var-file=sandbox.tfvars'
                environmentServiceNameAzureRM: $(tf_sub)
                workingDirectory: '.'

By using an environment in this way, we can trace the history and status of an environment across all of our repositories and pipelines, similar to release pipelines.


An added benefit of using environments in this way is the approval functionality. While the YAML pipeline format does not explicitly support approvals or manual steps, these can easily be added to the environment that the pipeline deploys to. The pipeline will wait for the approval conditions to be met before running any jobs that target the environment. Approval checks can be manual approvals, Azure Function invocations or other REST API calls.



TIP: Build Permissions

One thing that caught me out when setting up my first pipelines for Terraform was build permissions. Typically we modularise our Terraform code, so a `terraform init` pulls modules from other repositories. To get that to work in Azure Pipelines, I had to do some googling. I found that the pipeline needs permission to access the other repositories, and that Terraform needs git configured to use the build agent's access token.

  1. Grant "git contribute" permissions to the Build Service Account in the AzDO Project Settings

  2. Set git credentials to use System.AccessToken in your pipeline before running Terraform:

  - script: |
      # Let git (and therefore `terraform init`) authenticate to other repos
      # in the organisation using the build agent's access token
      git config --global http.https://dev.azure.com/${AZDO_ORG}.extraheader "AUTHORIZATION: bearer $(System.AccessToken)"
    displayName: Set Git Credentials for Job
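
With that header in place, module sources can point at other repos in the organisation over HTTPS. A hypothetical example (the org, project and repo names below are made up):

module "network" {
  # Git-over-HTTPS module source in the same AzDO organisation; the
  # extraheader configured above supplies the authentication
  source = "git::https://dev.azure.com/my-org/my-project/_git/terraform-modules//network"
}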

That's all... for now

That's a very high level overview of some of the functionality that Azure DevOps has to offer Terraform engineers. There is so much I haven't touched on here, including Pipeline Stages, Variables (and Groups), Pipeline Templates, Triggers and more. If you're interested in what you've read here, or what's been left out - please let me know.
