Software Provisioning with Terraform

There are two ways to install and configure software on EC2 instances:

1.    Build a custom AMI with the desired configuration already baked in. Tool: Packer.

2.    Spin up standard AMIs and install and configure the software at launch using tools like Chef, Ansible, Puppet, Salt, or shell scripting.

In this article we will walk through the second approach, using shell scripting.

Steps:

1.    I will use the “vscode-terraform” extension as the editor for Terraform scripts.

https://marketplace.visualstudio.com/items?itemName=mauve.terraform

2.    Follow the previous article to set up an AWS EC2 instance, and copy that code as the starting point.

3.    We shall use the file provisioner to upload the script file. The file provisioner requires a connection to the EC2 instance; we can log in with a user ID and password, or, as a best practice, with a public/private key pair.

-      First, generate a public/private key pair using ssh-keygen. For this sample I did not enter a passphrase when prompted.

$ ssh-keygen -f mykey

This creates two files: mykey (the private key) and mykey.pub (the public key).

-      The file provisioner code will then look as follows.

File name: instance.tf

resource "aws_key_pair" "mykey" {
  key_name   = "mykey"                                # name of the key pair in AWS
  public_key = "${file("${var.PATH_TO_PUBLIC_KEY}")}" # uploads the public key generated by ssh-keygen
}

resource "aws_instance" "example" {
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"
  key_name      = "${aws_key_pair.mykey.key_name}" # tells AWS which public key to install on the instance

  provisioner "file" { # upload the file
    source      = "script.sh"
    destination = "/tmp/script.sh"

    /* Logging in with a user ID and password is possible, but the best
       practice is to use a private/public key pair:

    connection {
      user     = "${var.instance_username}"
      password = "${var.instance_password}"
    }
    */
  }

  connection {
    user        = "${var.INSTANCE_USERNAME}"
    private_key = "${file("${var.PATH_TO_PRIVATE_KEY}")}" # private key used to log in
  }
}
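The instance.tf above references several variables (PATH_TO_PUBLIC_KEY, PATH_TO_PRIVATE_KEY, INSTANCE_USERNAME, AMIS, AWS_REGION) that must be declared elsewhere. A minimal vars.tf sketch in the same Terraform 0.11-style syntax; the defaults, the ubuntu username, and the placeholder AMI ID are assumptions to be replaced with your own values:

```hcl
# vars.tf — declarations for the variables used in instance.tf (a sketch;
# adjust the defaults to match your own setup)
variable "AWS_REGION" {
  default = "us-east-1"
}
variable "PATH_TO_PUBLIC_KEY" {
  default = "mykey.pub"
}
variable "PATH_TO_PRIVATE_KEY" {
  default = "mykey"
}
variable "INSTANCE_USERNAME" {
  default = "ubuntu" # default login user on Ubuntu AMIs
}
variable "AMIS" {
  type = "map"
  default = {
    us-east-1 = "ami-xxxxxxxx" # replace with an AMI ID valid in your region
  }
}
```

Keeping region-to-AMI lookups in a map like this is what makes the `lookup(var.AMIS, var.AWS_REGION)` expression in instance.tf work across regions.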

4.    Once the file is uploaded, it can be executed using the “remote-exec” provisioner.

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/script.sh", # make the script executable (+x)
      "sudo /tmp/script.sh"
    ]
  }
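The article never shows the contents of script.sh. Since step 7 checks that Nginx is running, a minimal sketch could look like the following (an assumption, not the author's original file; it presumes an Ubuntu AMI where apt-get is available). You can create it locally, next to instance.tf, like this:

```shell
# Create a minimal script.sh in the current directory (assumed contents;
# the original article does not show this file). Assumes an Ubuntu AMI.
cat > script.sh <<'EOF'
#!/bin/bash
# Uploaded to /tmp/script.sh by the file provisioner and executed
# via "sudo /tmp/script.sh" by the remote-exec provisioner.
apt-get update
apt-get install -y nginx
EOF
```

The file provisioner's source = "script.sh" path is relative to where you run terraform, so the script must sit alongside instance.tf.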

5.    Let's open a port for the file transfer. Go to the default security group and allow inbound TCP on port 22 (SSH) from your IP.
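Instead of editing the default security group by hand, the rule can also be managed in Terraform alongside the rest of the code. A sketch in the same syntax as the article (the resource name and the CIDR are assumptions; replace 1.2.3.4/32 with your own IP):

```hcl
resource "aws_security_group" "allow-ssh" {
  name        = "allow-ssh"
  description = "Allow inbound SSH from my IP"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["1.2.3.4/32"] # replace with <your-ip>/32
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1" # all protocols
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

To use it, attach the group to the instance with vpc_security_group_ids = ["${aws_security_group.allow-ssh.id}"] inside the aws_instance block.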

6.    Run 'terraform apply'. Once it completes, copy the public IP of the instance.

7.    Browse to the IP to confirm Nginx is running.

8.    Don’t forget to destroy the resources when you are done.

$ terraform destroy


