Terraform Modules

As DevOps engineers and developers, many of us use Terraform to create and maintain AWS infrastructure.

Before adopting modules, we used individual "resource" blocks to create templates for each deployment, either storing separate state files or writing one huge template containing all components. The disadvantage is that the infrastructure creation is not modular.

There is a better way: structure the Terraform code modularly, using Terraform modules to write the code once and reuse it for multiple types of VMs.

Example: to launch Kafka or Mongo VMs, we should be able to pass parameters that control which type of VM gets created.
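The idea can be sketched in a few lines: one module source, different parameters per workload. This is only a sketch; the module path and variable names are the ones built later in this article, and the instance types shown are illustrative.

```hcl
# Sketch: one reusable module, parameterized per VM type.
module "kafka_server" {
  source         = "../../infra/ec2-instance"
  name           = "dev-kafka"
  instance_type  = "m5.xlarge"
  instance_count = 3
}

module "mongo_server" {
  source         = "../../infra/ec2-instance"
  name           = "dev-mongo"
  instance_type  = "r5.xlarge"
  instance_count = 3
}
```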

Below is my design and implementation using a modular approach.

Step 1

Create a folder named "infra". Under it, create the folder that will hold our module code:

infra/ec2-instance

I call this folder ec2-instance. Inside it, create a file called main.tf and paste the following code:

provider "aws" {
  region                  = "${var.region}"
  shared_credentials_file = "/Users/dinar/.aws/credentials"
  profile                 = "${var.profile}"
}

 

resource "aws_instance" "instance" {
  count         = "${var.instance_count}"
  ebs_optimized = true

  # Use the same base AMI for all instances.
  ami = "ami-0bbe6b35405ecebdb" # Ubuntu 18.04 (x86)

  instance_type               = "${var.instance_type}"
  key_name                    = "${var.key_name}"
  monitoring                  = true
  associate_public_ip_address = "${var.associate_public_ip_address}"
  vpc_security_group_ids      = ["${var.security_group_ids}", "${module.common_sg.this_security_group_id}"]
  subnet_id                   = "${element(var.subnet_ids, count.index)}"

  tags = {
    Terraform   = "true"
    Environment = "${var.env}"
    CostCenter  = "${var.cost_center}"
    Name        = "${var.name}-${count.index}"
  }

  volume_tags = {
    Terraform   = "true"
    Environment = "${var.env}"
    CostCenter  = "${var.cost_center}"
    Name        = "${var.name}-${count.index}"
  }

  # root_block_device is a configuration block, not an attribute assignment.
  root_block_device {
    volume_type           = "gp2"
    volume_size           = "${var.root_volume_size}"
    delete_on_termination = true
  }

  lifecycle {
    create_before_destroy = false
  }
}

 

resource "aws_ebs_volume" "volume" {
  count             = "${var.attach_ebs_volume ? var.instance_count : 0}"
  availability_zone = "${element(aws_instance.instance.*.availability_zone, count.index)}"
  type              = "${var.ebs_volume_type}"
  iops              = "${var.ebs_volume_iops}"
  size              = "${var.ebs_volume_size}"
  encrypted         = true

  tags = {
    Terraform   = "true"
    Environment = "${var.env}"
    CostCenter  = "${var.cost_center}"
    Name        = "ebs-${var.name}-${count.index}"
  }
}

 

resource "aws_volume_attachment" "volume-attach" {
  count       = "${var.attach_ebs_volume ? var.instance_count : 0}"
  device_name = "/dev/sdf"
  volume_id   = "${element(aws_ebs_volume.volume.*.id, count.index)}"
  instance_id = "${element(aws_instance.instance.*.id, count.index)}"
}

 

resource "aws_route53_record" "dns" {
  count   = "${var.instance_count}"
  zone_id = "${var.dns_hosted_zone_id}"
  name    = "${var.name}-${count.index}.<yourdns>"
  type    = "A"
  ttl     = 300
  records = ["${element(aws_instance.instance.*.private_ip, count.index)}"]
}

 

module "common_sg" {
  source = "terraform-aws-modules/security-group/aws"

  name        = "common"
  description = "Security group which applies to all servers"
  vpc_id      = "${var.vpc_id}"

  ingress_with_cidr_blocks = [
    {
      from_port   = 22
      to_port     = 22
      protocol    = "tcp"
      description = "SSH from VPC"
      cidr_blocks = "<put your ip range>/16"
    },
  ]

  egress_with_cidr_blocks = [
    {
      from_port   = 0
      to_port     = 65535
      protocol    = "-1" # "-1" means all protocols
      description = "All ports and protocols"
      cidr_blocks = "0.0.0.0/0"
    },
  ]
}

Then create another file called output.tf

output "instance_id" {
  value       = "${aws_instance.instance.*.id}"
  description = "Instance IDs of the instances"
}

output "private_ip" {
  value       = "${aws_instance.instance.*.private_ip}"
  description = "Private IP addresses of the instances"
}

output "public_ip" {
  value       = "${aws_instance.instance.*.public_ip}"
  description = "Public IP addresses of the instances"
}

output "hostnames" {
  value       = "${aws_route53_record.dns.*.name}"
  description = "Hostnames of the instances"
}

 

Then create another file called vars.tf

variable "region" {
  description = "AWS region"
}

variable "profile" {
  description = "AWS credentials profile to use"
}

variable "instance_count" {
  description = "Number of instances to launch"
}

variable "instance_type" {
  description = "Type of instance to launch"
}

variable "key_name" {
  description = "Name of the keypair to use for the VM"
}

variable "security_group_ids" {
  description = "Security group ids to be associated with the VM"
}

variable "dns_hosted_zone_id" {
  description = "Route 53 DNS hosted zone id"
}

variable "vpc_id" {
  description = "VPC id"
}

variable "subnet_ids" {
  type        = "list"
  description = "Subnets in which the instances should be spun up"
}

variable "env" {
  description = "Deployment environment, e.g. DEV, TEST, STAGE, PROD"
}

variable "cost_center" {
  description = "Cost center"
}

variable "name" {
  description = "Name for the instance"
}

variable "root_volume_size" {
  description = "Size of the root volume on the instance"
}

variable "attach_ebs_volume" {
  description = "Whether to attach an additional EBS volume to each instance (true/false)"
}

variable "ebs_volume_type" {
  description = "Type of EBS volume to attach"
}

variable "ebs_volume_iops" {
  description = "IOPS value for io1-type volumes"
}

variable "ebs_volume_size" {
  description = "Size of the EBS volume to attach"
}

variable "associate_public_ip_address" {
  description = "Whether to associate a public IP address with the instance"
  default     = false
}

This completes the infra code.
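At this point the module directory should look roughly like this (layout only; the three files are the ones created above):

```text
infra/
└── ec2-instance/
    ├── main.tf    # provider, instance, EBS volume, DNS record, security group
    ├── output.tf  # instance ids, IPs, hostnames
    └── vars.tf    # module input variables
```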

We will now create the code that invokes the infrastructure module above. For this, I created a folder called

infra-env/us-west-2/dev/kafka

Under it, I created a file called main.tf with the contents below.

locals {
  region            = "us-west-2"
  vpc_cidr          = "<your CIDR>/20"
  vpc_id            = "<your VPC>"
  security_group_id = "<your SG>"
}

 

# -------- Kafka Servers ----------- #

module "kafka_server" {
  source = "../../../../infra/ec2-instance"

  region             = "${local.region}"
  profile            = "dev"
  vpc_id             = "${local.vpc_id}"
  subnet_ids         = ["<subnet1>", "<subnet2>", "<subnet3>"]
  security_group_ids = "${local.security_group_id}"
  dns_hosted_zone_id = "<your hosted zone id>"

  env              = "<your ENV>"
  cost_center      = "<your cost center>"
  name             = "dev-kafka"
  instance_count   = "3"
  instance_type    = "m5.xlarge"
  key_name         = "<your key>"
  root_volume_size = 200

  attach_ebs_volume = true
  ebs_volume_type   = "io1" # Can be "standard", "gp2", "io1", "sc1" or "st1"
  ebs_volume_iops   = 100
  ebs_volume_size   = 100 # GB

  # Instances are spread across availability zones via the subnets
  # supplied above; the AMI is fixed inside the module.
}

Invoking the creation of the VMs works the same way as any other Terraform run:

terraform init
terraform plan -out kafka
terraform apply kafka

This creates three instances with EBS volumes attached and the correct security groups, inside the specified VPC and spread across three availability zones.

Once the infrastructure is created, we can check the plan and state into git. If we want to create Mongo instances, we just create another folder for Mongo and copy the Kafka main.tf, modifying it for Mongo, or add a mongo module to the same main.tf.
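A Mongo deployment then becomes just another module block reusing the same source. This is a sketch: the `mongo_server` label, the placeholder values, and the instance and volume sizing are illustrative, not prescriptive.

```hcl
# infra-env/us-west-2/dev/mongo/main.tf (sketch)
module "mongo_server" {
  source = "../../../../infra/ec2-instance"

  region             = "us-west-2"
  profile            = "dev"
  vpc_id             = "<your VPC>"
  subnet_ids         = ["<subnet1>", "<subnet2>", "<subnet3>"]
  security_group_ids = "<your SG>"
  dns_hosted_zone_id = "<your hosted zone id>"

  env              = "<your ENV>"
  cost_center      = "<your cost center>"
  name             = "dev-mongo"
  instance_count   = "3"
  instance_type    = "r5.xlarge"
  key_name         = "<your key>"
  root_volume_size = 200

  attach_ebs_volume = true
  ebs_volume_type   = "io1"
  ebs_volume_iops   = 100
  ebs_volume_size   = 500 # GB; data volume sized for Mongo
}
```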

I always believe in writing code once and reusing it, and this approach works well. This is just a starting point; it can be extended to VPCs and other AWS components as well.

In the next article I will show how to create a secure Kubernetes cluster.

