Creating Infrastructure in AWS cloud using Terraform Hashicorp Language.

Terraform is an open-source infrastructure-as-code tool created by HashiCorp. It lets users define and provision data center infrastructure in a high-level configuration language known as the HashiCorp Configuration Language (HCL), or optionally JSON. In this project, I used Terraform to create a complete infrastructure in the AWS cloud.

My GitHub repository for this project: https://github.com/Rishi964/cloud_task1.git

Prerequisites: the Terraform client must be installed on the base OS. Then create a profile, "Rishabh", which holds the credentials of an IAM user for the "aws" provider.

provider "aws" {
  region  = "ap-south-1"
  profile = "Rishabh"
}
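Before running the configuration, it can also help to pin the provider version so future provider releases do not break the code. A minimal sketch (the version constraint shown is an assumption; pin to whatever release you actually tested with):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # assumed constraint; match the provider version you tested with
      version = "~> 2.0"
    }
  }
}
```

Run `terraform init` once to download the provider plugins, then `terraform apply` to create the infrastructure.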


Step 1: Create a key pair and a security group that allows port 80. I used the "tls_private_key" and "aws_key_pair" resources together to generate the private and public keys, and the "aws_security_group" resource to create a custom security group that allows port 22 (for SSH) and port 80 (for HTTP).

// create private and public keys

resource "tls_private_key" "task1_private_key" {
  algorithm   = "RSA"
  rsa_bits = 4096
}


resource "aws_key_pair" "task1_public_key" {
  key_name   = "task1_public_key"
  public_key = tls_private_key.task1_private_key.public_key_openssh
}
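Note that the generated private key lives only in the Terraform state unless it is written to disk. A minimal sketch using the "local_file" resource to save it for manual SSH use (the file name "task1_key.pem" is an assumption):

```hcl
// save the generated private key locally so it can also be used for manual SSH
resource "local_file" "task1_key_file" {
  content         = tls_private_key.task1_private_key.private_key_pem
  filename        = "task1_key.pem" // assumed file name
  file_permission = "0400"          // restrict permissions, as SSH requires
}
```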


// create security-group 

resource "aws_security_group" "task1_SG" {
  name = "task1_SG"
  description = "Allow TCP inbound traffic"

  ingress {
    description = "SSH port from VPC"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP port from VPC"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  
  tags = {
    Name = "task1_SG"
  }
}



Step 2: Launch an EC2 instance using the key and security group created in step 1. To launch the instance, I used the "aws_instance" resource of the EC2 service.

// create ec2 instance

resource "aws_instance" "task1_os" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  security_groups = [ aws_security_group.task1_SG.name ]
  key_name        = aws_key_pair.task1_public_key.key_name
  
  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = tls_private_key.task1_private_key.private_key_pem
    host     = aws_instance.task1_os.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd"
    ]
  }

  tags = {
    Name = "task1_os"
  }
}

The AMI used is Amazon Linux 2 with the "t2.micro" instance type, which provides 1 vCPU and 1 GiB of RAM. The instance uses the security group "task1_SG" and the key pair created in the previous step. The "remote-exec" provisioner opens an SSH connection (authenticated with "private_key_pem") to the instance and installs httpd, php, and git. After the apply, we can see the public and private IP of the instance.



Step 3: Create an EBS volume and mount it on /var/www/html. I used the "aws_ebs_volume" resource to create the volume in the same availability zone as the instance, then attached it to the instance with "aws_volume_attachment".

// create EBS volume

resource "aws_ebs_volume" "task1_ebs" {
  availability_zone = aws_instance.task1_os.availability_zone
  size  = 1

  tags = {
    Name = "task1_ebs"
  }
}


// volume attachment to instance

resource "aws_volume_attachment" "task1_ebs_attach" {
  depends_on = [ aws_ebs_volume.task1_ebs, ]
 
  device_name = "/dev/sdh"
  volume_id   = aws_ebs_volume.task1_ebs.id
  instance_id = aws_instance.task1_os.id
  force_detach = true
}



// mount volume to the folder

resource "null_resource" "remote-exec1" {
  
  depends_on = [
    aws_volume_attachment.task1_ebs_attach,
  ]

  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = tls_private_key.task1_private_key.private_key_pem
    host     = aws_instance.task1_os.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/Rishi964/cloud_task1.git /var/www/html/"
    ]
  }
}

To mount the volume, we must connect to the instance, format the disk, and then mount it. Note that the device attached as /dev/sdh appears inside the instance as /dev/xvdh. So I used an SSH connection (through a "null_resource" with a "remote-exec" provisioner) to log in, format the disk, mount it on the /var/www/html folder, and clone the Git repository into that folder.



Step 4: Create a snapshot of the EBS volume. To keep a backup, we create a snapshot of the EBS volume using the "aws_ebs_snapshot" resource.

// create ebs-snapshot

resource "aws_ebs_snapshot" "snapshot" {
  depends_on = [null_resource.remote-exec1] 
  volume_id  = aws_ebs_volume.task1_ebs.id

  tags = {
    Name = "ebs_snapshot"
  }
}



Step 5: Create an S3 bucket, deploy the images from the GitHub repo into it, and make them publicly readable. To create the bucket, I used "aws_s3_bucket" in the region "ap-south-1" for fast access from India. The "aws_s3_bucket_object" resource then puts data in the bucket; here I uploaded an image "image.jpg" with "public-read" access.

// create S3 bucket

resource "aws_s3_bucket" "task1-cloud-bucket" {  
  bucket = "task1-cloud-bucket"
  acl    = "public-read"
  region = "ap-south-1"

  tags = {
    Name  = "task1-cloud-bucket"
  }
}


resource "aws_s3_bucket_object" "object" {
  bucket = aws_s3_bucket.task1-cloud-bucket.id
  key    = "image.jpg"
  source = "C:/Users/Risha/Downloads/image.jpg"
  content_type = "image/jpeg"
  acl = "public-read"
}


The bucket policy also needs to be updated so that CloudFront can read the objects. I used the "aws_iam_policy_document" data source together with the "aws_s3_bucket_policy" resource to grant the CloudFront origin access identity read access to the bucket and its objects.

data "aws_iam_policy_document" "s3_policy" {
  
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.task1-cloud-bucket.arn}/*"]

    principals {
      type        = "AWS"
      identifiers = ["${aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn}"]
    }
  }

  statement {
    actions   = ["s3:ListBucket"]
    resources = ["${aws_s3_bucket.task1-cloud-bucket.arn}"]

    principals {
      type        = "AWS"
      identifiers = ["${aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn}"]
    }
  }
}


resource "aws_s3_bucket_policy" "bucket-policy" {
  bucket = aws_s3_bucket.task1-cloud-bucket.id
  policy = data.aws_iam_policy_document.s3_policy.json

}



Step 6: Create a CloudFront distribution backed by the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html. CloudFront has its own access identity, called an Origin Access Identity, so two resources are needed: "aws_cloudfront_origin_access_identity" and "aws_cloudfront_distribution". An origin ID is also required, defined as s3_origin_id in a "locals" block.

// create CloudFront


locals {
  s3_origin_id = "myS3Origin"
}

resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
  comment = "this is the origin access identity"
}


resource "aws_cloudfront_distribution" "s3_distribution" {
  
  origin {
    domain_name = aws_s3_bucket.task1-cloud-bucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path
    }
  }

  enabled             = true
  is_ipv6_enabled     = true

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
  

  connection {
    type = "ssh"
    user = "ec2-user"
    private_key = tls_private_key.task1_private_key.private_key_pem
    host     = aws_instance.task1_os.public_ip
  }
  
  provisioner "remote-exec" {
    inline = [
      "sudo sed -i 's/CF_domain/${aws_cloudfront_distribution.s3_distribution.domain_name}/' /var/www/html/index.html",
      "sudo systemctl restart httpd"
    ]
  }
 
  depends_on = [ aws_s3_bucket.task1-cloud-bucket, ]
}


Now we can use the CloudFront domain name to access the image in the S3 bucket. Finally, a "local-exec" provisioner opens the webpage in Chrome on the local machine, using the instance's public IP.

// Final execution

resource "null_resource" "local-exec1"  {
  depends_on = [
    aws_cloudfront_distribution.s3_distribution,
  ]

  provisioner "local-exec" {
    command = "chrome ${aws_instance.task1_os.public_ip}"
  }
}

output "domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}
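Alongside the CloudFront domain, it can be convenient to output the instance's public IP as well, since that is how the webpage is reached. A minimal sketch:

```hcl
// expose the instance's public IP for quick access to the webpage
output "instance_public_ip" {
  value = aws_instance.task1_os.public_ip
}
```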

The task is now complete: the webpage is served from the instance's public IP, with the image delivered through CloudFront.

Thanks for reading.
