Launch application using EFS by Terraform

Task 2: Create and launch an application using Terraform

1. Create a key pair and a security group that allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the key and security group created in step 1.

4. Launch an EFS volume and mount it at /var/www/html.

5. The developer has uploaded the code to a GitHub repo; the repo also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and make them publicly readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL in the code in /var/www/html.

Now let's get to the practical part.
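Before any of the resources below, Terraform needs an AWS provider block. A minimal sketch (the region and profile here are assumptions; substitute your own credentials setup):

```hcl
# Minimal provider configuration.
# The region and profile are assumptions -- adjust for your account.
provider "aws" {
  region  = "ap-south-1"
  profile = "default"
}
```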

  • Create a key pair and a security group that allows port 80.
resource "aws_security_group" "allow_ports" {
  name        = "allow_ports"
  description = "Allow inbound HTTP, SSH and NFS traffic"

  # HTTP (80)
  ingress {
    description = "HTTP from anywhere"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # SSH (22)
  ingress {
    description = "SSH from anywhere"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # NFS (2049), needed for the EFS mount
  ingress {
    description = "NFS from anywhere"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "allow_ports"
  }
}
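The instance below references a key named mykey1111, which is assumed to have been created beforehand. If you would rather have Terraform generate it, a sketch using the tls and local providers (the resource names here are illustrative assumptions):

```hcl
# Generate the key pair in Terraform instead of creating it manually.
# Resource names are illustrative assumptions.
resource "tls_private_key" "mykey" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "mykey1111" {
  key_name   = "mykey1111"
  public_key = tls_private_key.mykey.public_key_openssh
}

# Save the private key locally so the SSH provisioners can use it.
resource "local_file" "mykey_pem" {
  content         = tls_private_key.mykey.private_key_pem
  filename        = "mykey1111.pem"
  file_permission = "0400"
}
```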
  • Launch an EC2 instance; in it, use the key and security group created in step 1.
resource "aws_instance" "taskinstance" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = "mykey1111"
  security_groups = [aws_security_group.allow_ports.name]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/hp/Downloads/mykey1111.pem")
    host        = aws_instance.taskinstance.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "Webserver"
  }
}
  • Launch an EFS volume and mount it at /var/www/html.
resource "null_resource" "mounting" {
  depends_on = [aws_efs_mount_target.myefs_storage]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/hp/Downloads/mykey1111.pem")
    host        = aws_instance.taskinstance.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      # EFS is a network file system, so it is mounted over NFS --
      # no block device or mkfs is involved.
      "sudo yum install nfs-utils -y",
      "sudo mount -t nfs4 -o nfsvers=4.1 ${aws_efs_mount_target.myefs_storage.dns_name}:/ /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/pawar789/multicloud.git /var/www/html/",
      "echo 'http://${aws_cloudfront_distribution.cloud_dist.domain_name}/${aws_s3_bucket_object.ap25bucket.key}' | sudo tee /var/www/html/url.txt",
      "sudo systemctl restart httpd",
    ]
  }
}
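The null_resource above depends on aws_efs_mount_target.myefs_storage, which is not shown in the article. A sketch of the missing EFS file system and mount target (the names and the reuse of the instance's subnet are assumptions; the mount target must use the security group that opens NFS port 2049):

```hcl
# EFS file system and mount target referenced by the null_resource above.
# Placing the mount target in the instance's subnet is an assumption --
# adjust for your network layout.
resource "aws_efs_file_system" "myefs" {
  creation_token = "myefs"

  tags = {
    Name = "myefs"
  }
}

resource "aws_efs_mount_target" "myefs_storage" {
  file_system_id  = aws_efs_file_system.myefs.id
  subnet_id       = aws_instance.taskinstance.subnet_id
  security_groups = [aws_security_group.allow_ports.id]
}
```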
  • The developer has uploaded the code to a GitHub repo; the repo also contains some images.
  • Copy the GitHub repo code into /var/www/html (already done in step 3).
  • Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and make them publicly readable.
resource "aws_s3_bucket" "ap25bucket" {
  bucket        = "ap25bucket"
  acl           = "public-read"
  force_destroy = true

  tags = {
    Name = "ap25bucket"
  }
}


resource "aws_s3_bucket_object" "ap25bucket" {
  depends_on = [
    aws_s3_bucket.ap25bucket,
  ]

  bucket       = "ap25bucket"
  key          = "ap_image.jpg"
  source       = "C:/Users/hp/Downloads/vasily-koloda-8CqDvPuo_kI-unsplash.jpg"
  etag         = filemd5("C:/Users/hp/Downloads/vasily-koloda-8CqDvPuo_kI-unsplash.jpg")
  acl          = "public-read"
  content_type = "image/jpeg"
}



  • Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL in the code in /var/www/html.
resource "aws_cloudfront_distribution" "cloud_dist" {
  origin {
    domain_name = aws_s3_bucket.ap25bucket.bucket_regional_domain_name
    origin_id   = "S3-ap25bucket"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "match-viewer"
      origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
    }
  }


  enabled             = true
  default_root_object = "index.html"


  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-ap25bucket"


    forwarded_values {
      query_string = true


      cookies {
        forward = "none"
      }
    }


    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }


  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}




output "cloudfront_domain_name" {
  value = aws_cloudfront_distribution.cloud_dist.domain_name
}



  • After this, run terraform apply.
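Assuming all of the above lives in one working directory, the usual workflow is:

```shell
terraform init      # download the AWS, TLS and local providers
terraform validate  # catch syntax errors before touching AWS
terraform apply     # review the plan, then type 'yes' to create everything
```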

Output:

