Launch an application using Terraform
1. Create a key pair and a security group that allows port 80.
2. Launch an EC2 instance.
3. Attach the key and security group created in step 1 to this EC2 instance.
4. Launch an EBS volume and mount it on /var/www/html.
5. The developer has uploaded the code to a GitHub repo; the repo also contains some images.
6. Copy the GitHub repo code into /var/www/html.
7. Create an S3 bucket, deploy the images from the GitHub repo into it, and make them publicly readable.
8. Create a CloudFront distribution backed by the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
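Before any of the resources below, the configuration needs an AWS provider block. A minimal sketch, assuming the AWS CLI is configured; the region and profile names here are assumptions, so substitute your own:

```
provider "aws" {
  region  = "ap-south-1"  # assumed region; change to yours
  profile = "default"     # assumed AWS CLI profile
}
```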
Now for the practical part.
- Create the security group that allows port 80 (and port 22 for SSH).
resource "aws_security_group" "tasksg" {
  name        = "tasksg"
  description = "Allow SSH and HTTP inbound traffic"
  vpc_id      = "vpc-f89a8790"

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "tasksg"
  }
}
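Step 1 also calls for a key. The code below assumes a key pair named mykey1111 already exists in your AWS account. If you want Terraform to register it for you instead, a sketch like this works, assuming you already have a local public key file (the path is a placeholder):

```
resource "aws_key_pair" "mykey1111" {
  key_name   = "mykey1111"
  public_key = file("C:/Users/hp/Downloads/mykey1111.pub")  # assumed path to your public key
}
```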
- Launch the EC2 instance.
resource "aws_instance" "taskinstance" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = "mykey1111"
  security_groups = [aws_security_group.tasksg.name]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/hp/Downloads/mykey1111.pem")
    # inside its own resource block, the instance must be referenced via self
    host        = self.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "taskinstance"
  }
}
output "myoutaztask1" {
  value = aws_instance.taskinstance.availability_zone
}
- This EC2 instance uses the key and security group created in step 1.
- Launch an EBS volume and mount it on /var/www/html.
# create the EBS volume in the same AZ as the instance
resource "aws_ebs_volume" "taskebs" {
  availability_zone = aws_instance.taskinstance.availability_zone
  size              = 1

  tags = {
    Name = "taskebs"
  }
}
# attach the EBS volume to the instance
resource "aws_volume_attachment" "taskattach" {
  device_name  = "/dev/sdf"
  volume_id    = aws_ebs_volume.taskebs.id
  instance_id  = aws_instance.taskinstance.id
  force_detach = true
}
resource "null_resource" "nullremote3" {
  depends_on = [
    aws_volume_attachment.taskattach,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/hp/Downloads/mykey1111.pem")
    host        = aws_instance.taskinstance.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      # /dev/sdf attached above appears as /dev/xvdf on Xen-based instances
      "sudo mkfs.ext4 /dev/xvdf",
      "sudo mount /dev/xvdf /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/pawar789/multicloud.git /var/www/html/",
      "echo 'http://${aws_cloudfront_distribution.cloud_dist.domain_name}/${aws_s3_bucket_object.ap25bucket.key}' | sudo tee /var/www/html/url.txt",
    ]
  }
}
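To verify the volume is mounted, it helps to also output the instance's public IP so you can SSH in and run lsblk. This output is an extra convenience, not part of the original flow:

```
output "myinstanceip" {
  value = aws_instance.taskinstance.public_ip
}
```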
- The developer has uploaded the code to a GitHub repo; the repo also contains some images.
- Copy the GitHub repo code into /var/www/html.
- Create an S3 bucket, deploy the images from the GitHub repo into it, and make them publicly readable.
resource "aws_s3_bucket" "ap25bucket" {
  bucket        = "ap25bucket"
  acl           = "public-read"
  force_destroy = true

  tags = {
    Name = "ap25bucket"
  }
}
resource "aws_s3_bucket_object" "ap25bucket" {
  depends_on = [
    aws_s3_bucket.ap25bucket,
  ]

  bucket       = aws_s3_bucket.ap25bucket.bucket
  key          = "ap_image.jpg"
  source       = "C:/Users/hp/Downloads/vasily-koloda-8CqDvPuo_kI-unsplash.jpg"
  etag         = filemd5("C:/Users/hp/Downloads/vasily-koloda-8CqDvPuo_kI-unsplash.jpg")
  acl          = "public-read"
  content_type = "image/jpeg"
}
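Note: on newer AWS accounts and provider versions, public ACLs are blocked at the bucket level by default. If the public-read ACL above is rejected, relaxing the bucket's public access block first may be needed; a sketch, with the resource name assumed:

```
resource "aws_s3_bucket_public_access_block" "ap25bucket" {
  bucket                  = aws_s3_bucket.ap25bucket.id
  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}
```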
- Create a CloudFront distribution backed by the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
resource "aws_cloudfront_distribution" "cloud_dist" {
  origin {
    domain_name = aws_s3_bucket.ap25bucket.bucket_regional_domain_name
    origin_id   = "S3-ap25bucket"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "match-viewer"
      origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
    }
  }

  enabled             = true
  default_root_object = "index.html"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-ap25bucket"

    forwarded_values {
      query_string = true

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  connection {
    type        = "ssh"
    user        = "ec2-user"
    host        = aws_instance.taskinstance.public_ip
    port        = 22
    private_key = file("C:/Users/hp/Downloads/mykey1111.pem")
  }

  provisioner "remote-exec" {
    inline = [
      # the distribution must refer to its own attributes via self
      "echo 'http://${self.domain_name}/${aws_s3_bucket_object.ap25bucket.key}' | sudo tee /var/www/html/url.txt",
    ]
  }
}
output "cloudfront_domain_name" {
  value = aws_cloudfront_distribution.cloud_dist.domain_name
}
- After this, run terraform apply.
Output:
GitHub link for repo: