Hybrid Multi Cloud Computing Task-1

This is my first task of Hybrid Multi Cloud Computing under the mentorship of Mr. Vimal Daga.

Hello everyone! Hope you all are doing well.

Amazon Web Services (AWS) is a subsidiary of Amazon that provides on-demand cloud computing platforms and APIs to individuals, companies, and governments on a metered, pay-as-you-go basis. Today, many companies want to migrate their business to the cloud for secure storage, and for this AWS is considered one of the most trustworthy providers.

AWS provides a variety of services to its clients, and building a strong infrastructure from these services enables smooth deployment of applications across the globe. The following project showcases such an infrastructure, built from AWS cloud services and versioned through Terraform.

So first, let us understand: what is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied.

In short, Terraform is a powerful tool that lets you manage any cloud infrastructure, whether private or public, using just a single program.

You can learn more about Terraform in the official documentation.

The task: using a single Terraform program, build a complete AWS infrastructure consisting of a key pair, a security group, an EC2 instance, an EBS volume, an S3 bucket, and a CloudFront distribution.

The best part of this task is that we create all the required infrastructure using just a single program, instead of memorizing the complex commands of each cloud.
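One thing the snippets below assume but never show is the provider configuration, which every Terraform AWS project needs. A minimal sketch (the region and profile name here are assumptions, not taken from the original code):

```hcl
provider "aws" {
  region  = "ap-south-1"   # assumed region; use the one matching your AMI
  profile = "default"      # assumed AWS CLI profile configured via `aws configure`
}
```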

  1. Creating a Key Pair and Security Group
//security group allowing SSH, HTTP and HTTPS protocols


resource "aws_security_group" "task" {
  name        = "taskfw"
  description = "Allow SSH and Port 80"
  vpc_id      = aws_vpc.taskvpc.id


  ingress {
    from_port   = 80
    to_port     =  80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }


  ingress {
    from_port   = 443
    to_port     =  443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }


  ingress {
    from_port   = 22
    to_port     =  22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] 
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "Task1FirewallT"
  }
  depends_on = [
    aws_vpc.taskvpc
  ]
}


//generating a private_key


resource "tls_private_key" "mykey" {
  algorithm = "RSA"
  depends_on = [
    aws_security_group.task
  ]
}


//creating an aws_key_pair


resource "aws_key_pair" "mykey" {
  key_name   = "TaskKey"
  public_key = tls_private_key.mykey.public_key_openssh 
  depends_on = [
    tls_private_key.mykey
  ]
}
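The security group above references aws_vpc.taskvpc, and the instance below uses aws_subnet.tasksub; those resources are defined elsewhere in the repo. A rough sketch of what they would look like (the CIDR ranges here are assumptions):

```hcl
resource "aws_vpc" "taskvpc" {
  cidr_block = "10.0.0.0/16"   # assumed CIDR range
  tags = {
    Name = "taskvpc"
  }
}

resource "aws_subnet" "tasksub" {
  vpc_id                  = aws_vpc.taskvpc.id
  cidr_block              = "10.0.1.0/24"   # assumed CIDR range
  map_public_ip_on_launch = true            # so the instance gets a public IP for SSH
}
```

A working setup would also need an internet gateway and a route table associated with the subnet for the instance to be reachable over SSH; those are omitted here for brevity.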


2. Launching an EC2 instance and attaching the key pair and security group

resource "aws_instance" "webserver" {
  ami           = "ami-0447a12f28fddb066"
  instance_type = "t2.micro"
  key_name = aws_key_pair.mykey.key_name
  vpc_security_group_ids = [aws_security_group.task.id]
  subnet_id = aws_subnet.tasksub.id 


  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = tls_private_key.mykey.private_key_pem
    host     = self.public_ip   # use self to avoid a cycle on the resource's own attributes
  }


  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd  php git -y",
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd",
    ]
  }


  tags = {
    Name = "mywebos"
  }
  depends_on = [
    aws_key_pair.mykey
  ]
}


//output public ip


output "publicip" {
  value = aws_instance.webserver.public_ip
}


3. Creating an EBS volume, attaching it to the EC2 instance, and copying the GitHub repo into the /var/www/html folder of the instance

resource "aws_volume_attachment" "ebsattach" {
  device_name = "/dev/sdf"
  volume_id   = aws_ebs_volume.myebs.id
  instance_id = aws_instance.webserver.id
  force_detach = true


  connection {
    type     = "ssh"
    user     = "ec2-user"
    private_key = tls_private_key.mykey.private_key_pem
    host     = aws_instance.webserver.public_ip
  }


provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4  /dev/xvdf",
      "sudo mount  /dev/xvdf  /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/akshayauti/terraform.git /var/www/html/"
    ]
  }


  depends_on = [
    aws_instance.webserver
  ]
}
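The attachment above references aws_ebs_volume.myebs, which is not shown in the article; it would be defined roughly as below. The size is an assumption, and the availability zone must match the instance's:

```hcl
resource "aws_ebs_volume" "myebs" {
  availability_zone = aws_instance.webserver.availability_zone  # must match the instance's AZ
  size              = 1                                         # size in GiB; assumed value
  tags = {
    Name = "taskebs"
  }
}
```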

4. Creating an S3 bucket and uploading my image into it

resource "aws_s3_bucket" "taskbucket" {
  bucket = "taskenvbucket123"
  acl    = "public-read"


  tags = {
    Name        = "My Task bucket"
  }
}


//uploading image to s3 bucket


resource "aws_s3_bucket_object" "object" {
  bucket = aws_s3_bucket.taskbucket.bucket
  key    = "akshay.jpg"
  source = "E:/Photos/Pictures/Ak.jpg"
  content_type = "image/jpeg"
  acl = "public-read"
  depends_on = [
    aws_s3_bucket.taskbucket
  ]
}

5. Finally, creating a CloudFront URL for the image in the S3 bucket and adding it to the index.php file

locals {
  s3_origin_id = "new_s3_task"
}


resource "aws_cloudfront_distribution" "task_distribution" {
  origin {
    domain_name = aws_s3_bucket.taskbucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }


  enabled             = true
  is_ipv6_enabled     = true


  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id


    forwarded_values {
      query_string = false


      cookies {
        forward = "none"
      }
    }


    viewer_protocol_policy = "allow-all"
  }


  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }


  tags = {
    Environment = "task"
  }


  viewer_certificate {
    cloudfront_default_certificate = true
  }
  depends_on = [
    aws_s3_bucket_object.object
  ]
}



resource "null_resource" "image" {
  depends_on = [
    aws_instance.webserver, aws_cloudfront_distribution.task_distribution, aws_volume_attachment.ebsattach
  ]
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.mykey.private_key_pem
    host        = aws_instance.webserver.public_ip
  }
  provisioner "remote-exec" {
    inline = [
      "echo \"<img src='https://${aws_cloudfront_distribution.task_distribution.domain_name}/akshay.jpg' width='600' height='400'>\" | sudo tee -a /var/www/html/index.php"
    ]
  }
}

And finally, if everything goes well, we open the hosted application on our local machine using the public IP of the instance.

resource "null_resource" "chrome" {
  depends_on = [
    null_resource.image
  ]
  provisioner "local-exec" {
    command = "start chrome ${aws_instance.webserver.public_ip}"
  }
}

output "cd__dns" {
  value = aws_cloudfront_distribution.task_distribution.domain_name
}


Thus, the entire Cloud infrastructure has been configured.

Yes! Everything applied correctly, and we have hosted the web application on AWS infrastructure. 😊


Execution Process:

Here the execution process is very simple, unlike the build steps of other languages.

We initialize Terraform with:

terraform init

After that, to create all the resources, you run the following command:

terraform apply --auto-approve

This command is powerful: one run and the whole infrastructure is ready for you.

You can destroy the entire created infrastructure with just one command:

terraform destroy

You can find the entire code in my GitHub repo: https://github.com/akshayauti/terraform.git

I hope the above article proved to be useful to the readers. In case of any queries or suggestions, DM me or comment below.

Thank You for reading.
