Hybrid-Multi-Cloud-Task2

In this article we are going to discuss how to integrate the AWS EC2, S3, CloudFront, and EFS services using Terraform code.

Prerequisites

  • AWS IAM API keys (access key and secret key) with permissions to create and delete AWS resources.
  • Terraform should be installed on the local VM.

Problem statement

  • Create/launch an application using Terraform:
  • 1. Create a security group which allows port 80.
  • 2. Launch an EC2 instance.
  • 3. For this EC2 instance, use an existing or provided key and the security group which we created in step 1.
  • 4. Launch one volume using the EFS service, attach it in your VPC, then mount that volume onto /var/www/html.
  • 5. The developer has uploaded the code to a GitHub repo, which also contains some images.
  • 6. Copy the GitHub repo code into /var/www/html.
  • 7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public readable.
  • 8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

AWS EFS

Amazon Elastic File System (EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS cloud services and on-premises resources. It grows and shrinks automatically as you add and remove files, with no need to provision capacity in advance.

_______________________________________________________

Solution

I have created a folder named terraform and, inside it, a file named project6.tf to hold all the Terraform code.


Step-1 - Create key pair

Here I have written code to create a key pair. Before creating the key, there are a few things to set up:

  • provider - A provider is responsible for understanding API interactions and exposing resources. Providers are generally IaaS (e.g. Alibaba Cloud, AWS, GCP, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS services (e.g. Terraform Cloud, DNSimple, Cloudflare). Here my provider is AWS.
  • region - The AWS region; I am using ap-south-1.
  • profile - The AWS profile name as set in the shared credentials file. This profile contains the AWS Access Key ID and AWS Secret Access Key.


  • For creating the key I am using the key_pair module - a Terraform module for automatically generating an SSH key pair, or importing an existing public key, into AWS.
  • My key name is key123.
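Since the original code was shown only in screenshots, here is a minimal sketch of what the provider block and key-pair code might look like. Generating the key with tls_private_key and registering it with aws_key_pair is one common pattern; the profile name "default" and the resource name web_key are assumptions.

```hcl
# Provider setup: region ap-south-1, credentials taken from a named
# profile in the shared credentials file ("default" is an assumption).
provider "aws" {
  region  = "ap-south-1"
  profile = "default"
}

# Generate an RSA key pair locally ...
resource "tls_private_key" "web_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# ... and register its public half with AWS under the name key123.
resource "aws_key_pair" "key123" {
  key_name   = "key123"
  public_key = tls_private_key.web_key.public_key_openssh
}
```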

Step-2 - Security group

  • Here I have written code to create a security group named web-security.
  • ingress - specifies an inbound rule for a security group. An inbound rule permits instances to receive traffic from the specified IPv4 or IPv6 CIDR address range, or from the instances associated with the specified security group.
  • I have set three inbound rules:
  • 1. HTTP, which works on port no 80 and the TCP protocol
  • 2. SSH, which works on port no 22 and the TCP protocol
  • 3. NFS, which works on port no 2049 and the TCP protocol
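The security-group code might look roughly like this. The three ingress rules follow the list above; the open CIDR ranges and the egress rule are assumptions.

```hcl
resource "aws_security_group" "web_security" {
  name        = "web-security"
  description = "Allow HTTP, SSH and NFS"

  # Inbound rule 1: HTTP on port 80
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Inbound rule 2: SSH on port 22
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Inbound rule 3: NFS on port 2049 (needed later for the EFS mount)
  ingress {
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic (assumption)
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```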

Step-3 - Launch instance

  • Here I have written code to launch an instance using the previously created key pair and security group.
  • I am using an Amazon Linux AMI.
  • I am installing some software (httpd, php, and git) and starting the httpd service using basic Linux commands.
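A sketch of what the instance resource could look like, assuming the key pair and security group are the resources aws_key_pair.key123, tls_private_key.web_key, and aws_security_group.web_security from the earlier steps. The AMI ID and instance type are assumptions.

```hcl
resource "aws_instance" "instance1" {
  ami             = "ami-0447a12f28fddb066" # an Amazon Linux 2 AMI in ap-south-1 (assumption)
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.key123.key_name
  security_groups = [aws_security_group.web_security.name]

  # SSH connection used by the provisioner below
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.web_key.private_key_pem
    host        = self.public_ip
  }

  # Install httpd, php and git, then start and enable the web server
  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y httpd php git",
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "instance1"
  }
}
```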

Step-4 - Create EFS volume

Amazon EFS provides the ease of use, scale, performance, and consistency needed for machine learning and big data analytics workloads. Data scientists can use EFS to create personalized environments, with home directories storing notebook files, training data, and model artifacts.

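The EFS file system and its mount target could be sketched as follows. Placing the mount target in the instance's subnet and reusing the web-security group (which already allows NFS on port 2049) are assumptions.

```hcl
resource "aws_efs_file_system" "myefs" {
  creation_token = "myefs"

  tags = {
    Name = "myefs"
  }
}

# Expose the file system inside the VPC via a mount target
resource "aws_efs_mount_target" "efs_mount" {
  file_system_id  = aws_efs_file_system.myefs.id
  subnet_id       = aws_instance.instance1.subnet_id
  security_groups = [aws_security_group.web_security.id]
}
```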

Step-5 - Mount EFS volume

  • I am mounting the previously created EFS volume to the /var/www/html folder.
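The mount step can be sketched with a null_resource that SSHes into the instance, assuming the resources named in the earlier steps; installing amazon-efs-utils and using the efs mount helper is one common approach.

```hcl
resource "null_resource" "mount_efs" {
  depends_on = [aws_efs_mount_target.efs_mount]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.web_key.private_key_pem
    host        = aws_instance.instance1.public_ip
  }

  # Install the EFS mount helper, then mount the file system
  # onto the web server's document root
  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y amazon-efs-utils",
      "sudo mount -t efs ${aws_efs_file_system.myefs.id}:/ /var/www/html",
    ]
  }
}
```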

Step-6 - Deploy web content to /var/www/html

  • First I connect over SSH; I am able to do so because the security group has an inbound rule for SSH on port no 22.
  • I already installed git at the time of launching the instance, so I can easily clone all the webpage data from GitHub into the folder /var/www/html.
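The clone step can be sketched as another null_resource, assuming the mount step is the resource null_resource.mount_efs from above. The repository URL is copied as printed later in the article; clearing the directory before cloning is an assumption (git refuses to clone into a non-empty directory).

```hcl
resource "null_resource" "deploy_code" {
  depends_on = [null_resource.mount_efs]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.web_key.private_key_pem
    host        = aws_instance.instance1.public_ip
  }

  # Empty the document root, then clone the webpage code into it
  provisioner "remote-exec" {
    inline = [
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/Sulekha02112001/terra.g /var/www/html/",
    ]
  }
}
```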

Step-7 - S3 bucket

  • One thing to notice is the bucket name: it must be globally unique across all of AWS.
  • To upload data to the S3 bucket, I first clone the data from GitHub onto my local system using the command
  • git clone https://github.com/Sulekha02112001/terra.g image-folder
  • Then I upload the image terraform.png to the S3 bucket.
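A sketch of the bucket and object upload. Note that aws_s3_bucket_object is the resource name in the AWS provider versions current when this setup was written (newer provider versions call it aws_s3_object); the local source path is an assumption based on the clone command above.

```hcl
resource "aws_s3_bucket" "sulekha123" {
  bucket = "sulekha123" # must be globally unique
  acl    = "public-read"
}

# Upload the image cloned from GitHub and make it publicly readable
resource "aws_s3_bucket_object" "image" {
  bucket = aws_s3_bucket.sulekha123.bucket
  key    = "terraform.png"
  source = "image-folder/terraform.png"
  acl    = "public-read"
}
```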

Step-8 - CloudFront

  • The purpose of creating a CloudFront distribution: CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.
  • In this code only some countries are whitelisted, so only they can access my images.
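The distribution could be sketched as below, using the bucket from Step-7 as the origin. The whitelisted country codes are assumptions, as are the cache-behavior details; forwarded_values is required in the provider versions of this era.

```hcl
resource "aws_cloudfront_distribution" "cdn" {
  enabled = true

  # Serve content from the S3 bucket that holds the images
  origin {
    domain_name = aws_s3_bucket.sulekha123.bucket_regional_domain_name
    origin_id   = "s3-images"
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "s3-images"
    viewer_protocol_policy = "allow-all"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }
  }

  # Geo restriction: only whitelisted countries can fetch the images
  # (the country list here is an assumption)
  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["IN", "US", "CA"]
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```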

Step-9 - Deploy CloudFront URL in the website

  • Here, in the output section, I am printing the CloudFront URL.
  • I am also deploying this URL into the webpage sulekha.html, which sits on my instance in the folder /var/www/html, over SSH.
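The output block and the URL-injection step could be sketched like this, assuming the distribution is the resource aws_cloudfront_distribution.cdn from Step-8; appending an img tag to sulekha.html is one simple way to deploy the URL.

```hcl
output "cloudfront_url" {
  value = aws_cloudfront_distribution.cdn.domain_name
}

# Append an <img> tag pointing at the CloudFront domain to the page
resource "null_resource" "update_page" {
  depends_on = [aws_cloudfront_distribution.cdn]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.web_key.private_key_pem
    host        = aws_instance.instance1.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "echo \"<img src='https://${aws_cloudfront_distribution.cdn.domain_name}/terraform.png'>\" | sudo tee -a /var/www/html/sulekha.html",
    ]
  }
}
```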

Step-10 - Access website

As soon as the entire infrastructure has been created, Chrome starts with the URL of my webpage, i.e. the IP of my instance.

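This last step is typically a local-exec provisioner that opens the browser on the machine running Terraform. The sketch below assumes a Linux host where the launcher command is google-chrome (on Windows it would be something like start chrome, on macOS open -a "Google Chrome").

```hcl
resource "null_resource" "open_site" {
  depends_on = [aws_instance.instance1]

  # Open the instance's public IP in Chrome on the local machine
  provisioner "local-exec" {
    command = "google-chrome http://${aws_instance.instance1.public_ip}/"
  }
}
```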

The code is finished.

Now I am going to run this code using a few commands.

terraform init

The terraform init command initializes a working directory containing Terraform configuration files.

terraform validate

Checks whether the syntax of the configuration is valid.

terraform plan

The terraform plan command creates an execution plan.

terraform apply -auto-approve

Applies the plan without asking for interactive approval.


As soon as the whole setup is ready, Chrome starts and launches the webpage automatically.


Output of the whole setup that we have created using the Terraform code:

1. key name: key123

2. security group name: web-security

3. instance name: instance1

4. EFS volume name: myefs

5. EFS mount

6. S3 bucket name: sulekha123

7. CloudFront distribution

And here we can see all the files and code that we have deployed into the /var/www/html folder from GitHub.

Everything is done!

Thank you for reading!










