Hybrid-Multi-Cloud-Task2
In this article we are going to discuss how to integrate the AWS EC2, EFS, S3, and CloudFront services using Terraform code.
Pre-Requisite
- AWS IAM API keys (access key and secret key) with permissions to create and delete the AWS resources.
- Terraform should be installed on the local VM.
Problem statement
- Create/launch an application using Terraform:
- 1. Create a security group which allows port 80.
- 2. Launch an EC2 instance.
- 3. In this EC2 instance use an existing or provided key and the security group which we have created in step 1.
- 4. Launch one volume using the EFS service and attach it in your VPC, then mount that volume onto /var/www/html.
- 5. The developer has uploaded the code into a GitHub repo; the repo also has some images.
- 6. Copy the GitHub repo code into /var/www/html.
- 7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.
- 8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
AWS EFS
Amazon Elastic File System (EFS) is a simple, scalable, fully managed, elastic NFS file system for use with AWS Cloud services and on-premises resources. Unlike EBS, which is block storage attached to a single instance, EFS can be mounted by many instances at once over NFS, which is why this task uses it for /var/www/html.
_______________________________________________________
Solution
I have created a folder named terraform and, inside it, a file named project6.tf to hold all the Terraform code.
Step-1 - create key pair
In the below screenshot I have written code to create a key pair. Before creating the key, the things that we have to set up are:
- provider - A provider is responsible for understanding API interactions and exposing resources. Providers are generally IaaS (e.g. Alibaba Cloud, AWS, GCP, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS services (e.g. Terraform Cloud, DNSimple, Cloudflare). Here my provider is aws.
- region - This is the AWS region; I am using ap-south-1 (Mumbai).
- profile - This is the AWS profile name as set in the shared credentials file, which contains the AWS Access Key ID and AWS Secret Access Key.
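Putting those three settings together, the provider block might look like this (the profile name myprofile is an assumption; use whatever profile you created with aws configure):

```hcl
# Configure the AWS provider: the region to deploy into and the
# credentials profile from the shared credentials file (~/.aws/credentials).
provider "aws" {
  region  = "ap-south-1"
  profile = "myprofile"   # illustrative profile name
}
```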
key pair
- For creating the key I am using the key_pair module, a Terraform module for automatically generating an SSH key pair, or importing an existing public key, into AWS.
- And my key name is key123.
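The article uses the key_pair module; as a sketch of the same idea without the module, the tls provider can generate the key locally and aws_key_pair can register it (the resource name webkey is illustrative):

```hcl
# Generate an RSA key pair locally with the tls provider...
resource "tls_private_key" "webkey" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# ...and register its public half in AWS under the name key123.
resource "aws_key_pair" "key123" {
  key_name   = "key123"
  public_key = tls_private_key.webkey.public_key_openssh
}
```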
Step-2 - Security group
- In the below screenshot I have written code to create a security group named web-security.
- ingress - specifies an inbound rule for a security group. An inbound rule permits instances to receive traffic from the specified IPv4 or IPv6 CIDR address range, or from the instances associated with the specified security group.
- I have set 3 inbound rules:
- 1. for HTTP, which works on port no 80 and the TCP protocol
- 2. for SSH, which works on port no 22 and the TCP protocol
- 3. for NFS, which works on port no 2049 and the TCP protocol
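The three inbound rules above can be expressed roughly as follows (the open 0.0.0.0/0 CIDR ranges are illustrative; the screenshot may use narrower ones):

```hcl
resource "aws_security_group" "web_security" {
  name        = "web-security"
  description = "Allow HTTP, SSH and NFS inbound traffic"

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "NFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```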
Step-3 - Launch instance
- Here I have written code to launch an instance using the previously created key pair and security group.
- And I am using an Amazon Linux AMI image.
- And I am installing some software (httpd, php, and git) and starting the httpd service using some basic Linux commands.
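A sketch of that step (the AMI id is a placeholder, and the key file path key123.pem is an assumption about where the private key was saved):

```hcl
resource "aws_instance" "instance1" {
  ami             = "ami-xxxxxxxxxxxxxxxxx"   # replace with an Amazon Linux AMI id for ap-south-1
  instance_type   = "t2.micro"
  key_name        = "key123"
  security_groups = ["web-security"]

  # SSH connection used by the provisioner below.
  connection {
    type        = "ssh"
    user        = "ec2-user"
    host        = self.public_ip
    private_key = file("key123.pem")   # local copy of the private key (assumed path)
  }

  # Install the web stack and start httpd.
  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl start httpd",
      "sudo systemctl enable httpd",
    ]
  }

  tags = {
    Name = "instance1"
  }
}
```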
Step-4 - create efs volume
Amazon EFS provides the ease of use, scale, performance, and consistency needed for machine learning and big data analytics workloads. Data scientists can use EFS to create personalized environments, with home directories storing notebook files, training data, and model artifacts.
Step-5 - Mount efs volume
- I am mounting the previously created EFS volume onto the /var/www/html folder.
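Steps 4 and 5 together might be sketched like this (the subnet id is a placeholder, and the resource names, such as aws_instance.instance1, are assumptions about how the rest of the code names things):

```hcl
# Create the EFS file system...
resource "aws_efs_file_system" "myefs" {
  creation_token = "myefs"
  tags = {
    Name = "myefs"
  }
}

# ...expose it inside the VPC via a mount target (NFS, port 2049)...
resource "aws_efs_mount_target" "efs_mount" {
  file_system_id  = aws_efs_file_system.myefs.id
  subnet_id       = "subnet-xxxxxxxx"                       # subnet of the instance (placeholder)
  security_groups = [aws_security_group.web_security.id]    # the web-security group from step 2
}

# ...and mount it over /var/www/html on the instance.
resource "null_resource" "mount_efs" {
  depends_on = [aws_efs_mount_target.efs_mount]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    host        = aws_instance.instance1.public_ip
    private_key = file("key123.pem")   # assumed path
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install amazon-efs-utils -y",
      "sudo mount -t efs ${aws_efs_file_system.myefs.id}:/ /var/www/html",
    ]
  }
}
```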
Step-6 - deploy web content to /var/www/html
- First I connect over SSH; I am able to do this because in the security group I have set an inbound rule for SSH, which works on port no 22.
- I have already installed git at the time of launching the instance, so I can easily clone all the webpage data from GitHub into the folder /var/www/html.
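A minimal sketch of the clone step, again via remote-exec over SSH (the repo URL is the one given later in the article; resource and path names are assumptions):

```hcl
# Clone the web content from GitHub into the (EFS-backed) document root.
resource "null_resource" "deploy_code" {
  connection {
    type        = "ssh"
    user        = "ec2-user"
    host        = aws_instance.instance1.public_ip
    private_key = file("key123.pem")   # assumed path
  }

  provisioner "remote-exec" {
    inline = [
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/Sulekha02112001/terra.g /var/www/html/",
    ]
  }
}
```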
Step-7 - S3 bucket
- One thing to note is the bucket name: it must be globally unique across all of AWS, not just within the region.
- And to upload data into the S3 bucket, first I clone the data from GitHub to my local system using the command
- git clone https://github.com/Sulekha02112001/terra.g image-folder
- And then I upload the image terraform.png into the S3 bucket.
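Those two resources might look like this (the inline acl argument works on older AWS provider versions; provider v4+ moved it to a separate aws_s3_bucket_acl resource):

```hcl
# Create the bucket and make it public-readable.
resource "aws_s3_bucket" "sulekha123" {
  bucket = "sulekha123"
  acl    = "public-read"
}

# Upload the image that was cloned from GitHub into image-folder.
resource "aws_s3_bucket_object" "image" {
  bucket = aws_s3_bucket.sulekha123.bucket
  key    = "terraform.png"
  source = "image-folder/terraform.png"
  acl    = "public-read"
}
```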
Step-8 - Cloud Front
- The purpose of creating a CloudFront distribution: CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.
- In this code only some countries are on the whitelist, so only they can access my images.
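A sketch of the distribution with a geo-restriction whitelist (the resource name image_cdn and the country codes are illustrative; the article's screenshot may whitelist different countries):

```hcl
resource "aws_cloudfront_distribution" "image_cdn" {
  enabled = true

  # Serve objects from the S3 bucket created in step 7.
  origin {
    domain_name = aws_s3_bucket.sulekha123.bucket_regional_domain_name
    origin_id   = "s3-images"
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "s3-images"
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  # Only whitelisted countries can fetch the images.
  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["IN", "US"]   # illustrative country codes
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
```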
Step-9 - Deploy Cloud front url in website
- Here, in the output section, I am printing the CloudFront URL.
- And I am deploying this URL, via SSH, into the webpage sulekha.html that is on my instance in the folder /var/www/html.
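Assuming the distribution resource is named image_cdn (as in the earlier assumption), the output and the page update could be sketched as:

```hcl
# Print the distribution's domain name after apply...
output "cloudfront_url" {
  value = aws_cloudfront_distribution.image_cdn.domain_name
}

# ...and append an <img> tag pointing at CloudFront to the page over SSH.
resource "null_resource" "update_page" {
  connection {
    type        = "ssh"
    user        = "ec2-user"
    host        = aws_instance.instance1.public_ip
    private_key = file("key123.pem")   # assumed path
  }

  provisioner "remote-exec" {
    inline = [
      "sudo bash -c \"echo '<img src=https://${aws_cloudfront_distribution.image_cdn.domain_name}/terraform.png>' >> /var/www/html/sulekha.html\"",
    ]
  }
}
```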
Step-10 - Access website
As soon as the entire infrastructure is created, Chrome will open with the URL of my webpage, i.e. the public IP of my instance.
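Opening the browser automatically can be done with a local-exec provisioner; a sketch (the chrome command assumes Chrome is on the PATH, which varies by OS, and the depends_on target should be whichever resource finishes last in your code):

```hcl
# Open the site locally once everything is up.
resource "null_resource" "open_site" {
  depends_on = [aws_instance.instance1]   # in practice, depend on the last resource created

  provisioner "local-exec" {
    command = "chrome ${aws_instance.instance1.public_ip}"
  }
}
```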
The code is finished.
Now I am going to run this code using some commands:
terraform init
The terraform init command is used to initialize a working directory containing Terraform configuration files and to download the required provider plugins.
terraform validate
To check whether the syntax of the code is valid.
terraform plan
The terraform plan command is used to create an execution plan, showing what Terraform will change.
terraform apply -auto-approve
To run the code; -auto-approve applies the plan without asking for interactive confirmation.
As soon as all the setup is ready, Chrome will start and launch the webpage automatically.
Output of all the setup that we have created using terraform code
1. key name:- key123
2. security group name:- web-security
3. instance name:- instance1
4. EFS volume name:- myefs
5. EFS mount
6. S3 bucket name:- sulekha123
7. CloudFront distribution
And here we can see all the files and code that we have deployed into the /var/www/html folder from GitHub.
Congratulations sulekha