Task 2 || Updated || Using EFS instead of EBS to launch an application on AWS with Terraform
This is an updated implementation of the previous infrastructure on AWS Cloud. Earlier we used an EBS volume; this time we will use an EFS (Elastic File System) volume.
Do check out version 1 of this infrastructure, in which AWS Elastic Block Storage was used.
Storage as a Service (SaaS)
There are three types of storage:
- Object storage
- Block storage
- File system storage
AWS Cloud provides these under its Storage as a Service (SaaS) offerings, through the following services:
Block Storage : Elastic Block Storage (EBS).
An EBS volume can be attached to only one instance at a time, and the storage must be partitioned, formatted, and mounted before the instance can use it. EBS acts as a local file system per instance.
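For context, the single-instance attachment used in version 1 can be sketched roughly as below. This is a minimal, hypothetical sketch: the resource names, volume size, and device name are illustrative placeholders, not the exact code from version 1.

```hcl
// Hypothetical sketch of the version 1 approach: one EBS volume, one instance
resource "aws_ebs_volume" "web_volume" {
  availability_zone = aws_instance.web.availability_zone // must be the same AZ as the instance
  size              = 1                                  // size in GiB
}

// An EBS volume supports only one attachment at a time
resource "aws_volume_attachment" "web_attach" {
  device_name = "/dev/sdf"
  volume_id   = aws_ebs_volume.web_volume.id
  instance_id = aws_instance.web.id
}
```

Even after attaching, the volume still has to be partitioned, formatted (for example with mkfs.ext4), and mounted inside the instance; that per-instance overhead is what EFS avoids.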
Object Storage : Simple Storage Service (S3).
S3 is permanent storage for objects that cannot be edited in place: we can read an object or replace it entirely, but partially updating it is not possible with S3.
File System : Elastic File System (EFS).
An Elastic File System can be mounted on multiple operating systems at the same time. The EFS volume is mounted over a folder inside the instance, and whenever data in that folder is edited, the change is automatically reflected in the EFS storage. The protocol used for this transfer is NFS (Network File System): the instance folder is the client side and the EFS storage is the server side, with the data maintained over the network. EFS is a centralized file system storage.
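Concretely, such an NFS mount is just an entry in the instance's /etc/fstab. A sketch of one entry, with a placeholder file-system DNS name (the real one is generated by the EFS service), could look like:

```
fs-abcd1234.efs.ap-south-1.amazonaws.com:/  /var/www/html  nfs4  defaults,_netdev  0  0
```

The left side is the server-side EFS export, /var/www/html is the client-side folder, nfs4 is the file-system type, and the _netdev option tells the OS to wait for the network before mounting.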
Now let's discuss the steps to be followed in this particular task!
- The content to be used for the website is uploaded to GitHub. (Here I am uploading a single image as the website content.)
- Create an S3 bucket on AWS.
- Pull the content from GitHub and upload it into the S3 bucket.
- Create an AWS CloudFront distribution to access the uploaded content from the S3 bucket. Share the CloudFront link with the developer who will write the webpage code. The developer writes the code and pushes it to GitHub.
- Create a key pair.
- Create a security group with ingress rules for SSH (login), HTTP (webpage access), and port 2049 (NFS, for EFS).
- Launch an AWS instance using the above key pair and security group.
- Create an EFS storage.
- Create an EFS mount target in the VPC subnet where the instance is launched.
- Configure the instance by mounting the EFS volume onto the folder /var/www/html, where the webpage code is to be saved for deployment.
Implementation of these steps will be done with the help of Terraform code.
STEP - 1 GITHUB UPLOAD
Follow the same steps as in the previous task, up to the creation of the CloudFront distribution.
STEP-2 CREATE S3 BUCKET
// Connecting to the account through the AWS provider
provider "aws" {
  region  = "ap-south-1"
  profile = "shreya"
}
// Creating the S3 bucket
resource "aws_s3_bucket" "bucket" {
  bucket        = "shreya172"
  acl           = "private"
  force_destroy = true
  versioning {
    enabled = true
  }
}
STEP- 3 PULL CONTENT FROM GITHUB AND UPLOAD INTO THE S3 BUCKET
// Downloading the content from GitHub
resource "null_resource" "local-1" {
  depends_on = [aws_s3_bucket.bucket]
  provisioner "local-exec" {
    command = "git clone https://github.com/shreyaverma8191/task2.git"
  }
}
// Uploading the file to the bucket
resource "aws_s3_bucket_object" "file_upload" {
  depends_on = [aws_s3_bucket.bucket, null_resource.local-1]
  bucket = aws_s3_bucket.bucket.id
  key    = "TASK2.png"
  source = "task2/TASK2.png"
  acl    = "public-read"
}
STEP-4 CREATING CLOUDFRONT DISTRIBUTION FOR S3 AND GETTING THE CLOUDFRONT LINK TO ACCESS THE S3 BUCKET
// Creating the CloudFront distribution
resource "aws_cloudfront_distribution" "distribution" {
  depends_on = [aws_s3_bucket.bucket, null_resource.local-1]
  origin {
    domain_name = aws_s3_bucket.bucket.bucket_regional_domain_name
    origin_id   = "S3-shreya17-id"
    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "match-viewer"
      origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
    }
  }
  enabled = true
  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-shreya17-id"
    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
    viewer_protocol_policy = "allow-all"
    min_ttl     = 0
    default_ttl = 3600
    max_ttl     = 86400
  }
  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }
  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
output "domain-name" {
  value = aws_cloudfront_distribution.distribution.domain_name
}
Let's run our first piece of code for the infrastructure!
- Run terraform init
- Then run terraform validate
- terraform apply -auto-approve
Bucket created!
Following is a simple webpage (index.html) I have made, with the content URL.
index.html
<html>
<body bgcolor="lightpink">
<h1><b><i><u>Welcome to Hybrid-Multi-Cloud Training cum Internship</u></i></b></h1>
<br>
<h3><i>Here is the representation of the whole process of Task 2. Have a look!!!</i></h3>
<br>
<img src="https://d1zdc9j0komz9n.cloudfront.net/zzz.png" width="800" height="500" align="center">
</body>
</html>
Now for the other half of the task.
STEP-5 CREATE KEY PAIR
provider "aws" {
  region  = "ap-south-1"
  profile = "shreya"
}
// Generating the key pair
resource "tls_private_key" "key-pair" {
  algorithm = "RSA"
}
resource "aws_key_pair" "key" {
  depends_on = [tls_private_key.key-pair]
  key_name   = "shreyaT2"
  public_key = tls_private_key.key-pair.public_key_openssh
}
STEP-6 CREATE SECURITY GROUP
// Generating the security group
resource "aws_security_group" "task-security" {
  depends_on  = [aws_key_pair.key]
  name        = "task-security2"
  description = "SSH, HTTP and NFS"
  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    description = "NFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name = "task-security2"
  }
}
STEP-7 LAUNCH AN INSTANCE
// Launching the instance
resource "aws_instance" "task2" {
  depends_on      = [aws_security_group.task-security]
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.key.key_name
  security_groups = [aws_security_group.task-security.name]
  // Connecting to the instance
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.key-pair.private_key_pem
    host        = aws_instance.task2.public_ip
  }
  // Installing the requirements (nfs-utils is needed later to mount EFS over NFS)
  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git nfs-utils -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }
  tags = {
    Name = "task-2"
  }
}
STEP-8 CREATE AN EFS STORAGE
// Launching an EFS storage
resource "aws_efs_file_system" "nfs" {
  depends_on     = [aws_security_group.task-security, aws_instance.task2]
  creation_token = "nfs"
  tags = {
    Name = "nfs"
  }
}
STEP-9 MOUNT EFS ON THE RESPECTIVE VPC SUBNET OF INSTANCE
// Creating the EFS mount target in the instance's VPC subnet
resource "aws_efs_mount_target" "target" {
  depends_on      = [aws_efs_file_system.nfs]
  file_system_id  = aws_efs_file_system.nfs.id
  subnet_id       = aws_instance.task2.subnet_id
  security_groups = [aws_security_group.task-security.id]
}
STEP-10 CONFIGURE THE INSTANCE AND MOUNT EFS TO THE FOLDER AND PULL THE WEBPAGE CODE INTO THAT FOLDER AND THEN SEE THE DEPLOYED WEBPAGE
output "task-instance-ip" {
  value = aws_instance.task2.public_ip
}
// Connecting to the instance again
resource "null_resource" "remote-connect" {
  depends_on = [aws_efs_mount_target.target]
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.key-pair.private_key_pem
    host        = aws_instance.task2.public_ip
  }
  // Making the mount persistent via /etc/fstab, mounting the EFS on the folder, and pulling the code from GitHub
  provisioner "remote-exec" {
    inline = [
      "echo '${aws_efs_file_system.nfs.dns_name}:/ /var/www/html nfs4 defaults,_netdev 0 0' | sudo tee -a /etc/fstab",
      "sudo mount -t nfs4 ${aws_efs_file_system.nfs.dns_name}:/ /var/www/html",
      "sudo git clone https://github.com/shreyaverma8191/task2.git /var/www/html/",
    ]
  }
}
// Connecting to the webserver to see the website ("start chrome" works on Windows)
resource "null_resource" "webpage" {
  depends_on = [null_resource.remote-connect]
  provisioner "local-exec" {
    command = "start chrome ${aws_instance.task2.public_ip}"
  }
}
Now again run the same three commands to apply the code:
- terraform init
- terraform validate
- terraform apply -auto-approve
Our infrastructure is deployed on IP 15.207.88.23. The website can be accessed from anywhere, and the webpage is displayed as follows.
The whole infrastructure is successfully deployed!
Incredible!!