Task-1 || AWS with Terraform
Amazon Web Services is a subsidiary of Amazon that provides on-demand cloud computing platforms and APIs to individuals, companies, and governments, on a metered pay-as-you-go basis.
Terraform is an open-source infrastructure-as-code software tool created by HashiCorp. It enables users to define and provision datacenter infrastructure using a high-level configuration language known as HashiCorp Configuration Language (HCL), or optionally JSON. HCL is a declarative language.
OBJECTIVE:-
-> We have to create/launch a web application using Terraform.
1. Create a key pair and a security group that allows port 80.
2. Launch an EC2 instance.
3. In this EC2 instance, use the key and security group created in step 1.
4. Create one EBS volume and mount it on /var/www/html.
5. The developer has uploaded the code to a GitHub repo, which also contains some images.
6. Copy the GitHub repo code into /var/www/html.
7. Create an S3 bucket, copy/deploy the images from the GitHub repo into it, and change their permission to publicly readable.
8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.
SOLUTION:-
This task can be done with different approaches and styles; below we build one Terraform configuration for it.
HashiCorp Configuration Language (HCL), the language Terraform uses, is declarative rather than imperative: Terraform does not execute the file from top to bottom, but builds a dependency graph and may create unrelated resources in any order. Where we need an explicit ordering, we use the "depends_on" meta-argument.
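For instance, a minimal sketch of forcing an ordering (the resource name wait_for_bucket is my own; the bucket is the one we create later in this article):
resource "null_resource" "wait_for_bucket" {
  # Nothing here references the bucket, but depends_on still makes
  # Terraform create the bucket before this resource.
  depends_on = [aws_s3_bucket.bucket1]
}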
--> Creating a key pair
A key pair is a combination of two keys, i.e. a public key and a private key. We need a key pair to connect to our instances over SSH.
provider "aws" {
region = "ap-south-1"
profile = "Punit"
}
resource "aws_key_pair" "keygen" {
key_name = "deployer-key"
public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQD3F6tyPEFEzV0LX3X8BsXdMsQz1x2cEikKDEY0aIj41qgxMCP/iteneqXSIFZBp5vizPvaoIR3Um9xK7PGoW8giupGn+EPuxIA4cDM4vzOqOkiMPhz5XK0whEjkVzTo4+S0puvDZuwIsdiW9mxhJc7tgBNL0cYlWSYVkz4G/fslNfRPW5mYAM49f4fhtxPb5ok4Q2Lg9dPKVHO/Bgeu5woMc7RY0p1ej6D4CKFE6lymSDJpW0YHX/wqE9+cfEauh7xZcG0q9t2ta6F6fmX0agvpFyZo8aFbXeUBr7osSCJNgvavWbM/06niWrOvYX2xwWdhXmXSrbX8ZbabVohBK41 email@example.com"
--> Creating a security group
A security group acts like a virtual firewall: it controls the incoming and outgoing traffic for your instances. For each security group you define ingress rules for traffic coming into the instance and egress rules for traffic leaving it.
resource "aws_security_group" "allow_http" {
name = "allow_http"
description = "Allows http and ssh"
vpc_id = "${aws_vpc.main.id}"
ingress {
description = "HTTP allow"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
description = "ssh"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "allow_HTTP"
}
}
--> Creating an EBS volume
Amazon Elastic Block Store (EBS) is a block-level storage service designed to work with Amazon Elastic Compute Cloud (EC2). EBS provides a range of options that allow you to optimize storage performance and cost for your workload.
resource "aws_ebs_volume" "ebs_volume1" {
availability_zone = "${aws_instance.instance_ec2.availability_zone}"
size = 1
tags = {
Name = "ebs_volume1"
}
}
--> Creating an EC2 instance
An EC2 instance is a virtual server in Amazon's Elastic Compute Cloud for running applications on the Amazon Web Services infrastructure. Amazon EC2 provides different instance types so you can choose the CPU, memory, storage, and networking capacity you need to run your applications.
resource "aws_instance" "instance_ec2" {
depends_on = [aws_key_pair.keygen,
aws_security_group.allow_http,
]
ami = "ami-0447a12f28fddb066"
instance_type = "t2.micro"
key_name = "me"
security_groups = ["allow_http"]
tags = {
Name = "OS1"
}
connection {
type = "ssh"
user = "ec2-user"
private_key = file(C:/Users/punit/Documents/task/keys/myprivatekey.pem")
host = "${aws_instance.instance_ec2.public_ip}"
}
--> Creating S3 Bucket
An Amazon S3 bucket is a public cloud storage resource available in AWS's Simple Storage Service (S3). We can store objects in it.
resource "aws_s3_bucket" "bucket1" {
bucket = "uniquenessisanillusion"
force_destroy = true
acl = "public-read"
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::uniquenessisanillusion/*"
}
}
}
POLICY
--> Putting an object in the bucket
resource "aws_s3_bucket_object" "object" {
bucket = "uniquenessisanillusion"
key = "photo.jpg"
source = "C:/Users/punit/Desktop/image/img.jpg"
etag = "C:/Users/punit/Desktop/image/img.jpg"
depends_on = [aws_s3_bucket.bucket1,
]
}
output "test1" {
value = "aws_security_grp.allow_http"
}
--> Attaching an EBS volume
resource "aws_volume_attachment" "ebs_att" {
depends_on = [aws_ebs_volume.ebs_volume1,
aws_instance.instance_ec2,
]
device_name = "/dev/sdh"
volume_id = "${aws_ebs_volume.ebs_volume1.id}"
instance_id = "${aws_instance.instance_ec2.id}"
}
--> Using null_resource
A null resource implements the standard resource lifecycle but takes no further action of its own; here it serves as a hook for running provisioners after the volume is attached.
resource "null_resource" "null1" {
depends_on = [aws_volume_attachment.ebs_att,
]
connection {
type = "ssh"
user = "ec2-user"
private_key = file("C:/Users/punit/Documents/task/keys/myprivatekey.pem")
host = "${aws_instance.instance_ec2.public_ip}"
}
provisioner "remote-exec" {
inline = [
"sudo mkfs.ext4 /dev/xvdh",
"sudo mount /dev/xvdh /var/www/html",
"sudo rm -rf /var/www/html/*",
"sudo git clone https://github.com/vimallinuxworld13/multicloud.git /var/www/html/"
]
}
}
--> Attaching the cloudfront with S3
Amazon CloudFront is a Content Delivery Network (CDN) that delivers data, videos, applications, and APIs to customers globally, with low latency, high security, and high transfer speeds.
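The distribution below needs an origin access identity. The original snippet hard-coded a placeholder ID from the provider docs; creating the identity in Terraform keeps the configuration self-contained (the resource name oai is my own):
resource "aws_cloudfront_origin_access_identity" "oai" {
  comment = "OAI for the uniquenessisanillusion bucket"
}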
resource "aws_cloudfront_distribution" "s3_distribution" {
origin {
domain_name = "${aws_s3_bucket.bucket1.bucket_regional_domain_name}"
origin_id = "s3-bucket1-bucket"
s3_origin_config {
origin_access_identity = "origin-access-identity/cloudfront/ABCDEFG1234567"
}
}
enabled = true
is_ipv6_enabled = true
comment = "Some comment"
default_root_object = "index.html"
logging_config {
include_cookies = false
bucket = "mylogs.s3.amazonaws.com"
prefix = "myprefix"
}
aliases = ["mysite.example.com", "yoursite.example.com"]
default_cache_behavior {
allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
cached_methods = ["GET", "HEAD"]
target_origin_id = "s3-bucket1-bucket"
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
viewer_protocol_policy = "allow-all"
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
}
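Step 8 also asks us to push the CloudFront URL into the code under /var/www/html. A sketch of one way to do it; the index.html target and the appended <img> tag are assumptions about the cloned repo, not something the repo is known to contain:
resource "null_resource" "update_code" {
  depends_on = [
    aws_cloudfront_distribution.s3_distribution,
    null_resource.null1,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/punit/Documents/task/keys/myprivatekey.pem")
    host        = aws_instance.instance_ec2.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      # Append an image tag that loads photo.jpg through CloudFront.
      "echo '<img src=\"https://${aws_cloudfront_distribution.s3_distribution.domain_name}/photo.jpg\">' | sudo tee -a /var/www/html/index.html",
    ]
  }
}

output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}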
The above is one approach to completing the given task; many other styles and approaches would work just as well.
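To run the configuration end to end, the standard Terraform workflow applies:
terraform init       # download the AWS, TLS, and null providers
terraform validate   # catch syntax and reference errors
terraform apply      # review the plan, then build everything on approval
terraform destroy    # tear it all down when finished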
THANK YOU FOR READING!