AWS EKS
Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service.
EKS is the best place to run Kubernetes for several reasons. First, you can choose to run your EKS clusters using AWS Fargate, which is serverless compute for containers. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. Second, EKS is deeply integrated with services such as Amazon CloudWatch, Auto Scaling Groups, AWS Identity and Access Management (IAM), and Amazon Virtual Private Cloud (VPC), providing you a seamless experience to monitor, scale, and load-balance your applications. Third, EKS integrates with AWS App Mesh and provides a Kubernetes native experience to consume service mesh features and bring rich observability, traffic controls and security features to applications. Additionally, EKS provides a scalable and highly available control plane that runs across multiple availability zones to eliminate a single point of failure.
EKS runs upstream Kubernetes and is certified Kubernetes conformant so you can leverage all benefits of open source tooling from the community. You can also easily migrate any standard Kubernetes application to EKS without needing to refactor your code.
NODES
Worker machines in Kubernetes are called nodes. Amazon EKS worker nodes run in your AWS account and connect to your cluster's control plane via the cluster API server endpoint. You deploy one or more worker nodes into a node group.
NODE GROUPS
Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.
Task to Perform
Create a multi-node Kubernetes cluster. In that cluster, we will create:
1. PVC (Persistent Volume Claim)
2. PV (Persistent Volume)
3. SC (Storage Class)
4. Deploy a WordPress blogging site
5. Deploy a MySQL database connected to WordPress site
6. Create a load balancer to distribute the load between the pods and for connecting to WordPress site.
7. Create an EFS volume and mount it into the MySQL database to make its data persistent.
8. Create a Fargate cluster on AWS EKS.
Prerequisites
1. Have an AWS account.
2. Create an IAM user with admin role.
3. Install eksctl and kubectl and add their locations to the environment variables.
4. Install AWS CLI
Now let's perform our task.
We can access AWS EKS in three ways:
1. Web UI
2. API
3. CLI
Here, we are going to use AWS EKS with CLI.
We could create our AWS EKS cluster with the "aws eks" command, but that is not a good approach because it is very manual: we would have to create everything ourselves, such as the VPC, node groups, security groups, launching the instances, and installing software on those instances.
Eksctl
So, to make this whole process easy and fast, we will use the "eksctl" command instead. It automates the deployment of the cluster, using the AWS CloudFormation service under the hood. That is why we need to install the 'eksctl' software, and to use its command from the CLI we must add its path to the environment variables.
Kubectl
After creating the cluster we need to work inside it, so we install one more tool named 'kubectl'. As with 'eksctl', add its path to the environment variables so the "kubectl" command works from the CLI.
For convenience, install both 'eksctl' and 'kubectl' in the same folder so you only need to add one path to the environment variables.
After setting the path in the environment variables, open your 'cmd' and type
eksctl version
If the version prints without an error, the software and path are set up successfully.
CLOUDFORMATION
CloudFormation is a service provided by AWS for automation. It integrates with almost all the services AWS provides, which is why the "eksctl" command uses it to create the AWS EKS cluster.
AWS configure
Now, first log in to your AWS account from the CLI by running the "aws configure" command.
aws configure
Enter your access key ID, secret access key, the region you want to work in (here I use ap-south-1), and the output format, such as json or yaml.
Now it’s time to create our AWS EKS cluster.
For that, we write a short configuration file declaring only what we want in the cluster: how many node groups, how many nodes in each group, the key name for SSH access, and so on.
After writing this file, create the cluster with the "eksctl create cluster -f <file name in which the code is written>" command.
NOTE: 1. Don't type the < > symbols when running the command; just write the file name.
2. Save the file with a ".yml" extension.
eksctl create cluster -f cluster.yml
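For reference, a minimal cluster.yml along these lines might look like the sketch below; the cluster name, region, instance types, node counts, and key name are assumptions based on this walkthrough, so adjust them to your setup.

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: shashikantcluster      # cluster name, used later with update-kubeconfig
  region: ap-south-1

nodeGroups:
  - name: ng1
    desiredCapacity: 3
    instanceType: t2.micro
    ssh:
      publicKeyName: mykey     # EC2 key pair used for SSH into the nodes
  - name: ng2
    desiredCapacity: 2
    instanceType: t2.small
    ssh:
      publicKeyName: mykey
```

eksctl reads this file, generates the matching CloudFormation stacks, and brings up the VPC, security groups, and worker nodes for you.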
Now log in to the AWS console in the web UI and go to the EKS service; there you will see your cluster with status 'Creating'.
It may take 15 to 20 minutes or more, so be patient; check back every five minutes with the refresh icon until the cluster status shows 'Active'.
Then open a new CLI terminal and run the command given below.
aws eks update-kubeconfig --name <your cluster name> # Here, my cluster name is “shashikantcluster”
After this, check the EC2 service in your AWS console: EKS has created worker nodes with the instance type you chose.
Once your cluster is deployed successfully, you can check whether any pods are running on your nodes with the command below.
kubectl get pods
You will see "No resources found in default namespace".
Run "kubectl get nodes" to see how many worker nodes there are, and "kubectl get nodes -o wide" for some extra information about them.
To see the namespaces in your cluster, run "kubectl get ns".
kubectl get nodes
kubectl get nodes -o wide
kubectl get ns
To create your own namespace, run the "kubectl create namespace <your namespace name>" command.
Here I use “myns” as name of my namespace.
kubectl create namespace myns
Check that your namespace was created with the "kubectl get ns" command.
kubectl get ns
For creating your namespace as a default namespace run this below command
kubectl config set-context --current --namespace=myns
NOTE: use the namespace name you actually created.
Create EFS Volume for making data persistent in pods
After this setup, go to your AWS web console and open the EFS service. There, create a new EFS volume in the same VPC that eksctl created for you: eksctl creates a VPC for the EKS cluster, all the worker nodes launch in it, and everything you do in EKS runs inside that same VPC.
That is why you must create the EFS volume in that same VPC. Also check the security group of your worker nodes and attach the same security group here; this security group was also created by eksctl, which shows how highly automated the eksctl command is.
We use an EFS volume because one EFS volume can be attached to multiple instances, that is, multiple worker nodes, at the same time.
Now go back to the CLI and run the command given below.
aws ec2 describe-instances --query "Reservations[*].Instances[*].PublicIpAddress"
This command prints the public IP addresses of your worker nodes.
NOTE: if other instances are running in EC2 alongside your worker nodes, their public IPs will be listed too. In that case, identify which IPs belong to the worker nodes by checking in the web UI which instances run in the VPC created by eksctl.
Here, only worker nodes are running in my EC2 account, so all of these IPs belong to worker nodes.
Now we SSH into each of these worker nodes, because the 'amazon-efs-utils' package must be installed on every node that will mount the EFS volume.
To SSH into an instance / worker node we need the key downloaded on our system; this is the key we attached to the nodes when creating the cluster.
Here, the key attached to my nodes is "mykey". Use your own key name when logging in.
ssh -i mykey.pem -l ec2-user <IP Address>
After logging in to a node, switch to root and install the 'amazon-efs-utils' package with these commands:
sudo su - root # switch to the root user
yum install amazon-efs-utils -y # install the EFS mount helper
Repeat the same installation on each of the worker nodes (here, five in total).
EFS provisioner
Next, write a manifest that creates an EFS provisioner. Since we are using an EFS volume to store the MySQL data, the cluster needs a provisioner for our MySQL pods that can carve PersistentVolumes out of that EFS volume.
In the provisioner manifest, only two things need to change: the EFS volume ID, which you can copy from the EFS dashboard in the AWS console, and the NFS server address, which is the volume ID followed by '.efs.ap-south-1.amazonaws.com'.
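As a sketch, the key parts of such a provisioner manifest (based on the community efs-provisioner image; the volume ID shown is a placeholder you must replace with your own, and the names are assumptions) look like this:

```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: efs-provisioner
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:latest
          env:
            - name: FILE_SYSTEM_ID
              value: fs-xxxxxxxx             # your EFS volume ID
            - name: AWS_REGION
              value: ap-south-1
            - name: PROVISIONER_NAME
              value: my-efs/aws-efs          # referenced later by the StorageClass
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: fs-xxxxxxxx.efs.ap-south-1.amazonaws.com  # volume ID + EFS DNS suffix
            path: /
```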
- Now create an RBAC file that grants the provisioner the permissions it needs.
- Now create the storage for the MySQL database; for this we need a StorageClass (SC) and a PersistentVolumeClaim (PVC).
NOTE: For the complete code, visit my GitHub repository linked at the bottom.
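A sketch of these two objects (the names are assumptions; the provisioner value must match the PROVISIONER_NAME configured in the EFS provisioner):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: my-efs/aws-efs      # must match the EFS provisioner's PROVISIONER_NAME
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-mysql
spec:
  storageClassName: aws-efs      # ask this StorageClass to provision the volume
  accessModes:
    - ReadWriteMany              # EFS allows many pods/nodes to read and write
  resources:
    requests:
      storage: 10Gi
```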
- Next, deploy the MySQL database that the WordPress site will connect to.
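A sketch of the MySQL part, modeled on the standard Kubernetes WordPress example (the service, secret, and PVC names are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql       # WordPress reaches MySQL through this service name
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass        # secret generated by the Kustomization file
                  key: password
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-storage
              mountPath: /var/lib/mysql   # MySQL data lands on the EFS-backed volume
      volumes:
        - name: mysql-storage
          persistentVolumeClaim:
            claimName: efs-mysql          # assumed PVC name
```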
- And finally, deploy the WordPress site.
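A matching WordPress sketch, again modeled on the standard Kubernetes example (the names are assumptions); the LoadBalancer service is what later gives the site its public URL:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: LoadBalancer            # external URL, traffic spread across the pods
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - name: wordpress
          image: wordpress:latest
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql   # the MySQL service name
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass     # same secret as the database
                  key: password
          ports:
            - containerPort: 80
```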
Kustomization File
There are several files to apply one by one to deploy the WordPress site, which is tedious, so we can automate the process with a Kustomization file.
The MySQL database also needs a password, and this password is usually stored in a Secret so nobody can read it directly. We could create this Secret manually from the CLI, but a Kustomization file can also generate it for us.
The Kustomization file has two advantages: it applies all the files in order automatically, and if the Secret is generated through it, only you know the MySQL password, which is better for security.
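A kustomization.yaml for this setup could look like the sketch below; the file names and the placeholder password are assumptions:

```yaml
secretGenerator:
  - name: mysql-pass              # secret the MySQL and WordPress deployments reference
    literals:
      - password=YOUR_PASSWORD    # choose your own database password
resources:
  - create-efs-provisioner.yml
  - create-rbac.yml
  - create-storage.yml
  - deploy-mysql.yml
  - deploy-wordpress.yml
```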
After creating the Kustomization file, apply everything from the CLI with "kubectl create -k ."
kubectl create -k .
NOTE: 1. Run this command from the folder that contains all the files: the EFS provisioner, RBAC, storage, MySQL deployment, WordPress deployment, and Kustomization files.
2. All of these files are YAML, so their extension should be '.yml'.
You can now see the pods running with the MySQL database and the WordPress site, and all their data is stored on the EFS volume, so it is persistent and safe. If a pod suddenly goes down, the Deployment creates it again; because the data lives on the EFS volume rather than inside the pod, and the volume is mounted on all the worker nodes the pods run on, the new pod picks the data up again and nothing is lost.
kubectl get pods # for getting information about pods
kubectl get pods -o wide # for getting some more information about pods
To see the PVC, PV, and SC of the database:
kubectl get pvc
kubectl get pv
kubectl get sc
With the "kubectl get all" command we can get all the information about our cluster: how many pods and services are running, and our deployments such as the provisioner, WordPress, and MySQL.
kubectl get all # for seeing normal information about the cluster
kubectl get all -o wide # for getting some extra information about the cluster
- It also creates a LoadBalancer service for connecting to the WordPress site and distributing the traffic among the different pods.
Using that LoadBalancer URL we can open our WordPress site.
- Here we do some customization according to our needs: language, site name, username, password, etc.
- Give your site a name, and set your username and password.
- Now log in to your account.
- This is the dashboard, or console, of your WordPress site.
- Write a blog or article here and publish it.
- This is your published article or blog.
Now go to the CLI and run "kubectl get all"; you will see that all your pods and services are running well. But if you delete a pod, or a pod goes down for some reason, the Deployment creates it again.
kubectl get all # for seeing all information about the pods and services
kubectl delete pod wordpress-d5b76766c-14ph6 # here I intentionally deleted my WordPress pod to show the demo
Here you can see that the Deployment recreated the WordPress pod I deleted, with a new ID and without losing any data. You don't have to configure your site again, because all the pod's data goes to the EFS volume: the data stays safe when the pod is deleted, and when the Deployment creates the pod again the EFS volume is already mounted, so the pod reads all its data back from EFS. Your data remains safe even if a pod is deleted or goes down.
kubectl get all
After deleting the pod, go back to the web page of your WordPress site: your data is not lost even though you deleted the WordPress pod. Because EKS runs pods as containers using Docker technology, and Docker can launch a container within seconds, the site comes back almost instantly with essentially no downtime.
To delete all the services and pods in your cluster, run the "kubectl delete all --all" command.
kubectl delete all --all
Even after deleting all the services and pods, you don't lose your data: "kubectl delete all --all" does not delete the PVC, PV, or SC from your cluster, and all your data remains safe on your EFS volume.
To delete your cluster, run the "eksctl delete cluster -f <file name from which you created the cluster>" command.
eksctl delete cluster -f cluster.yml
NOTE: use the same file from which you created the cluster. Here, I created mine from 'cluster.yml'.
NOTE: Even after deleting your cluster, you don't lose the data on the EFS volume, because this command does not delete the EFS volume.
To delete the data permanently, you have to delete the EFS volume manually from the AWS console.
AWS EKS Fargate cluster
AWS Fargate is a technology that provides on-demand, right-sized compute capacity for containers. Amazon EKS integrates Kubernetes with AWS Fargate by using controllers that are built by AWS using the upstream, extensible model provided by Kubernetes.
With AWS Fargate, you no longer have to provision, configure, or scale groups of virtual machines to run containers. This removes the need to choose server types, decide when to scale your node groups, or optimize cluster packing.
You can control which pods start on Fargate and how they run with Fargate profiles, which are defined as part of your Amazon EKS cluster.
These controllers run as part of the Amazon EKS managed Kubernetes control plane and are responsible for scheduling native Kubernetes pods onto Fargate. They include a new scheduler that runs alongside the default Kubernetes scheduler, in addition to several mutating and validating admission controllers. When you start a pod that meets the criteria for running on Fargate, the Fargate controllers running in the cluster recognize, update, and schedule the pod onto Fargate.
Each pod running on Fargate has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another pod.
- AWS Fargate is not a global service; it is available in only some AWS regions. You can check which regions support Fargate in the AWS Fargate documentation.
- In this demo I create my cluster in ap-southeast-1, Asia Pacific (Singapore).
Now let's create the AWS Fargate cluster.
- First, configure the AWS CLI for the region in which you want to create the Fargate cluster. Run the "aws configure" command and enter your access key ID, secret access key, the target region, and the output format.
- If the AWS CLI is already configured with your access key ID, secret key, region, and output format, you only need to change the region name to the one where the Fargate cluster will be created.
Now write a configuration file describing what you want in your Fargate cluster. Here I use YAML format, but you could also write the code with Terraform.
Open the CLI and create the cluster with the "eksctl create cluster -f <file name containing the Fargate cluster code>" command.
eksctl create cluster -f fcluster.yml
NOTE: use the file name you gave to your Fargate cluster configuration.
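A minimal fcluster.yml sketch (the cluster name, region, and profile details are assumptions based on this walkthrough):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: far-lwcluster          # Fargate cluster name, used with update-kubeconfig
  region: ap-southeast-1       # Singapore, a region where Fargate is available

fargateProfiles:
  - name: fp-default
    selectors:
      # pods created in these namespaces are scheduled onto Fargate
      - namespace: default
      - namespace: kube-system
```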
Now check the AWS console in your selected region: under the EKS service you can see the Fargate cluster being created.
It may take 10 to 20 minutes to create the Fargate cluster. Keep checking every five minutes, and when the cluster status is "Active", open a new CLI and run the "aws eks update-kubeconfig --name <name of your fargate cluster>" command.
aws eks update-kubeconfig --name far-lwcluster #give your fargate cluster name
Once the status of your Fargate cluster is Active, you can also verify it from the CLI:
eksctl get cluster
We can check the nodes in the Fargate cluster and the number of running pods with:
kubectl get nodes
kubectl get nodes -o wide # for some extra information about nodes
kubectl get pods
When we create a pod in a Fargate cluster, Fargate first creates a node for it and then launches the pod on that node. The node is provisioned automatically: we don't create nodes ourselves, we only create pods, and the Fargate cluster scales the nodes on its own, launching new ones and placing pods on them as needed.
Here you can see that the Fargate cluster first launched a new node for our pod and then started the pod on that node.
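The pod in this demo can come from a manifest like this sketch (the httpd image is an assumption; any web server image works):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myweb            # the name used with "kubectl expose"
  labels:
    app: myweb
spec:
  containers:
    - name: myweb
      image: httpd       # assumed image; serves on port 80
      ports:
        - containerPort: 80
```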
We can also expose this pod through a Service of type LoadBalancer for connecting to it, by running "kubectl expose pod/myweb --type=LoadBalancer --port=80".
kubectl expose pod/myweb --type=LoadBalancer --port=80 # use the name you gave your pod
Using the URL of this LoadBalancer you can connect to the pod.
- You can delete the Fargate cluster just as you deleted the earlier AWS EKS cluster, by running the "eksctl delete cluster -f <file name from which the Fargate cluster was created>" command.
eksctl delete cluster -f fcluster.yml # use the file name you created the Fargate cluster from
That's all
For the code, please visit my GitHub repository.
- Any suggestions are most welcome!
- Have a query? Feel free to ask.
Thank you for Reading