Amazon EKS Training
We had a great opportunity to attend this expert-level training on Amazon Elastic Kubernetes Service (EKS). Below I summarize the topics and concepts we learnt throughout the training under the world record holder Mr. Vimal Daga sir.
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on AWS without needing to stand up or maintain your own Kubernetes control plane (master). An EKS cluster can integrate with services such as EBS, ELB, EFS, EC2 and VPC for storage, networking, compute and more, and the whole cluster is managed by AWS. That is why this product is so popular in the industry and growing so fast: it reduces operational work with much more efficiency. Now that we have seen an overview of EKS, let us start the topic with Kubernetes.
Kubernetes is a container-orchestration system for automating application deployment, scaling and management. It provides a platform for running application containers across clusters of hosts (nodes), and it works with a range of container tools, including Docker. Many cloud providers offer a Kubernetes-based platform, infrastructure as a service, or Kubernetes as a service on which Kubernetes can be consumed as a managed platform. One such provider is Amazon, and the product is EKS. Kubernetes launches pods on top of containers and manages and monitors them; it has different components such as the kube-scheduler, kube-controller-manager, kubelet and kubectl, spread across the master and worker nodes, to manage and monitor pods on the go. There are also different services for every need, which we will discuss as we go.
There is a web GUI on the Amazon EKS service page which can be used to launch and manage EKS clusters, but in the growing world of automation and dynamic needs the CLI is the better choice, because it provides many options and features that industry and real use cases need. To use EKS from the CLI we need a program; one famous industry-level tool is eksctl, which we can use to create and manage EKS clusters and much more. To install it in your system click - EKSCTL Installation.
To get started you also need the AWS CLI software, so make sure you have it. Before configuring it, go to IAM in AWS, create a separate IAM user with admin power, and use its credentials to do `aws configure` in the CLI. After doing this part we are ready to use EKS and eksctl. First we will create a simple cluster to understand the workings of EKS: we describe the cluster in a YAML file and run it from the CLI.
```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: mynewcluster
nodeGroups:
  - name: node1
    desiredCapacity: 2
    instanceType: t2.micro
```
After creating the above file, save it (for example as samplecluster.yml) and run it using the command `eksctl create cluster -f samplecluster.yml`. Once the CLI reports success, go to the AWS GUI: under EC2 you can see two instances created and running with the Amazon Linux AMI, with security groups and everything configured; storage has been created in EBS; and a CloudFormation stack has been created for you with a new subnet. You can now see the power of eksctl, which configures everything for you on the go with a few simple commands.
We can also start our own minikube on our base OS and update its kubeconfig with the EKS cluster using the command `aws eks update-kubeconfig --name mynewcluster`. That is it: after running this command our own kubectl is configured to manage the whole EKS cluster, and we can use our usual kubectl commands to run and manage everything on the EKS cluster we just started. The architecture looks like the below image.
Then we learnt about the concept of a load balancer, which gives us a public IP, distributes traffic across the pods, and presents a single IP to clients so that no one sees downtime due to traffic spikes or other circumstances. In AWS this is managed by ELB. To expose our pods so that outside clients can connect, we simply run the expose command, and an ELB-backed service is created that manages everything for us. Then we created a PVC, which automatically creates a PV and gets storage from our storage class, backed by EBS. We can also create a storage class with an io1 volume, a more effective type of volume, and give it a Retain reclaim policy rather than the default Delete policy; all of this can be written in a YAML file and run from the CLI. After this, to make our new storage class the default, we run the `kubectl edit sc` command and set the annotation `storageclass.kubernetes.io/is-default-class: "true"`, and our new storage class is ready. And finally, the whole cluster can be deleted with the `eksctl delete cluster` option.
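As a sketch, the custom storage class and PVC described above might look like the following (the metadata names, IOPS value and requested size are my assumptions, not from the training):

```yaml
# EBS-backed StorageClass using the io1 volume type with a Retain policy,
# so the PV survives even after its claim is deleted
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mysc                      # assumed name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  iopsPerGB: "10"                 # assumed value
reclaimPolicy: Retain
---
# PVC that requests storage from the class above; a PV is provisioned automatically
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc                     # assumed name
spec:
  storageClassName: mysc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi                # assumed size
```

Apply it with `kubectl create -f` and check the result with `kubectl get sc,pv,pvc`.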
Now, using the YAML file we can specify different flavors of instances to launch inside our node groups, such as t2.micro or t2.small, so we can launch any number of instances of any flavor for our needs. We can also set a minimum and maximum limit of nodes in a node group, so that fewer or more than that number of instances cannot be launched in that particular group; we can use on-demand or spot instances; and we can scale a group using the `eksctl scale nodegroup` command to increase or decrease the node count and limits. There is also a limitation: each instance type supports only a certain number of network cards, known here as ENIs, so there is a cap on the number of pods that can run on an instance. For example, a t2.micro supports only 2 ENIs with 2 IP addresses each, so a maximum of 4 pods can run on it. Also, eksctl creates a subnet and DHCP options that set an IP range, so only that many IP addresses are available; beyond that, any service would fail.
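For example, a cluster config with two node groups of different flavors and node limits might look like this sketch (all names, counts and the region are my assumptions):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: mynewcluster
  region: ap-south-1        # assumed region
nodeGroups:
  - name: ng1               # t2.micro group with min/max node limits
    instanceType: t2.micro
    desiredCapacity: 2
    minSize: 1
    maxSize: 3
  - name: ng2               # a second group with a different flavor
    instanceType: t2.small
    desiredCapacity: 1
```

A group can later be resized with something like `eksctl scale nodegroup --cluster=mynewcluster --name=ng1 --nodes=3`.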
We then learnt about the Fargate service, a sub-service of ECS in AWS. Fargate is used to create a serverless architecture, and we can integrate it with EKS. This integration is based on Fargate profiles, which we can also write in a YAML file while launching any Kubernetes application. The beauty of Fargate is that it creates and manages instances on the go, i.e. whenever a need arises, Fargate creates and provisions instances on the fly.
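A minimal sketch of an EKS cluster config with a Fargate profile might look like this (the cluster name, region and namespace are my assumptions):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: fargate-cluster     # assumed name
  region: ap-southeast-1    # assumed; Fargate for EKS is available only in selected regions
fargateProfiles:
  - name: fp-default
    selectors:
      # pods created in this namespace get scheduled onto Fargate
      - namespace: default
```

Create it with `eksctl create cluster -f`, then point kubectl at it with `aws eks update-kubeconfig --name fargate-cluster`.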
Then we came to know about Helm, which is a package manager for Kubernetes. Here packages are called charts, and they contain Kubernetes-ready applications that can be installed and used directly in the cluster. To install Helm and use it from our base-OS CLI, we need to download helm and tiller and set environment variables for them. Helm is the client program and Tiller is its server, so before using Helm we need to configure Tiller, which we can do by running the below commands:
```
$ helm init
$ helm repo add stable https://kubernetes-charts.storage.googleapis.com/
$ helm repo list
$ kubectl -n kube-system create serviceaccount tiller
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller
```
After doing these, we installed Prometheus and Grafana inside their own namespaces using Helm, and then monitored the metrics of the Kubernetes cluster using the cluster-monitoring dashboard for Kubernetes.
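The Helm 2-era installs might look like this transcript (the release names and namespaces are my assumptions; the charts come from the stable repo added earlier):

```
$ kubectl create namespace prometheus
$ helm install stable/prometheus --name prometheus --namespace prometheus
$ kubectl create namespace grafana
$ helm install stable/grafana --name grafana --namespace grafana
$ kubectl -n prometheus get pods
```

Once the pods are running, the Grafana service can be exposed and the Prometheus server added as its data source.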
We also created a provisioner class to get our PV volumes from EFS: we used the efs-provisioner in a YAML file, created and configured access to our NFS/EFS server using a create-rbac.yml file, and then created a storage class and mounted the volumes. Thus we can get persistent storage using these concepts. Note that on Amazon Linux we need the efs-utils tools, so we can either go inside the instances manually and run the `yum install` command to install them, or use any automation method to install them inside the instances.
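As a sketch, the EFS-backed storage class and claim might look like this (the provisioner name must match whatever the efs-provisioner deployment was configured with; all names and sizes here are my assumptions):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs                 # assumed name
provisioner: example.com/aws-efs   # must match the efs-provisioner's configured name
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-claim               # assumed name
spec:
  storageClassName: aws-efs
  accessModes:
    - ReadWriteMany             # EFS allows many nodes to mount the volume at once
  resources:
    requests:
      storage: 5Gi              # assumed size
```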
Finally, sir showed us a project in which we created two YAML files, one for WordPress and the other for MySQL, in which we configured services like a deployment and a load balancer, storage concepts like PVCs, labels/selectors, and a secret, with an EFS volume mounted at the /var/www/html folder to make our data persistent. Then we made a kustomization file to run the two files together in the cluster; after this, just run the `kubectl create -k` command and the whole setup is configured. If a pod is deleted or fails, our deployment will automatically create another pod, and the data will be retained because the data in our pods is persistent. We can also run this on Fargate using the fargateProfiles option by mentioning our cluster name in the YAML file, and we update our kubectl for the Fargate cluster by running the `aws eks update-kubeconfig` command with the cluster name. Also note that Fargate is available only in selected regions, so be cautious while launching it. Thus Fargate runs only the master, and whenever a need arises it provides workers on the fly, along with auto scaling and many other things.
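The kustomization file tying the two manifests together might look like this sketch (the file names and the secret literal are my assumptions):

```yaml
# kustomization.yaml - generates the MySQL password secret and applies both manifests
secretGenerator:
  - name: mysql-pass
    literals:
      - password=YOUR_PASSWORD     # assumed placeholder, replace with a real password
resources:
  - mysql-deployment.yml
  - wordpress-deployment.yml
```

Running `kubectl create -k .` from the directory containing these files creates the secret, both deployments and their services in one shot.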
Thus I conclude my article. We also learnt many other thought-provoking topics in this training, along with many new tools and their functions. EKS integrated with Fargate is a powerful setup, as EKS itself is an in-demand service in the market.