AWS EKS
This article walks through deploying technologies on AWS EKS.
In this fast-moving modern world there is a huge need for automation in technology. Be it website development or the AI field, everything needs automation.
It has now become the era of automation. To cope with this, a separate field has arisen in the technology community:
DEVOPS
DevOps is a set of practices that works to automate and integrate the processes between software development and IT teams, so they can build, test, and release software faster and more reliably. The term DevOps was formed by combining the words “development” and “operations” and signifies a cultural shift that bridges the gap between development and operations teams, which historically functioned in silos.
In a recent poll, participants indicated which quadrant of the DevOps continuum their organizations fit into:
- 55% Bottom Left
- 26% Bottom Right
- 14% Top Left
- 5% Top Right
DevOps has many tools, like Kubernetes, Git, Docker, Jenkins, Ansible, Nagios, etc.
This article is based on Kubernetes, which is a core DevOps tool.
First, what is Kubernetes?
Kubernetes (also known as k8s or “kube”) is an open source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications.
In other words, you can cluster together groups of hosts running Linux containers, and Kubernetes helps you easily and efficiently manage those clusters.
Kubernetes clusters can span hosts across on-premise, public, private, or hybrid clouds. For this reason, Kubernetes is an ideal platform for hosting cloud native applications that require rapid scaling, like real-time data streaming through Apache Kafka.
Kubernetes was originally developed and designed by engineers at Google. Google was one of the early contributors to Linux container technology and has talked publicly about how everything at Google runs in containers. (This is the technology behind Google’s cloud services.)
To set up Kubernetes on our own system we need resources like RAM, CPU, GPU, networking, etc.
We can set up a Kubernetes cluster on our system using Minikube.
After setting up the cluster by installing Minikube, we have to create master and worker nodes, update the config file, install and set up kubectl to manage the cluster, and so on.
There is a lot of preparation behind setting up a Kubernetes cluster. This takes time and may lead to server downtime in critical cases.
Also, suppose you have a new startup: you may not know whether it will be successful or unfortunately turn out to be a failure. You cannot invest millions in a new startup whose outcome is uncertain.
This is where cloud computing comes into play. Cloud services like AWS provide the required resources for you on hourly pricing.
AWS has services like EC2 (compute), EFS (file storage), S3 (object storage), Elastic Load Balancing, security services, etc. This gives us the ability to launch our systems in the cloud.
You can destroy them whenever you want to stop paying.
Now, what is EKS?
Amazon EKS is a managed service that helps make it easier to run Kubernetes on AWS. Through EKS, organizations can run Kubernetes without installing and operating a Kubernetes control plane or worker nodes. Simply put, EKS is a managed containers-as-a-service (CaaS) that drastically simplifies Kubernetes deployment on AWS.
Benefits of Amazon EKS: Why use EKS?
Through EKS, normally cumbersome steps are done for you, like creating the Kubernetes master cluster, as well as configuring service discovery, Kubernetes primitives, and networking. Existing tools will more than likely work through EKS with minimal mods, if any.
With Amazon EKS, the Kubernetes control plane--including the backend persistence layer and the API servers--is provisioned and scaled across various AWS availability zones, resulting in high availability and eliminating a single point of failure. Unhealthy control plane nodes are detected and replaced, and patching is provided for the control plane. The result is a resilient AWS-managed Kubernetes cluster that can withstand even the loss of an availability zone.
Organizations can choose to run EKS using AWS Fargate--a serverless compute engine for containers. With Fargate, there’s no longer a need to provision and manage servers; organizations can specify and pay for resources per application. Fargate, through application isolation by design, also improves security.
And of course, as part of the AWS landscape, EKS is integrated with various AWS services, making it easy for organizations to scale and secure applications seamlessly. From AWS Identity and Access Management (IAM) for authentication to Elastic Load Balancing for load distribution, the straightforwardness and convenience of using EKS can’t be overstated.
Let's start creating our EKS cluster.
The simplest way to look at EKS is that it’s AWS’ offering for Kubernetes-as-a-service. As mentioned, EKS significantly simplifies the management and maintenance of highly-available Kubernetes clusters in AWS.
Amazon EKS: Two Main Components
Control Plane
The Control Plane consists of three Kubernetes master nodes that run in three different availability zones (AZs). All incoming traffic to the Kubernetes API comes through a network load balancer (NLB). It runs on a virtual private cloud controlled by Amazon. Hence, the Control Plane can’t be managed directly by the organization and is fully managed by AWS.
Worker Nodes
Worker Nodes run on Amazon EC2 instances in the virtual private cloud controlled by the organization. Any instance type in AWS can be used as a worker node. These worker nodes can be accessed through SSH or provisioned with automation.
A cluster of worker nodes runs an organization’s containers while the control plane manages and monitors when and where containers are started.
Due to the flexibility of the EKS layout, organizations can deploy a Kubernetes cluster (an EKS cluster) for each application. Organizations can also use just one EKS cluster to run more than one application via Kubernetes namespaces and AWS IAM configurations.
Without EKS, organizations would have to run the Control Plane and Worker Nodes. Through EKS, worker nodes are provisioned through a single command in the EKS console, CLI, or API, while AWS provisions, scales, and manages the Control Plane securely. The result is that organizations are freed from the operational burden of running Kubernetes and maintaining the infrastructure.
Amazon EKS: How It Works
Organizations can granularly control access permissions to Kubernetes masters by assigning RBAC roles directly to IAM entities. By doing this, you can easily manage Kubernetes clusters through standard tools like kubectl.
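As a sketch of how this mapping looks in practice, IAM-to-RBAC mapping lives in the aws-auth ConfigMap in the kube-system namespace. The account ID, role name, and user name below are hypothetical placeholders, not values from this article:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # hypothetical node role ARN - replace with your own
    - rolearn: arn:aws:iam::111122223333:role/eks-node-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    # hypothetical IAM user granted cluster-admin via system:masters
    - userarn: arn:aws:iam::111122223333:user/dev-user
      username: dev-user
      groups:
        - system:masters
```

Once an IAM user or role is mapped here, it can talk to the cluster with kubectl using the same credentials configured for the AWS CLI.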
Another option is to use PrivateLink for those who want to access Kubernetes masters via Amazon VPC. The Amazon EKS endpoint and Kubernetes masters will appear as an elastic network interface with private IPs in the Amazon VPC when using PrivateLink.
But in this article we use the first option: IAM-backed RBAC managed through kubectl.
Software required:
>AWS CLI: https://docs.aws.amazon.com/cli/latest/userguide/install-windows.html
>kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
>eksctl:
After installing, check the versions with these commands:
aws --version
eksctl version
Now let's start.
First, we have to configure the AWS profile.
Go to AWS console > IAM > Users > Add user.
Give the user AdministratorAccess.
Save the access keys in a file for future use.
#Now go to cmd and configure the profile.
aws configure
#to check for clusters
eksctl get cluster
Types of Kubernetes clusters on AWS
1. Creating the EKS cluster
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: cluster
  region: ap-south-1
nodeGroups:
  - name: ng1
    desiredCapacity: 2
    instanceType: t2.micro
    ssh:
      publicKeyName: key2
  - name: ng2
    desiredCapacity: 1
    instanceType: t2.small
    ssh:
      publicKeyName: key2
#to create cluster
eksctl create cluster -f cluster.yml
It takes around 20 minutes to create the cluster.
EC2 instances are created as the nodes.
We use kubectl to manage Kubernetes. For this we have to update the kubeconfig file.
aws eks update-kubeconfig --name=clustername
#for info about cluster
kubectl cluster-info
#for info about nodes
kubectl describe nodes "nodename"
We can run our pods, deployments, etc. in a separate environment by creating a namespace.
kubectl create namespace kube-ns
#to change the ns to our ns
kubectl config set-context --current --namespace=kube-ns
We can also create node groups that mix on-demand and Spot instances, which scale at a lower cost.
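As a hedged sketch, a mixed on-demand/Spot node group can be declared in the eksctl config file with an instancesDistribution block; the group name, sizes, and instance types below are illustrative choices, not values from this article:

```yaml
nodeGroups:
  - name: mixed-ng                               # illustrative name
    minSize: 1
    maxSize: 4
    instancesDistribution:
      instanceTypes: ["t3.small", "t3.medium"]   # pools Spot can draw from
      onDemandBaseCapacity: 1                    # keep one on-demand node as a floor
      onDemandPercentageAboveBaseCapacity: 0     # everything above the base is Spot
      spotInstancePools: 2                       # spread Spot requests across 2 pools
```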
2. Creating a Fargate cluster
AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers.
In simple words, a Fargate cluster launches nodes dynamically when needed.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: far-cluster
  region: ap-southeast-1
fargateProfiles:
  - name: fargate-default
    selectors:
      - namespace: kube-system
      - namespace: default
Thus the Fargate cluster is created.
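Since the Fargate profile above selects the default namespace, any pod created there should be scheduled onto Fargate capacity. A minimal sketch of a test pod (the pod name and image are arbitrary choices) might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fargate-test        # arbitrary name for this test
  namespace: default        # matches the fargate-default profile selector
spec:
  containers:
    - name: web
      image: nginx          # any container image works here
```

Running kubectl get pod fargate-test -o wide should then show a node name beginning with fargate-, confirming the pod landed on Fargate rather than EC2.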
Integration of MySQL and WordPress
In this setup, we use EFS as the storage class.
Creating EFS
Go to EFS > Create file system.
Note: the EFS must be created in the same VPC as the cluster, and its security group should be the same as that of the EC2 instances.
Some files have to be run to set up the EFS storage class.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-provisioner
spec:
  selector:
    matchLabels:
      app: efs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:v0.1.0
          env:
            - name: FILE_SYSTEM_ID
              value: efs_id
            - name: AWS_REGION
              value: ap-south-1
            - name: PROVISIONER_NAME
              value: yash/aws-efs
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: efs_dns
            path: /
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nfs-provisioner-role-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: wpns
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: yash/aws-efs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-wordpress
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-mysql
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
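The deployment files consume these claims as pod volumes. As a rough sketch of the relevant part of a MySQL deployment (the volume name is an illustrative choice; container details are abbreviated):

```yaml
# fragment of a deployment spec, assuming the efs-mysql claim above
spec:
  template:
    spec:
      containers:
        - name: mysql
          image: mysql:5.6
          volumeMounts:
            - name: mysql-storage           # illustrative volume name
              mountPath: /var/lib/mysql     # MySQL data directory
      volumes:
        - name: mysql-storage
          persistentVolumeClaim:
            claimName: efs-mysql            # PVC defined above
```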
The MySQL and WordPress deployment files can be obtained from the GitHub link below:
Run the kustomization.yml file below:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
  - name: mysql-pass
    literals:
      - password=redhat
resources:
  - efs-provisioner.yml
  - efs-rbac.yml
  - efs-storage.yml
  - deploy-mysql.yaml
  - deploy-wordpress.yaml
Next, we have to install some software inside the worker nodes for EFS.
Install the amazon-efs-utils package from yum (sudo yum install -y amazon-efs-utils).
Do this on all nodes.
Finally, MySQL is integrated with WordPress.
Integration of Prometheus and Grafana
In EKS we can get software from Helm charts.
helm charts setup link:
Initializing Helm:
helm init
Now add the stable repo with the command below.
helm repo add stable https://kubernetes-charts.storage.googleapis.com
Now run the commands below to launch Prometheus.
# kubectl -n kube-system create serviceaccount tiller
# kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
# helm init --service-account tiller
# kubectl get pods --namespace kube-system
# kubectl create namespace prometheus
# helm install stable/prometheus --namespace prometheus --set alertmanager.persistentVolume.storageClass="gp2" --set server.persistentVolume.storageClass="gp2"
Now, to install Grafana:
# kubectl create namespace grafana
# helm install stable/grafana --namespace grafana --set persistence.storageClassName="gp2" --set adminPassword=toor --set service.type=LoadBalancer
Now run
kubectl get svc -n grafana
There you will get a link like this: http://ad1287e77479340539738ccf061b1a2e-538395078.ap-south-1.elb.amazonaws.com/ This is the Grafana URL.
Now log in with admin and the password you set.
You can build dashboards like this.
For building dashboards, refer to my previous article.
Thus we have learnt the core concepts of EKS and implemented them by integrating these technologies.
Queries and suggestions are welcome.
Thank you.