Amazon EKS TASK
Hello readers! Thanks for visiting my page. I have successfully completed the task, and here is a self-reflection on my Amazon EKS training. A special thanks to Mr. Vimal Daga and the team at LinuxWorld Informatics Pvt Ltd; it wouldn't have been possible without their continuous support.
Let's get started with one of the most in-demand services in the market today: Amazon Elastic Kubernetes Service (Amazon EKS).
What is Amazon EKS ?
Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service.
Customers such as Intel, Snap, Intuit, GoDaddy, and Autodesk trust EKS to run their most sensitive and mission-critical applications.
Benefits - High Availability, Secure, Serverless option, Built with the Community.
Use cases - Hybrid Deployment, Batch Processing, Machine Learning, Web Applications.
Why use Amazon EKS ?
- Security
- Reliability
- Scalability
What are AWS On-Demand Instances ?
AWS On-Demand Instances are virtual servers that run in AWS Elastic Compute Cloud (EC2).
With On-Demand Instances, we pay for compute capacity by the second with no long-term commitments, and we have full control over the instance lifecycle: we decide when to launch, stop, hibernate, start, reboot, or terminate it.
What are Spot Instances ?
A Spot Instance is an unused EC2 instance that is available for less than the On-Demand price.
Since Spot Instances enable us to request unused EC2 capacity at steep discounts, we can lower our Amazon EC2 costs significantly.
Difference between Spot Instance and On-demand Instance in Amazon EC2
With a Spot Instance there is no commitment. As long as the bid price exceeds the Spot price, the user keeps the instance; as soon as the Spot price exceeds the bid price, Amazon interrupts the instance.
With an On-Demand Instance, the user pays the On-Demand rate specified by Amazon and keeps the instance for as long as they are willing to pay that rate.
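To compare the two pricing models for a given instance type, recent Spot prices can be queried with the AWS CLI. This is a sketch of the typical command (it requires the AWS CLI and configured credentials; the instance type and region here are just examples):

```shell
aws ec2 describe-spot-price-history \
    --instance-types t3.micro \
    --product-descriptions "Linux/UNIX" \
    --region ap-south-1 \
    --max-items 5
```

Comparing the returned prices against the published On-Demand rate shows the discount currently available.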
So let's look at the practical hands-on part also simultaneously -
cluster.yml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: lwcluster
  region: ap-south-1

nodeGroups:
  - name: ng1
    desiredCapacity: 2
    instanceType: t2.micro
    ssh:
      publicKeyName: mykey
  - name: ng2
    desiredCapacity: 1
    instanceType: t2.small
    ssh:
      publicKeyName: mykey
  - name: ng-mixed
    minSize: 2
    maxSize: 5
    instancesDistribution:
      maxPrice: 0.017
      instanceTypes: ["t3.micro", "t3.small"] # At least one instance type should be specified
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 50
      spotInstancePools: 2
    ssh:
      publicKeyName: mykey
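With eksctl installed and AWS credentials configured, the config file above can be applied as follows (a sketch of the typical workflow, assuming the file is saved as cluster.yml; the commands call AWS, so they are not executed here):

```shell
eksctl create cluster -f cluster.yml   # provisions the cluster and all three node groups
eksctl get nodegroup --cluster lwcluster --region ap-south-1
# When finished, tear everything down to stop billing:
eksctl delete cluster -f cluster.yml
```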
Instances created behind the scene on AWS
Spot-Instances created behind the scene on AWS
Note that Docker is already running here because of the EKS cluster. Also note that the number of pods we can launch on a node is limited, depending on the instance type and how many network interfaces are attached to it.
For example, on a t2.micro only 4 pods can be launched, while on a t2.small 11 pods can be launched, and so on for other instance types.
You can refer below to learn more about instance types and how many pods we can launch on them.
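The per-node pod limit comes from the number of elastic network interfaces (ENIs) an instance type supports and the IPv4 addresses each ENI can hold. A commonly used formula for the AWS VPC CNI is maxPods = ENIs × (IPv4 per ENI − 1) + 2, which reproduces the numbers above (the ENI counts below are the published values for these types):

```shell
# Max pods with the AWS VPC CNI: ENIs * (IPv4 addresses per ENI - 1) + 2
max_pods() {
  enis=$1; ips_per_eni=$2
  echo $(( enis * (ips_per_eni - 1) + 2 ))
}

max_pods 2 2   # t2.micro: 2 ENIs x 2 IPs per ENI  -> 4
max_pods 3 4   # t2.small: 3 ENIs x 4 IPs per ENI  -> 11
```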
Let's talk about a beautiful service called AWS Fargate, which gives us a serverless architecture.
AWS Fargate is a serverless compute engine for containers that works with both Amazon ECS and Amazon EKS. It makes it easy for us to focus on building and operating our applications, whether we are running them on ECS or EKS. Using Fargate, we can also achieve rich observability of our applications.
fcluster.yml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: far-lwcluster
  region: ap-southeast-1

fargateProfiles:
  - name: fargate-default
    selectors:
      - namespace: kube-system
      - namespace: default
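The Fargate cluster is created the same way as before (a sketch, assuming the file is saved as fcluster.yml; requires AWS credentials):

```shell
eksctl create cluster -f fcluster.yml   # no worker nodes to manage; matching pods run on Fargate
kubectl get nodes                       # each running pod appears as its own "fargate-*" node
```

Pods in the kube-system and default namespaces match the profile's selectors and are scheduled onto Fargate automatically.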
Whenever we run an eksctl command, it is just an automation program: behind the scenes it generates a template and sends it to AWS CloudFormation, and CloudFormation is the one that actually does everything for us.
CloudFormation is the one that contacts VPC and creates subnets for us, contacts EC2 and launches instances for us, and performs many more things behind the scenes.
In AWS, we have independent services for everything, and if one service wants to communicate with another service, it requires some power or permission. This kind of permission is known as a ROLE (an IAM role).
After the cluster is configured, if we want to access it as a client (here using eksctl), we must log in to AWS with credentials that have the power to do so.
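In practice, this login step means configuring credentials for an IAM user that has access to the cluster, then writing a kubeconfig entry for it. A sketch of the usual commands (cluster name and region follow the earlier config; requires the AWS CLI):

```shell
aws configure                       # supply the IAM user's access key and secret key
aws eks update-kubeconfig --name lwcluster --region ap-south-1
kubectl config current-context      # confirm kubectl now points at the EKS cluster
```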
Why use EFS instead of EBS ?
Note that if we use EBS for persistent storage, we can end up in trouble: EKS launches pods across different data centers (Availability Zones), and an EBS volume can only be attached to instances in the same data center. So it is recommended to use EFS.
Amazon EFS is a regional service designed for high availability and durability. The implemented part is shown below -
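Before the manifests below can work, an EFS file system has to exist and be reachable from the worker nodes. A sketch of the setup (requires the AWS CLI; the creation token is an example name):

```shell
aws efs create-file-system --creation-token eks-efs --region ap-south-1
# Then create one mount target per cluster subnet and allow NFS (port 2049)
# from the worker-node security group. Note the returned FileSystemId;
# it is the value used for FILE_SYSTEM_ID in the provisioner manifest.
```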
Here's the code of the task performed & its output -
create-efs-provisioner.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-provisioner
spec:
  selector:
    matchLabels:
      app: efs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:v0.1.0
          env:
            - name: FILE_SYSTEM_ID
              value: fs-c1b83210
            - name: AWS_REGION
              value: ap-south-1
            - name: PROVISIONER_NAME
              value: himanshu/nfs-eks
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: fs-c1b83210.efs.ap-south-1.amazonaws.com
            path: /
create-rbac.yaml
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nfs-provisioner-role-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
create-storage.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs-efs
provisioner: himanshu/nfs-eks # must match PROVISIONER_NAME in the provisioner Deployment
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-wordpress
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-mysql
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
deploy-mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: efs-mysql
deploy-wordpress.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:4.8-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: efs-wordpress
kustomization.yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
  - name: mysql-pass # must match the secretKeyRef name used in the Deployments
    literals:
      - password=bGludXg=
resources:
  - create-efs-provisioner.yaml
  - create-rbac.yaml
  - create-storage.yaml
  - deploy-mysql.yaml
  - deploy-wordpress.yaml
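One detail worth noting in the secretGenerator: literals are plain values that kustomize base64-encodes itself when it generates the Secret. The literal used here is already the base64 encoding of "linux", so the resulting MySQL password is the literal string "bGludXg=" unless that was the intent. This is easy to check locally:

```shell
# Decode the literal used above. Since secretGenerator encodes literals
# again, the password the pods receive is the string "bGludXg=" itself.
printf 'bGludXg=' | base64 -d && echo   # prints: linux
```

The whole stack is then deployed with kubectl apply -k . from the directory containing these files.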
OUTPUT -
What is HELM ?
Helm is a package manager for Kubernetes that helps us manage Kubernetes applications. It is a graduated project in the CNCF and is maintained by the Helm community.
Helm Charts help us define, install, and upgrade even the most complex Kubernetes application. Charts are easy to create, version, share, and publish — so start using Helm and stop the copy-and-paste.
Here is the list of commands I performed, as mentioned below -
helm init
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm repo list
helm search -l
kubectl create ns lw1
kubectl get pods -n kube-system
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade
helm install --name my-release stable/jenkins --namespace lw1
helm install stable/prometheus --namespace prometheus --set alertmanager.persistentVolume.storageClass="gp2" --set server.persistentVolume.storageClass="gp2"
kubectl -n prometheus port-forward svc/listless-boxer-prometheus-server 8888:80
helm install stable/grafana --namespace grafana --set persistence.storageClassName="gp2" --set adminPassword=redhat --set service.type=LoadBalancer
Prometheus - An open-source monitoring system originally developed by engineers at SoundCloud in 2012. To see the Prometheus dashboard here, we have to perform port forwarding.
Grafana - An open-source analytics and monitoring solution for every database. It allows us to query, visualize, alert on, and understand our metrics no matter where they are stored.
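Since Grafana above was installed with service.type=LoadBalancer, its public endpoint can be looked up once the service is provisioned (a sketch; requires the running cluster):

```shell
kubectl get svc -n grafana   # the EXTERNAL-IP column shows the load balancer URL for the dashboard
```

Logging in with admin and the adminPassword set during installation opens the dashboard.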
Thanks for reading !
If you have any suggestions or doubts, feel free to contact me or comment below.