Amazon EKS - Deploying Kubernetes Cluster on AWS Cloud


AWS:

AWS (Amazon Web Services) is a comprehensive, evolving cloud computing platform provided by Amazon that includes a mixture of infrastructure as a service (IaaS), platform as a service (PaaS) and packaged software as a service (SaaS) offerings. AWS services can offer an organization tools such as compute power, database storage, and content delivery services.

What is EKS?

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on AWS without needing to stand up or maintain your own Kubernetes control plane. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.

Amazon EKS runs Kubernetes control plane instances across multiple Availability Zones to ensure high availability. Amazon EKS automatically detects and replaces unhealthy control plane instances, and it provides automated version upgrades and patching for them.



Working of Amazon EKS

Amazon EKS runs and manages the Kubernetes control plane for you, while worker nodes are launched in your own AWS account and register with the cluster.

Steps to start Amazon EKS

Following are the steps to start Amazon EKS: 

  • Create an Amazon EKS cluster with the AWS Management Console, the AWS CLI, or one of the AWS SDKs. 
  • Launch worker nodes that register with the Amazon EKS cluster. 
  • An AWS CloudFormation template can be provisioned to configure the nodes automatically. 
  • When the cluster is ready, configure the Kubernetes tools (such as kubectl) that your applications need to communicate with the cluster. 
  • The Amazon EKS cluster can then be used to deploy and manage applications in the same manner as any other Kubernetes environment. 
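The steps above map to a handful of CLI commands. As a rough sketch (the cluster name and region here are placeholders, not values from this walkthrough):

```shell
# 1. Create the cluster (here via eksctl; the console or SDKs work too).
eksctl create cluster --name my-cluster --region ap-south-1

# 2-3. eksctl provisions and registers the worker nodes through
#      CloudFormation on our behalf.

# 4. Point kubectl at the new cluster.
aws eks update-kubeconfig --name my-cluster

# 5. From here, deploy applications as on any other Kubernetes cluster.
kubectl get nodes
```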

A Kubernetes cluster can be created in two ways: 

  • With eksctl: ‘eksctl’ is a simple command-line utility for creating and managing Kubernetes clusters on Amazon EKS. This is considered one of the quickest and simplest methods of creating a cluster with Amazon EKS. 
  • With the AWS Management Console: In this method, the user manually creates every required resource in the Amazon EKS or AWS CloudFormation consoles. This is a more complicated and time-consuming method of creating and working with Amazon EKS. 

CREATING A KUBERNETES CLUSTER USING AWS EKS:

  1. Configure AWS on the CLI by running aws configure and entering an access key ID, secret access key, and default region.

2. Although we can create a cluster with the aws CLI directly, it doesn't provide many options. Thus, we will use eksctl for creating the cluster.

eksctl is a simple CLI tool for creating clusters on EKS - Amazon's new managed Kubernetes service for EC2. It is written in Go, uses CloudFormation, was created by Weaveworks and it welcomes contributions from the community.

To download it, refer to https://github.com/weaveworks/eksctl

Then set up the path in system variables and check its version.

eksctl version

3. Create a cluster.yml file to create the cluster on Amazon EKS.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig


metadata:
  name: lwcluster
  region: ap-south-1


nodeGroups:
  - name: ng1
    desiredCapacity: 1
    instanceType: t2.micro
    ssh:
        publicKeyName: ec2_key
  - name: ng2
    desiredCapacity: 1
    instanceType: t2.micro
    ssh:
        publicKeyName: ec2_key
  - name: ng3
    minSize: 1
    maxSize: 3
    ssh:
        publicKeyName: ec2_key
    instancesDistribution:
      maxPrice: 0.017
      instanceTypes: ["t2.micro"] # At least one instance type should be specified
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 50
      spotInstancePools: 2
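As a sanity check on the ng3 settings: with onDemandBaseCapacity: 0 and onDemandPercentageAboveBaseCapacity: 50, roughly half of the capacity above the base is On-Demand and the rest is Spot. A quick sketch of that arithmetic in shell (the desired count of 4 is hypothetical; the actual split is decided by the EC2 Auto Scaling group):

```shell
desired=4   # hypothetical number of instances in ng3
base=0      # onDemandBaseCapacity
pct=50      # onDemandPercentageAboveBaseCapacity

above=$((desired - base))                 # capacity above the base
on_demand=$((base + above * pct / 100))   # base is always On-Demand
spot=$((desired - on_demand))
echo "${on_demand} on-demand, ${spot} spot"   # prints: 2 on-demand, 2 spot
```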

4. To launch the cluster on EKS, run the following command.

eksctl create cluster -f cluster.yml


5. To check whether the cluster was created, run the following command.

eksctl get cluster

6. Update the kubeconfig file so that kubectl can connect to the cluster from the outside world. Run the following command.

aws eks update-kubeconfig --name lwcluster

7. Now, kubectl is connected to the EKS cluster we created on AWS. You can verify this with kubectl get nodes.


DEPLOYING WORDPRESS AND MYSQL MULTI-TIER ARCHITECTURE ON TOP OF THE EKS CLUSTER:

  1. Create a wordpress-deployment.yaml file with the following code snippet.

apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim


2. Create a mysql-deployment.yaml file with the following code snippet.

apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim


3. Create a kustomization.yaml file with the following code snippet.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
- name: mysql-pass
  literals:
  - password=redhat
resources:
  - mysql-deployment.yaml
  - wordpress-deployment.yaml
  


4. Run the following command from the directory containing kustomization.yaml to deploy both applications.

kubectl apply -k ./

5. To access the WordPress site in your browser, use the URL assigned to the wordpress LoadBalancer service.
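The LoadBalancer URL can be read from the Service object once the ELB is provisioned (the service name wordpress comes from the manifest above):

```shell
# List the service; the EXTERNAL-IP column shows the ELB hostname.
kubectl get services wordpress

# Or extract the hostname directly from the Service status.
kubectl get service wordpress \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```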


Deploying Jenkins using Helm

Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application. Charts are easy to create, version, share, and publish — so start using Helm and stop the copy-and-paste.


Tiller is the server-side component of Helm (v2) that actually communicates with the Kubernetes API to manage our Helm packages. It runs on the Kubernetes cluster, listens for commands from the helm client, and handles the configuration and deployment of software releases on the cluster.

1. Set up the path for helm and tiller in system variables.


2. Run the following commands.

# helm init

# helm repo add stable https://kubernetes-charts.storage.googleapis.com/

# helm repo list

# helm repo update

# kubectl -n kube-system create serviceaccount tiller

# kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

# helm init --service-account tiller

 

# kubectl get pods --namespace kube-system

3. Launch Jenkins on EKS with helm install.
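The exact command from the original screenshots is not preserved; with Helm 2 and the stable repo configured as above, launching Jenkins typically looks like this (the release name jenkins is an assumption):

```shell
# Install the stable Jenkins chart as a release named "jenkins" (Helm 2 syntax).
helm install --name jenkins stable/jenkins

# Watch the Jenkins pod come up.
kubectl get pods
```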

4. To access the Jenkins web UI, we need to port-forward the pod.

kubectl port-forward <jenkins-pod-name> 8080:8080

5. The password of Jenkins's admin user is stored in a Kubernetes secret created by the chart; retrieve it with kubectl get secret.

6. The password stored in the secret is base64-encoded. So, first decode it and then log in to Jenkins's web UI.

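As an illustration of the decoding step (using the MySQL password redhat from the kustomization.yaml earlier as sample data; the Jenkins secret holds a different, generated value):

```shell
# "cmVkaGF0" is the base64 encoding of "redhat".
echo 'cmVkaGF0' | base64 --decode   # prints: redhat
```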

7. Now, we can log in and access Jenkins, running on our EKS cluster and deployed using Helm.



AWS Fargate

AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.

Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You only pay for the resources required to run your containers, so there is no over-provisioning and paying for additional servers. Fargate runs each task or pod in its own kernel providing the tasks and pods their own isolated compute environment. This enables your application to have workload isolation and improved security by design. This is why customers such as Vanguard, Accenture, Foursquare, and Ancestry have chosen to run their mission-critical applications on Fargate.

If we want to create a cluster on EKS using AWS Fargate, we can use the following code snippet.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig


metadata:
  name: fargate-lwcluster
  region: ap-southeast-1



fargateProfiles:
  - name: fargate-default
    selectors:
      - namespace: kube-system
      - namespace: default
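As with the earlier cluster, this config is launched with eksctl (the file name fargate-cluster.yml is an assumption):

```shell
eksctl create cluster -f fargate-cluster.yml
```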

I hope this article was beneficial for you!

Connect with me on LinkedIn.