Kubernetes Multi-Cloud Cluster
🔷What Is Multi-cloud Kubernetes?
Multi-cloud Kubernetes is the deployment of Kubernetes over multiple cloud services and providers. Kubernetes can also be a way for organizations to efficiently manage multi-cloud architecture.
Most enterprises are already using a multi-cloud strategy. By combining cloud services, organizations can choose the best services for them at the lowest cost. However, multi-cloud can get complex, and this is where Kubernetes comes in. By standardizing workloads on Kubernetes, and leveraging Kubernetes features like Federation, organizations can deploy large scale workloads on multiple clouds with central control.
🔷Multi-cloud Kubernetes Use Cases:-
There are many reasons an organization might adopt a multicloud Kubernetes deployment. Below are a few of the most common reasons.
1. Cloud bursting
Cloud bursting traditionally referred to the use of cloud resources to cover excess workload demands, beyond the capacity of on-premise systems. In a multicloud infrastructure, “bursting” involves using resources from one cloud to supplement the resources of another. This is needed when one cloud offers a better solution, or lower costs for high performance or high throughput workloads.
2. Disaster recovery, backup, and archive
You can use multicloud resources for disaster recovery to achieve greater resilience and availability than in a single cloud infrastructure. By spreading recovery resources across clouds, you reduce the chances that a cloud vendor becomes your single point of failure.
In this type of setup, one cluster is typically responsible for read/write operations and secondary clusters may be read-only. If a host goes down, workloads can be failed over to the recovery resource on another provider.
3. Multi-site active-active
An active-active configuration is similar to a disaster recovery or backup configuration with the exception that all clusters are read/write. This enables you to keep clusters synchronized in real-time and to distribute workloads continuously and immediately. This method is useful for mission-critical applications and services, such as user validation.
To do this practical there are some prerequisites:-
- Account On Azure Cloud
- Account On AWS Cloud
For creating accounts on Azure and AWS, refer to the links below:
Azure Free Account: Learn How to Register for Free (k21academy.com)
Create and activate an AWS account (amazon.com)
So let's start the practical demo--------------------------->
✔ Step1:- To implement this practical we have to create a Kubernetes master-slave cluster. For this, we first have to launch instances for the master and the slaves.
So here I am launching 1 master node and 2 slave nodes on AWS cloud and 1 virtual machine on Azure cloud. To do this I am using the AWS portal and the Azure portal, but you can also launch the same instances using Ansible / Terraform.
First I am going to launch instance on AWS cloud--------->
👉Steps to launch instance:-
# Login to AWS portal --> Go to ec2 service --> Click on option called Launch Instances
# Select the AMI type that you want to launch, here I am using Amazon Linux--->
# select the instance type----------->
# put the count of instance that you want to launch here I am launching 3 Instances--------->
# Add storage to launch the instance---------------->
# Add Security Group and write Rules according to your need------------------>
# Add a Tag to instance------------------>
# Now Review and Launch the Instances------------->
Here you can see 1 master and 2 slaves are successfully launched.
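As an alternative to clicking through the portal, the same three instances can be launched from the AWS CLI. Below is a minimal sketch; the AMI ID, key pair, security group ID, and instance type are placeholders you would replace with your own values:

```shell
# Launch 3 Amazon Linux instances (1 master + 2 slaves) in one call.
# ami-xxxxxxxx, my-key, and sg-xxxxxxxx are placeholders -- substitute your own.
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t2.medium \
  --count 3 \
  --key-name my-key \
  --security-group-ids sg-xxxxxxxx \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=k8s-node}]'
```

The same steps could equally be expressed as an Ansible playbook or a Terraform resource, as mentioned above.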
✔Step2:- Now let's launch 1 slave on Azure Cloud so that we can achieve a multi-cloud setup.
For this go to azure cloud and create new virtual machine and resource group
# Fill all the required information like Resource group, virtual machine name, region, Image --> here I am selecting the RedHat Image
# we also have to set Authentication, So here I am selecting password based authentication
# Now choose the disk type according to requirement--------------->
# Now let's do the networking part: here you have to select your subnet, virtual network and public IP so that other systems can connect to it---------->
# In the next step we have the Management and Advanced settings; here I have kept the default options
# In next page we have to add Tags to our Virtual machine---------->
# Now review all the options and Launch the Virtual machine-------------->
Here all our required instances are launched successfully.
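The Azure VM can also be created from the Azure CLI instead of the portal. A minimal sketch follows; the resource group name, VM name, image URN, and credentials are placeholders (you can list valid RedHat image URNs with `az vm image list --publisher RedHat --all`):

```shell
# Create a resource group and a RedHat VM with password-based authentication.
# k8s-rg, k8s-slave3, the image URN, and the credentials are placeholders.
az group create --name k8s-rg --location eastus
az vm create \
  --resource-group k8s-rg \
  --name k8s-slave3 \
  --image RedHat:RHEL:8-lvm:latest \
  --admin-username azureuser \
  --admin-password 'ChangeMe123!'
```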
✔Step3:- Configure Kubernetes master node.
# Steps to configure master node------------------->
# Kubernetes uses Docker as its container runtime, so first we have to install Docker on the master node and also start and enable the Docker service--->
commands:-
yum install docker -y
systemctl start docker
systemctl enable docker
# configure cgroup driver----->
vim /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
# after configuring cgroup driver we need to restart the docker service-------->
command:- systemctl restart docker
# configure kubernetes repo to install commands like kubectl, kubeadm, kubelet etc------>
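Create /etc/yum.repos.d/k8s.repo with the contents below (the same repo file is used again for the slaves in Step 4):

```
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
```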
Let's learn a little bit about the kube commands:-
kubeadm : Kubeadm is a tool built to provide kubeadm init and kubeadm join as best-practice “fast paths” for creating Kubernetes clusters.
kubectl : You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs
kubelet : The kubelet is the primary "node agent" that runs on each node. It can register the node with the apiserver using a mechanism such as the hostname.
# install kubelet, kubeadm & kubectl--------------->
command:- yum install kubeadm kubectl kubelet -y
# start and enable kubelet service---------->
command:- systemctl enable kubelet --now
# kubeadm helps pull the Docker images needed to launch the control-plane pods------------>
command:- kubeadm config images pull
# install the iproute-tc tool, it helps manage traffic control on the cluster--------------->
command:- yum install iproute-tc -y
# set the iptables---------------------------->
commands:-
vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
sysctl --system
# initialize the master node-------------------------->
command:- kubeadm init --control-plane-endpoint "PUBLICIP:PORT" --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem
Note :
** pod-network-cidr= IP range (for pods inside the slave nodes)
** Control plane endpoint = assign the cluster with a public IP with port
** ignore-preflight-errors= Ignore preflight warnings about insufficient CPU and memory
commands:-
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
To connect the master and slave nodes we use Flannel. Flannel acts as a DHCP server as well as a router in the cluster. It creates connections between the pods running in the cluster, working as an overlay on top of the underlying host network.
# download and configure the flannel----------------------->
command:- kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# token generation------------------------>
command:- kubeadm token create --print-join-command
This command will give you a token which helps you connect all slaves to the master.
# Now we have to provide this token to all slaves so that they can connect to master
✔Step4:- Configure Slave nodes
Now we have to apply almost the same commands on all slaves, so run all the commands below on each slave.
To do this you can take the help of automation tools like ansible
commands:-
yum install iproute-tc -y #Installing iproute-tc
yum install docker -y #Install Docker
vim /etc/docker/daemon.json #Changing the driver
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
systemctl restart docker #Restart docker
systemctl enable docker --now #enable Docker
#Kubernetes Repository:-
vim /etc/yum.repos.d/k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
#Installing the required packages:-
yum install kubeadm kubectl kubelet -y
#Enabling kubelet service:-
systemctl enable kubelet --now
#Configure the iptables:-
vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
sysctl --system
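If you don't want to use Ansible, the slave setup above can also be pushed to all nodes from a single shell loop over SSH. This is a sketch; the IP addresses, key file, and remote user are placeholders:

```shell
# Placeholder IPs -- replace with your AWS and Azure slave addresses.
for host in 1.2.3.4 5.6.7.8 9.10.11.12; do
  ssh -i my-key.pem ec2-user@"$host" \
    'sudo yum install -y docker iproute-tc && sudo systemctl enable docker --now'
done
```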
After running all the above commands your slaves are ready to connect to master
Now we have to provide the token to each slave node which we already generated in our master node
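The join command printed by the master looks roughly like the sketch below; the IP, token, and hash are placeholders, so always paste the exact output of `kubeadm token create --print-join-command` instead:

```shell
# Run on each slave; all values below are placeholders.
kubeadm join PUBLICIP:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<hash>
```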
Now our cluster is ready let's check ----------->
command:- kubectl get nodes
Here you can see all slave nodes are connected to master (AWS slave & Azure slave)
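The output should look roughly like this (node names, ages, and versions will differ in your cluster; these values are illustrative only):

```
NAME         STATUS   ROLES                  AGE   VERSION
master       Ready    control-plane,master   15m   v1.21.x
aws-slave1   Ready    <none>                 6m    v1.21.x
aws-slave2   Ready    <none>                 6m    v1.21.x
azure-slave  Ready    <none>                 4m    v1.21.x
```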
So here we completed our task of setting up a multi-cloud Kubernetes cluster.