Multi-cloud experiment with Azure Arc Kubernetes and Raspberry Pi cluster

Overview

Multi-cloud refers to combining several different clouds to host services. These can be public or private clouds, but also on-premises infrastructures. The benefits of multi-cloud are various: reliability, redundancy and potential cost optimization are often cited.

However, one of the key benefits of multi-cloud is the freedom to choose the service that best matches your actual requirements.

Those requirements are determined by many aspects, including business, features, privacy and SLAs. Maybe Google supports a particular feature on its Kubernetes service which integrates perfectly with another Google AI inference service that you are using. However, most of your backend, including monitoring and governance solutions, is on Azure.

Multi-cloud allows you to choose the best match for your services.


Experimenting with Azure Arc on an on-premises Kubernetes cluster

In this article, we experiment with a multi-cloud setup, using Microsoft Azure Arc with an on-premises Raspberry Pi-based Kubernetes cluster.

In this simple tutorial, we will learn how to:

  • set up a Kubernetes cluster mixing Raspberry Pis and a general-purpose computer
  • create an Azure Arc environment and connect your Kubernetes cluster to it
  • visualize your Kubernetes resources from Azure Portal
  • enable Azure Log Analytics for real-time monitoring on your cluster

This tutorial is based on the MicroK8s distribution, but any CNCF-certified distribution should be suitable.

Restrictions on ARM64 architectures (as of November 2021)

Azure Arc-enabled Kubernetes works with any Cloud Native Computing Foundation (CNCF) certified Kubernetes cluster. When connecting a Kubernetes cluster to Azure Arc, several pods are deployed in an azure-arc namespace to handle connectivity to Azure.

However, those pods only support the linux/amd64 architecture.

Thus, as Raspberry Pis are based on the arm64 architecture, your cluster must contain at least one node supporting the amd64 architecture to run those pods (and more for resiliency). Kubernetes master node(s) can remain on Raspberry Pi machines.
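A quick way to verify this requirement is to count nodes per CPU architecture. The sketch below runs the check over a fabricated sample of node listings; on a live cluster you would pipe the real kubectl output instead.

```shell
# Count amd64 nodes from "NAME ARCH" lines. On a live cluster, feed it with:
#   kubectl get nodes --no-headers \
#     -o custom-columns='NAME:.metadata.name,ARCH:.status.nodeInfo.architecture'
count_amd64() {
  awk '$2 == "amd64" { n++ } END { print n+0 }'
}

# Sample listing mirroring the cluster used in this article (made-up capture)
count_amd64 <<'EOF'
pizza1  arm64
pizza2  arm64
pizza3  arm64
gandalf amd64
EOF
```

If the result is 0, the azure-arc pods will have nowhere to run and the connection step later in this article will time out.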

About Azure Arc

Azure Arc is a multi-cloud management layer for all your resources, wherever they are: Azure, AWS, Google Cloud, on-premises, etc.

Azure Arc-enabled Kubernetes is an Azure resource to which you can connect your existing Kubernetes cluster, enabling Azure's management and governance capabilities on that cluster.

The objective is to easily organize, govern and secure all your Azure and non-Azure resources across datacenters.

More details at https://aka.ms/azure-arc


Setup Kubernetes cluster

In this step, we will create a Kubernetes cluster on a set of machines, using MicroK8s distribution.

If you already have a running Kubernetes cluster, with at least one node supporting linux/amd64 containers, you can jump to the next section.

Prerequisites

  • At least one Raspberry Pi 3 or 4 with at least 2GB of RAM and an 8GB (or larger) MicroSD card
  • At least one computer with an AMD64 architecture and at least 8GB of RAM
  • On all machines, a Linux distribution with Snap installed.

In my case, I used Ubuntu 20.04 LTS for ARM64 and Debian Bullseye for AMD64.

Note: Ubuntu Core does not fully support MicroK8s yet as it requires classic confinement.

Install MicroK8S on each node

This tutorial relies on the MicroK8s Kubernetes distribution, but any CNCF-certified distribution should be suitable.

On each machine, install MicroK8s with snap, and wait for installation completion:

$ sudo snap install microk8s --classic --channel=1.22/stable

$ microk8s status --wait-ready        

Optionally, Snap channels let you select the Kubernetes version you want to use; you can list the available channels with:

$ snap info microk8s        

To allow your user to run the microk8s command, you need specific permissions:

$ sudo usermod -aG microk8s $USER

$ sudo chown -f -R $USER ~/.kube        

For the group update to take effect, you can logout/login or simply run:

$ su - $USER        

Once installation is completed on every node, we can start to join nodes together and build up the cluster.

In my example, I have 4 machines:

Name      Hardware          Architecture
pizza1    RaspberryPi 4     arm64
pizza2    RaspberryPi 4     arm64
pizza3    RaspberryPi 4     arm64
gandalf   AMD machine       amd64        


Build the Kubernetes cluster with MicroK8s

Choose a master node for the cluster to host the Kubernetes control plane. In my case, pizza1 will be the master.

Then, for each node you want to join, run the following two-step sequence:

  1. From your master node, ask for a token allowing a remote node to join the cluster:

pizza1$ microk8s add-node
From the node you wish to join to this cluster, run the following:
microk8s join 192.168.1.121:25000/4f9cae1d4f9b9ea0db5a8a/593e114a5
...

2. As proposed, from the node you wish to join to this cluster, run the joining command:

pizza2$ microk8s join 192.168.1.121:25000/4f9cae1d4f9b9ea0db5a8a/593e114a5        
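When joining several nodes, it can help to script the token handling. As a hedged sketch, the helper below extracts the `microk8s join ...` line from the `add-node` output (a sample is inlined; on the master you would pipe `microk8s add-node` directly):

```shell
# Extract the first "microk8s join ..." line from add-node output
extract_join_cmd() {
  grep -m1 '^microk8s join '
}

# Sample add-node output (token value is the example from above)
extract_join_cmd <<'EOF'
From the node you wish to join to this cluster, run the following:
microk8s join 192.168.1.121:25000/4f9cae1d4f9b9ea0db5a8a/593e114a5
...
EOF
```

The extracted command could then be run on each joining node, for example over SSH.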


Afterwards, you should have a ready-to-use Kubernetes cluster:

$ microk8s kubectl get nodes -o wide

NAME    STATUS  AGE  VERSION         OS-IMAGE             KERNEL-VERSION
pizza2  Ready   19m  v1.22.2-3+08d7  Ubuntu 20.04.3 LTS   5.4.0-1042-raspi
pizza1  Ready   22m  v1.22.2-3+08d7  Ubuntu 20.04.3 LTS   5.4.0-1042-raspi
gandalf Ready   12m  v1.22.2-3+9ad   Debian GNU/Linux 11  5.10.0-9-amd64
pizza3  Ready   18m  v1.22.2-3+08d7  Ubuntu 20.04.3 LTS   5.4.0-1042-raspi

Note: MicroK8s embeds the kubectl command and configuration. If you prefer to use the host's kubectl command, you can export the kubeconfig with:

$ microk8s config >> ~/.kube/config        

You can learn more from https://microk8s.io/docs/working-with-kubectl


Connect your Kubernetes cluster to Azure Arc

In this section, we will connect our local Kubernetes cluster to Azure Arc using Azure CLI with connectedk8s extension.

This will create a Kubernetes - Azure Arc resource, which will be a representation of your cluster from the Azure perspective.

Install Azure CLI

You should install Azure CLI on a linux/amd64 machine as it is a requirement of the connectedk8s extension (which relies on Helm).

If you really want to run it from a Raspberry Pi, you will have to replace the helm binary in the Azure CLI package with its arm64 version. Obviously, other issues might then occur...

Installing Azure CLI is quite easy. You can simply run:

$ curl -L https://aka.ms/InstallAzureCli | bash

$ source $HOME/.bashrc        

Note: Other Azure CLI installation procedures are available here: https://aka.ms/azure-cli

Configure Azure CLI to connect Kubernetes to Azure Arc

Azure CLI requires the connectedk8s extension to enable the proper commands:

$ az extension add --name connectedk8s        

This new extension interacts with a set of new resource providers. Thus, you need to register them from your Azure CLI setup before using the connectedk8s commands:

$ az provider register --namespace Microsoft.Kubernetes
$ az provider register --namespace Microsoft.KubernetesConfiguration
$ az provider register --namespace Microsoft.ExtendedLocation

This process can take a few minutes to complete.

You can monitor the registration process until the RegistrationState is marked as Registered for each provider:

$ az provider show -n Microsoft.Kubernetes -o table
$ az provider show -n Microsoft.KubernetesConfiguration -o table
$ az provider show -n Microsoft.ExtendedLocation -o table        
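Rather than re-running `az provider show` by hand, you can poll until the state flips. The loop below is a sketch: `get_state` is a stand-in stub so the logic runs anywhere; on a real setup its body would be the actual az call.

```shell
# Stub for demonstration; on a real setup, replace its body with:
#   az provider show -n "$1" --query registrationState -o tsv
get_state() { echo "Registered"; }

# Poll a provider namespace until its registration state reads "Registered"
wait_registered() {  # usage: wait_registered <provider-namespace>
  while [ "$(get_state "$1")" != "Registered" ]; do
    echo "waiting for $1 ..."
    sleep 10
  done
  echo "$1 is Registered"
}

wait_registered Microsoft.Kubernetes
```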

Ensure your Kubernetes cluster is ready to connect

Kubeconfig file

Azure CLI will rely on your kubeconfig file to interact with your cluster.

If you are using MicroK8s, this configuration is packaged in the microk8s command, so you need to export it to the host and use it as the default context:

$ microk8s config >> $HOME/.kube/config
$ kubectl config get-contexts
$ kubectl config use-context microk8s        

Network Connectivity

A set of Azure Arc Agents will be deployed as Pods within your Kubernetes cluster. Those pods will sit in a dedicated Kubernetes namespace called 'azure-arc'.

Thus, a series of connections will be established between your cluster and Azure:

  • pull container images of Azure Arc agents from mcr.microsoft.com
  • fetch authentication ARM tokens from login.microsoftonline.com

If you have to configure your firewall for outbound connections, the full list of endpoints used is available in the Azure Arc documentation.

For those connections to complete, you also need to expose the following ports from your Internet gateway to one of your Kubernetes nodes:

  • TCP port 443 for HTTPS
  • TCP port 9418 for git

If your cluster is behind a proxy server, Azure Arc agents will not be able to reach Azure endpoints. In this case, you can set up a proxy with Azure CLI, as explained in the Azure Arc documentation.

DNS resolution

For the Azure Arc agents to resolve their public endpoints, a DNS service must be available within the cluster. For MicroK8s, you can easily set one up with:

$ microk8s enable dns        


Connect your Kubernetes cluster to Azure Arc

Now that all the prerequisites are checked, we can go straight to our objective: Connect our cluster to Azure Arc!

First, you have to be logged in:

$ az login        

If you have multiple subscriptions attached to your tenant, don't forget to select the subscription you want to use:

$ az account list
$ az account set --subscription <SUBSCRIPTION-ID>        

Create a resource group:

$ az group create --name test-kube-arc --location westeurope        

Create Kubernetes - Azure Arc resource and connect your cluster to it:

$ az connectedk8s connect \
 --name pizza \
 --resource-group test-kube-arc \
 --location westeurope \
 --tags type=microk8s location=HomeOven        

This command takes several minutes to complete, depending on your network bandwidth, as it pulls several container images from Microsoft registries.

Once completed, you can see your cluster on Azure:

$ az connectedk8s show --name pizza --resource-group test-kube-arc        

In case of timeout troubles...

The Kubernetes - Azure Arc resource on the Azure side will wait for the pods to be ready, with a timeout of about 20 minutes. If for some reason the pods are not ready when the timeout occurs, you will have to:

  1. Clean up resources on your Kubernetes cluster

$ helm list -A
$ helm uninstall azure-arc --debug        

If the previous command fails, you can "force it" by running:

$ kubectl delete all --all -n azure-arc
$ helm uninstall azure-arc --debug --no-hooks        

2. Clean up resources on Azure

$ az connectedk8s delete \
 --name pizza \
 --resource-group test-kube-arc        

3. Relaunch the az connectedk8s connect ... command

Already-downloaded container images remain on the host and will not be downloaded again, giving you a better chance of completing the process within the time limit.


Visualize your Kubernetes resources from Azure Portal

Azure Arc-enabled Kubernetes allows you to control external clusters like any other Azure resource.

For example, from Azure Portal, you can visualize workloads running in your local cluster:


To enable this, you need to create a service account with the proper role on your cluster:

$ kubectl create serviceaccount admin-user

$ kubectl create clusterrolebinding admin-user-binding \
 --clusterrole cluster-admin \
 --serviceaccount default:admin-user

$ SECRET_NAME=$(kubectl get serviceaccount admin-user -o jsonpath='{$.secrets[0].name}')

$ TOKEN=$(kubectl get secret ${SECRET_NAME} -o jsonpath='{$.data.token}' | base64 -d | sed $'s/$/\\\n/g')

$ echo $TOKEN        
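The pipeline above works because Kubernetes stores secret values base64-encoded, which is why the token must go through `base64 -d` before being usable. A minimal standalone illustration of that step (the token value here is fabricated):

```shell
# Kubernetes secret data is base64-encoded; decoding mirrors the
# "base64 -d" step of the pipeline above. SAMPLE is a made-up value.
SAMPLE=$(printf 'fake-bearer-token' | base64)
printf '%s' "$SAMPLE" | base64 -d
```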

From Azure Portal, navigate to your Kubernetes - Azure Arc resource and select Workloads on the left panel.

A bearer token is requested. You can provide the token generated previously.


Deploy Log Analytics Agents to get advanced real-time monitoring

From Azure Portal, navigate to your Kubernetes - Azure Arc resource, select Insights on the left panel and activate Log Analytics. You can create a new Log Analytics Workspace, or use an existing one (which is what I prefer, to avoid having many Workspaces out there).

Activating Log Analytics on your cluster will automatically trigger the deployment of OMSAgents on your cluster. Those agents will forward monitoring information and logs to Azure Log Analytics.

After a few minutes, you can use Azure monitor solutions:


Graph Queries

Finally, another interesting feature is running graph queries on your cluster, benefiting from the KQL query language:

$ az extension add --name resource-graph

$ QUERY="Resources | project name, location, type,
 kubernetes=properties.kubernetesVersion, clusterdistribution=tags.type
 | where type =~ 'microsoft.kubernetes/connectedclusters'"

$ az graph query -q "$QUERY"
{
 "count": 1,
 "data": [
  {
   "kubernetes": "1.22.2-3+08d70d9ea9965a",
   "location": "westeurope",
   "name": "pizza",
   "clusterdistribution": "microk8s",
   "type": "microsoft.kubernetes/connectedclusters"
  }
 ],
 "skip_token": null,
 "total_records": 1
}        
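The JSON response can also be post-processed in the shell. As a sketch, the helper below pulls the cluster names out of a captured `az graph query` response; the sample is inlined here, but in practice you would pipe the real command's output (jq would be cleaner, if available):

```shell
# Extract "name" values from az graph query JSON output using grep and sed
extract_names() {
  grep -o '"name": *"[^"]*"' | sed 's/.*"name": *"\([^"]*\)"/\1/'
}

# Sample response mirroring the output shown above
extract_names <<'EOF'
{
 "count": 1,
 "data": [
  { "kubernetes": "1.22.2-3+08d70d9ea9965a", "location": "westeurope", "name": "pizza" }
 ]
}
EOF
```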


Conclusion

Azure Arc is a great move toward the setup of real and effective Multi-cloud environments. It enables the governance and monitoring of all resources wherever they are, using well-known first-class management capabilities offered by Azure. Also, it can be an interesting solution for cloud migration projects.


#AzureArc #MSIgnite #Kubernetes #Raspberrypi #MicroK8s #Multicloud #Edge #Geek
