Secure AKS Deployments using GitHub Actions: Leveraging OpenID Connect (OIDC) and Workload Identity
This document outlines how to securely deploy an application into a private AKS cluster through GitHub Actions and OIDC-based federated credential authentication.
Why use workload identity federation?
Workload identity federation lets you configure a user-assigned managed identity or app registration in Microsoft Entra ID (formerly Azure Active Directory) to trust tokens from an external identity provider (IdP), such as GitHub.
Your software workload (in this example, the application pod) uses that access token to access the Microsoft Entra protected resources to which the workload has been granted access. This eliminates the maintenance burden of manually managing credentials and removes the risk of leaking secrets or letting them expire.
In this example, I have created two types of federated credentials:
1. Federated Credential for GitHub Actions for Azure Authentication
To establish a secure authentication mechanism between GitHub and Azure, a federated identity credential needs to be created that allows GitHub Actions (via a Microsoft Entra ID application) to authenticate to Azure without storing secrets.
[Note: Azure access tokens issued during OIDC-based login have limited validity. An access token issued to a service principal expires after 1 hour by default; with managed identities, it is 24 hours.]
2. Federated Credential for AKS Workload Identity:
Microsoft Entra Workload ID uses Service Account Token Volume Projection (that is, a projected Kubernetes service account token) to enable pods to use a Kubernetes identity. A Kubernetes token is issued, and OIDC federation enables Kubernetes applications to access Azure resources securely with Microsoft Entra ID, based on annotated service accounts.
TL;DR: you get a JWT from Kubernetes, pass that JWT to Microsoft Entra ID, and it gives you back a new JWT backed by a Microsoft Entra identity.
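To make the TL;DR concrete, here is a sketch of the token-exchange request that the workload (or the Azure SDK on its behalf) sends to Microsoft Entra ID. The tenant/client IDs and token value are placeholders; in a real pod, the workload identity webhook injects the AZURE_TENANT_ID, AZURE_CLIENT_ID, and AZURE_FEDERATED_TOKEN_FILE environment variables.

```shell
#!/bin/sh
# Placeholder values; in a real pod these env vars are injected by the
# workload identity mutating webhook.
AZURE_TENANT_ID="00000000-0000-0000-0000-000000000000"
AZURE_CLIENT_ID="11111111-1111-1111-1111-111111111111"
AZURE_FEDERATED_TOKEN_FILE="/var/run/secrets/azure/tokens/azure-identity-token"

# The projected Kubernetes service-account token (the first JWT).
# In a real pod: KUBE_JWT=$(cat "$AZURE_FEDERATED_TOKEN_FILE")
KUBE_JWT="<projected-service-account-jwt>"

# Body of the POST to:
#   https://login.microsoftonline.com/$AZURE_TENANT_ID/oauth2/v2.0/token
# exchanging the Kubernetes JWT for an Entra ID access token.
BODY="grant_type=client_credentials"
BODY="$BODY&client_id=$AZURE_CLIENT_ID"
BODY="$BODY&scope=https%3A%2F%2Fmanagement.azure.com%2F.default"
BODY="$BODY&client_assertion_type=urn%3Aietf%3Aparams%3Aoauth%3Aclient-assertion-type%3Ajwt-bearer"
BODY="$BODY&client_assertion=$KUBE_JWT"
echo "$BODY"
# Send with:
# curl -s -X POST -d "$BODY" "https://login.microsoftonline.com/$AZURE_TENANT_ID/oauth2/v2.0/token"
```

This is the same exchange the Azure Identity SDKs perform internally when they detect the injected environment variables.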
Let’s see the steps to implement the above solution.
Prepare the pre-requisites for your GitHub Action workflow
[Note: In this example, I’ve used an App registration for both GitHub authentication and workload identity authentication. For workload identity authentication, you can use a managed identity as well.]
1. Assign AKS related permissions to the App registration
# Retrieve the App registration's application (client) ID, used as --assignee below:
$SPobjectid = az ad app list --display-name "<APP-NAME>" --query "[0].appId" -o tsv
# Get the ResourceId of your AKS cluster
$aksResourceId=(az aks show --name $AKScluster --resource-group $aksrg --query id -o tsv)
# Assign roles to the service principal: Reader on the subscription,
# Cluster Admin on AKS, and AcrPull/AcrPush on ACR
az role assignment create --assignee $SPobjectid --role "Reader" --scope /subscriptions/$SubscriptionID/
az role assignment create --assignee $SPobjectid --role "Azure Kubernetes Service Cluster Admin Role" --scope $aksResourceId
az role assignment create --assignee <APP-ID> --role "AcrPull" --scope /subscriptions/<SUBSCRIPTION-ID>/resourceGroups/<RESOURCE-GROUP>/providers/Microsoft.ContainerRegistry/registries/<ACR-NAME>
az role assignment create --assignee <APP-ID> --role "AcrPush" --scope /subscriptions/<SUBSCRIPTION-ID>/resourceGroups/<RESOURCE-GROUP>/providers/Microsoft.ContainerRegistry/registries/<ACR-NAME>
[Note: you can assign "Azure Kubernetes Service Contributor Role" instead if you want to limit the scope]
2. Create federated credentials for GitHub Actions authentication
Navigate to the Azure portal and open your App registration, select "Certificates & secrets," and then "Federated credentials." Here, click on "+ Add credential."
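The same credential can be created from the CLI with az ad app federated-credential create. The credential name, org/repo, and branch below are placeholders; the subject claim must match the workflow trigger (here, pushes to main).

```shell
#!/bin/sh
# Federated credential definition for GitHub Actions OIDC.
# <ORG>/<REPO> are placeholders for your GitHub organization and repository.
cat > github-fed-cred.json <<'EOF'
{
  "name": "github-actions-main",
  "issuer": "https://token.actions.githubusercontent.com",
  "subject": "repo:<ORG>/<REPO>:ref:refs/heads/main",
  "audiences": ["api://AzureADTokenExchange"]
}
EOF
cat github-fed-cred.json
# Create it on the App registration (requires az login):
# az ad app federated-credential create --id "<APP-ID>" --parameters github-fed-cred.json
```

Other subject formats exist for pull requests (repo:<ORG>/<REPO>:pull_request) and environments; the subject in the credential must match the one GitHub puts in its OIDC token.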
3. Add the secrets and variables into GitHub
Open repo Settings -> Secrets and variables -> Actions.
· CLIENT_ID
· TENANT_ID
· SUBSCRIPTION_ID
· ACR
· AKS_RG
· AKS_CLUSTER
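If you prefer the command line, the same values can be stored with the GitHub CLI. This sketch only prints the commands; in a real setup you would run them after gh auth login, substituting real values.

```shell
#!/bin/sh
# Sketch: setting the six repository secrets with the gh CLI.
# Values are placeholders.
CMDS=""
for s in CLIENT_ID TENANT_ID SUBSCRIPTION_ID ACR AKS_RG AKS_CLUSTER; do
  CMDS="$CMDS
gh secret set $s --body '<value-for-$s>'"
done
echo "$CMDS"
```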
4. Create an OIDC discovery endpoint for your Azure Kubernetes Service (AKS) cluster
az feature register --namespace "Microsoft.ContainerService" --name "EnableWorkloadIdentityPreview"
You can update an AKS cluster using the az aks update command with the --enable-oidc-issuer and --enable-workload-identity parameters to enable the OIDC issuer and workload identity. The following example updates a cluster named $AKScluster:
az aks update -g $AKSrg -n $AKScluster --enable-oidc-issuer --enable-workload-identity
To get the OIDC issuer URL and save it to an environment variable, run the following command:
$AKS_OIDC_ISSUER=(az aks show -n $AKScluster -g $AKSrg --query "oidcIssuerProfile.issuerUrl" -o tsv)
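You can sanity-check the issuer by fetching its OIDC discovery document, which lives at a well-known path under the issuer URL. The example issuer below is a placeholder showing the typical AKS format; substitute the $AKS_OIDC_ISSUER value from the command above.

```shell
#!/bin/sh
# The issuer URL returned by `az aks show` ends with a trailing slash.
# Placeholder issuer in the typical AKS format:
AKS_OIDC_ISSUER="https://eastus.oic.prod-aks.azure.com/<TENANT-ID>/<UUID>/"
DISCOVERY_URL="${AKS_OIDC_ISSUER}.well-known/openid-configuration"
echo "$DISCOVERY_URL"
# Fetch it (requires network access); the "issuer" field should echo the URL back:
# curl -s "$DISCOVERY_URL"
```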
[Note: When deploying using a manifest, this ServiceAccount will be used by the pods within a specific namespace. This way you can restrict your workload's access within AKS.
Also, I’ll map the App registration to the same ServiceAccount, which will be used for workload identity authentication.]
Ensure your Kubernetes ServiceAccount is annotated with azure.workload.identity/client-id set to the application (client) ID of your App registration (or the client ID of your managed identity). This annotation establishes the link between the Kubernetes ServiceAccount and the Entra identity.
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    # Application (client) ID of the App registration or managed identity
    azure.workload.identity/client-id: "$SPobjectid"
  name: workload-identity-sa
  namespace: my-namespace
Use the az identity federated-credential create command to create the federated identity credential between the managed identity, the service account issuer, and the subject.
az identity federated-credential create --name "${FEDERATED_IDENTITY_CREDENTIAL_NAME}" --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${AKSrg}" --issuer "${AKS_OIDC_ISSUER}" --subject system:serviceaccount:"${SERVICE_ACCOUNT_NAMESPACE}":"${SERVICE_ACCOUNT_NAME}" --audience api://AzureADTokenExchange
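Since this walkthrough uses an App registration rather than a managed identity, the equivalent command is az ad app federated-credential create. The credential name is a placeholder; the namespace and ServiceAccount name match the manifest above, and the subject must match them exactly.

```shell
#!/bin/sh
# Falls back to a placeholder if $AKS_OIDC_ISSUER is not set in this shell.
AKS_OIDC_ISSUER="${AKS_OIDC_ISSUER:-<ISSUER-URL>}"
cat > aks-fed-cred.json <<EOF
{
  "name": "aks-workload-identity",
  "issuer": "${AKS_OIDC_ISSUER}",
  "subject": "system:serviceaccount:my-namespace:workload-identity-sa",
  "audiences": ["api://AzureADTokenExchange"]
}
EOF
cat aks-fed-cred.json
# Create it on the App registration (requires az login):
# az ad app federated-credential create --id "<APP-ID>" --parameters aks-fed-cred.json
```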
The same step can be performed from the Azure portal as well.
5. Create a Kubernetes Role or ClusterRole & ClusterRoleBinding
Role: Defines permissions within a specific namespace.
ClusterRole: Defines cluster-wide permissions.
When to Use:
Use a Role if access is needed only within a specific namespace (e.g., limited to certain workloads).
Use a ClusterRole if access is needed across all namespaces (e.g., accessing secrets from different namespaces or cluster-wide operations).
Using this method, you can restrict your ServiceAccount's access inside Kubernetes.
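As a sketch of the namespace-scoped option, a Role plus RoleBinding limited to my-namespace might look like the following (the names namespace-deployer and sp-ns-binding are hypothetical; substitute your service principal's object ID):

```yaml
# Namespace-scoped alternative: grants access only within my-namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: namespace-deployer
  namespace: my-namespace
rules:
- apiGroups: ["", "apps"]   # core resources (Services etc.) and Deployments
  resources: ["*"]
  verbs: ["get", "list", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sp-ns-binding
  namespace: my-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: namespace-deployer
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: "<SERVICE-PRINCIPAL-OBJECT-ID>"
```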
For example, to create a ClusterRole that grants get, list, and create access to all resources in the core and apps API groups, you can define the following YAML (avoid naming it cluster-admin, which would collide with the built-in ClusterRole of that name):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: app-deployer
rules:
- apiGroups: ["", "apps"]
  resources: ["*"]
  # update/patch allow re-deploying objects that already exist
  verbs: ["get", "list", "create", "update", "patch"]
Create the Kubernetes ClusterRoleBinding, using the service principal's object ID as the User name:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: sp-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: app-deployer
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: "<SERVICE-PRINCIPAL-OBJECT-ID>"
Now that all the security and pipeline pre-requisites are in place, let's look at the GitHub Actions workflows.
6. GitHub Action workflow
name: Build and Push Docker Image
on:
  push:
    branches:
      - main
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Checkout Code
        uses: actions/checkout@v4
      - name: Login to Azure using OIDC
        uses: azure/login@v1
        with:
          client-id: ${{ secrets.CLIENT_ID }}
          tenant-id: ${{ secrets.TENANT_ID }}
          subscription-id: ${{ secrets.SUBSCRIPTION_ID }}
          allow-no-subscriptions: true
      - name: Login to ACR
        run: |
          az acr login --name ${{ secrets.ACR }}
      - name: Build and Tag Docker Image
        run: |
          docker build -t ${{ secrets.ACR }}.azurecr.io/my-app:${{ github.sha }} .
      - name: Push Image to ACR
        run: |
          docker push ${{ secrets.ACR }}.azurecr.io/my-app:${{ github.sha }}
name: AKS Deployment
on:
  push:
    branches:
      - main
jobs:
  AKS-Deployment:
    runs-on: ubuntu-latest
    permissions:
      id-token: write # Required for OIDC authentication
      contents: read
    steps:
      - name: Checkout Code
        # Required so the files under manifests/ are available to k8s-deploy
        uses: actions/checkout@v4
      - name: 'Az CLI Login'
        uses: azure/login@v1
        with:
          client-id: ${{ secrets.CLIENT_ID }}
          tenant-id: ${{ secrets.TENANT_ID }}
          subscription-id: ${{ secrets.SUBSCRIPTION_ID }}
      - name: 'Run basic AZ CLI command'
        run: az group list
      - name: Setup kubectl
        uses: azure/setup-kubectl@v3
      - name: Setup kubelogin
        uses: azure/use-kubelogin@v1
        with:
          kubelogin-version: 'v0.0.26'
      - name: Set AKS context
        id: set-context
        uses: azure/aks-set-context@v3
        with:
          resource-group: ${{ secrets.AKS_RG }}
          cluster-name: ${{ secrets.AKS_CLUSTER }}
          admin: 'false'
          use-kubelogin: 'true'
      - uses: Azure/k8s-deploy@v5
        with:
          resource-group: ${{ secrets.AKS_RG }}
          name: ${{ secrets.AKS_CLUSTER }}
          action: deploy
          strategy: basic
          private-cluster: true
          manifests: |
            manifests/backend-deployment.yaml
            manifests/backend-service.yaml
            manifests/frontend-deployment.yaml
            manifests/frontend-service.yaml
A sample deployment manifest using the workload identity ServiceAccount:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        azure.workload.identity/use: "true"
    spec:
      serviceAccountName: workload-identity-sa
      containers:
        - name: my-container
          image: <ACR-NAME>.azurecr.io/my-app:${{ github.sha }}
[Note: For applications using workload identity, you must add the label azure.workload.identity/use: "true" to the pod spec. This moves workload identity to a fail-closed mode, giving consistent and reliable behavior for pods that need it; without the label, such pods fail after they are restarted.]
With workload identity enabled in AKS, your workloads can authenticate to Azure resources (on behalf of the ServiceAccount) without using a secret-based authentication method.
If I create an AKS federated credential for my service principal, do I still need to assign a Role/ClusterRole and binding within AKS?
Yes. Even after creating a federated credential for your service principal in Azure Kubernetes Service (AKS), you still need to assign an appropriate Kubernetes Role and RoleBinding (or ClusterRole and ClusterRoleBinding) within the AKS cluster. The federated credential handles authentication of your service principal with Microsoft Entra ID (formerly Azure Active Directory), allowing it to acquire tokens and authenticate against Azure resources. However, Kubernetes operates its own Role-Based Access Control (RBAC) system to manage permissions within the cluster.
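One way to check the Kubernetes side independently of Azure is kubectl auth can-i with impersonation. With Kubernetes RBAC, AKS identifies the service principal as a User named by its object ID, so without a (Cluster)RoleBinding the check answers "no". This sketch only prints the command; run it against the cluster after az aks get-credentials.

```shell
#!/bin/sh
# Placeholder object ID; use the service principal's object ID from Entra ID.
SP_OBJECT_ID="<SERVICE-PRINCIPAL-OBJECT-ID>"
CHECK="kubectl auth can-i create deployments --as=$SP_OBJECT_ID -n my-namespace"
echo "$CHECK"
# Against a live cluster, this prints "yes" only once a matching
# RoleBinding or ClusterRoleBinding exists for that User.
```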
What if my application (service principal) has the "Azure Kubernetes Service Cluster Admin Role" — do I still need to create a federated credential for AKS workload identity?
Assigning your application (service principal) the "Azure Kubernetes Service Cluster Admin Role" grants it administrative privileges over the AKS cluster, enabling it to perform management operations such as deploying manifests. However, if your workloads (pods) running within the AKS cluster need to access other Azure resources (e.g., Azure Key Vault, ACR, storage accounts), you still need to configure Microsoft Entra Workload ID.
Microsoft Entra Workload ID allows pods to authenticate to Entra ID and access Azure resources without managing secrets. This involves creating a federated credential that links a managed identity (or app registration) to the Kubernetes service account used by your pods. This setup enables the pods to acquire tokens from Entra ID, facilitating secure access to the required resources.