To Helm or Not to Helm !!!

Disclaimer: "The opinions expressed in this publication are those of the author. They do not reflect the opinions or views of the author's current or previous organizations."

Do you own, manage, or support (or plan to own, manage, or support) a production setup on a Kubernetes cluster (kubeadm on-premises, AWS, Azure, Google Cloud)?

Do your DevOps infrastructure/scripts still deal with low-level Kubernetes semantics (kubectl, Services, Deployments, ConfigMaps)?

Is it hard for you to find the exact versions, dependencies, and state of the application deployed on the Kubernetes cluster?

Does someone need to remember the set of pipelines (one per microservice) and the order in which they must be executed to deploy the application on a cluster?

Do you need to change the Kubernetes service deployment scripts every time there is a security fix?

Do developers and QA find it hard and time-consuming to deploy/rollback services (with the correct versions and dependencies) for test and development?

If the answer to any one of the above questions is yes, you should read this article.

What is Helm

Helm (https://helm.sh/) is “The package manager for Kubernetes”. 

The question is: why do I need a package manager for my Kubernetes-deployed applications? Before answering that, let us identify the different components of an application deployed on a Kubernetes cluster.

The diagram below shows a typical microservice-based application deployed on a Kubernetes cluster, and a cross-section of it at a low level.

The application uses a set of microservices to deliver its functionality. Each microservice deployment has many parts, such as the Docker container, configurations, Services, Secrets, namespaces, resource quotas, ingress controllers, ReplicaSets, pod security policies, and other software dependencies (e.g., Kafka).

[Diagram: Application decomposition]


Deployment of the subcomponents of a microservice requires (a sketch of this manual flow follows the list):

- Placeholder replacement in the definition files

- Pre-installation validation

- Execution of kubectl apply commands for each of the artifacts

- Rollback and redeploy if something fails
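
With plain kubectl, that flow usually ends up as a hand-rolled script. A minimal sketch of what such a script tends to look like (the file names and the IMAGE_TAG placeholder are hypothetical examples, not a real project):

```bash
#!/usr/bin/env bash
# Hand-rolled deployment sketch -- the kind of script Helm replaces.
set -euo pipefail

IMAGE_TAG="1.4.2"

# Placeholder replacement in the definition files
sed "s/__IMAGE_TAG__/${IMAGE_TAG}/g" deployment.tpl.yaml > deployment.yaml

# Pre-installation validation (plain --dry-run on older kubectl versions)
kubectl apply --dry-run=client -f deployment.yaml

# kubectl apply for each artifact, in the right order
kubectl apply -f configmap.yaml
kubectl apply -f service.yaml
kubectl apply -f deployment.yaml

# Rollback/redeploy by hand if something fails
kubectl rollout status deployment/my-service \
  || kubectl rollout undo deployment/my-service
```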

Things become more complicated when you have a huge number of microservices, each with a different set of infrastructure components. If done with traditional scripting, the deployment and management of Kubernetes-based services becomes a DevOps nightmare.

A few of the drawbacks of directly using Kubernetes commands to deploy and manage services on a cluster are:

1. Non-cohesive DevOps scripts/codebase: Instead of focusing on deployment semantics, the DevOps scripts now need to know the nitty-gritty of the services they deploy. This is the wrong level of abstraction and should be avoided; it results in complex deployment scripts that are hard to debug and maintain.

2. Unnecessary templatization coding effort: To deploy a microservice, the deployment parameters need to be substituted into the definition files. Writing custom code/scripts to achieve this templatization is an error-prone and resource-intensive process; Helm gives it to you for free, as sketched below.
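
An illustrative fragment of a templated Helm Deployment (the chart and value names are invented for the example; the `{{ .Values.* }}` entries are filled in from values.yaml or `--set` overrides):

```yaml
# templates/deployment.yaml -- illustrative Helm template fragment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-orders
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-orders
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-orders
    spec:
      containers:
        - name: orders
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```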

3. Hidden dependency and application semantics: Microservices depend on other microservices/components for their operation. If plain kubectl commands are used for deployment, these semantics stay hidden and undocumented in the DevOps infrastructure (or in someone's head, which is worse).

4. Dependency hell: Microservices that cannot state their exact dependencies lead to a dependency-hell problem, where at some point it becomes almost impossible to identify the exact application state in the Kubernetes cluster. With custom tooling, it is almost impossible to specify and codify the exact dependencies (with versions) of a particular microservice. This is a major gap when someone wants to quickly spawn an application environment for Dev/QA/Test. If you don't use an explicit, verbose mechanism for defining microservice dependencies (a sketch of one follows this list), you have probably heard these questions in your scrums and daily interactions with the QA team:

- Which versions of the microservices are deployed in the Kubernetes cluster?

- What is the QA-certification status of the application in an environment?

- Which version of a microservice was QA-certified against which version of its dependent services?

- Which pipelines, with which build-number inputs, do I need to trigger to push the application into other environments?

- Are the exact microservice versions in production the ones we validated in the QA cluster?

- Which microservice is breaking the data semantics? All unit tests for all the microservices are green, so why do I still have functional issues?
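
Helm's answer is to codify dependencies, with versions, in the chart itself. An illustrative sketch (the chart names, versions, and repository URLs are invented):

```yaml
# requirements.yaml (Helm 2) -- exact, versioned dependencies of a chart.
# In Helm 3 the same list lives under `dependencies:` in Chart.yaml.
dependencies:
  - name: kafka
    version: "0.20.8"
    repository: "https://charts.example.com/stable"
  - name: payments-service
    version: "1.4.2"
    repository: "https://charts.example.com/internal"
```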

5. Unable to scale: A deployment mechanism based on kubectl does not scale well as the number of microservices increases. A new microservice with different deployment requirements (an additional ConfigMap, TLS configuration, a new security requirement) will force the existing deployment scripts to change.

6. Artifact/code duplication: With a large number of microservices, separate repos, and separate teams owning and deploying these microservices, a lack of mandated structure leads to duplication of the artifacts and of the code that changes and updates them. This duplicated deployment mechanism results in unnecessary software rot, which degrades the overall deployment mechanism over time.

7. Inconsistent/missing versioning semantics: The major drawback of custom deployment scripts is missing versioning semantics at the application layer. A single microservice is composed of configuration and a container; whenever any moving part is updated, the service is in a new state, and that state should be versioned. Those versions should be easily accessible and auditable, but with custom tooling they are buried deep in the DevOps infrastructure and hard to retrieve.
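
Helm, by contrast, records every release and revision in the cluster, so the deployed state is queryable rather than buried in scripts (the release name here is illustrative):

```bash
helm list             # all releases in the cluster, with their chart versions
helm history orders   # revision history of one release
helm status orders    # current state of that release
```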

8. Difficult rollback/upgrade: With custom tooling, we need to implement the strategies for upgrading and rolling back the application ourselves. Implementing rollbacks and upgrades using just the Kubernetes command line takes a lot of effort and can be error-prone (codifying the ordering and the mutability becomes a challenge).
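
With Helm these become one-liners. A sketch with hypothetical release and chart names:

```bash
helm upgrade orders myrepo/orders-chart --version 1.4.3   # upgrade to a newer chart
helm rollback orders 2   # roll back to revision 2 (see `helm history orders`)
```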

Helm: the paradigm shift

Helm-based deployment involves the following components.

Helm Client: Client-side component to package, list, and execute the Helm charts.

Tiller: Server-side component (present in Helm 2) responsible for creating the Services, Pods, and other infrastructure components.

Chart: The artifact that provides a repeatable application installation and serves as a single point of authority. It contains the recipe for creating all the Kubernetes infrastructure elements and the associated versions: the fully templatized service and deployment definition files, plus a file containing default values. It can also contain the definitions of ingress controllers, ConfigMaps, cluster roles, role bindings, etc. A minimal sketch follows.
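
As an illustration, a minimal Chart.yaml for a hypothetical microservice chart (the templates/ directory next to it holds the templated Kubernetes definition files, and values.yaml holds the defaults):

```yaml
# Chart.yaml -- the identity and version of the package (values illustrative).
apiVersion: v1            # v1 for Helm 2 charts; v2 for Helm 3
name: orders-service
version: 1.4.2            # chart version; bump on every change
appVersion: "2.3.0"       # version of the application being packaged
description: Orders microservice and its Kubernetes infrastructure elements
```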

Helm repository: A repository is a place where charts can be collected and shared. It's like Perl's CPAN archive or the Fedora Package Database, but for Kubernetes packages [1].
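
Day-to-day interaction with a chart repository looks roughly like this (the repo name and URL are hypothetical):

```bash
helm repo add myrepo https://charts.example.com   # register the repository
helm repo update                                  # refresh the local index
helm search myrepo/orders-service                 # Helm 2; `helm search repo ...` in Helm 3
```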

The Helm flow can be summarized as follows:


[Diagram: Helm flow]


(1) The Docker image is pushed to the registry.

(2) The Helm package (chart) is created and pushed to the Helm repository.

(3) The helm install command is triggered for the deployment.

(4) Helm's Tiller takes care of the deployment orchestration needed to install, upgrade, or downgrade the application on the Kubernetes infrastructure. No custom script has to be written to perform parameter replacement and apply the artifacts to the cluster. A sketch of the whole flow follows.
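
Put together, the four steps above look roughly like this (the registry, chart, and repo details are hypothetical, and the chart upload assumes a ChartMuseum-style repository API):

```bash
# (1) Push the Docker image
docker push registry.example.com/orders:1.4.2

# (2) Package the chart and push it to the Helm repo
helm package ./orders-chart
curl --data-binary "@orders-chart-1.4.2.tgz" https://charts.example.com/api/charts

# (3) Trigger the install; (4) Tiller orchestrates the Kubernetes objects
helm repo update
helm install myrepo/orders-chart --name orders    # Helm 2 syntax
```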

Quick value realization

As you start using Helm to deploy applications on a Kubernetes cluster, you start reaping these benefits:

  1. Cleaner DevOps integration: no low-level code for deployment/upgrade/downgrade semantics.
  2. Crisp versioning semantics: using a separate repository for the artifacts facilitates traceability as well as ease of operation.
  3. More predictable deployment, upgrade, and rollback semantics.
  4. A less error-prone DevOps infrastructure, thanks to less code and less duplication, enabled by the strong templatization support Helm provides.
  5. A uniform and consistent way of injecting properties into the services (see the sketch below). We have seen deployments where application-level config/environment variables were scattered across the DevOps infra, increasing the overall chaos and unpredictability.
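
For example, all environment-specific configuration can be funneled through one consistent channel (the file, release, and value names are illustrative):

```bash
# values-qa.yaml carries the QA overrides; --set handles one-off tweaks.
helm install myrepo/orders-chart --name orders-qa \
  --values values-qa.yaml \
  --set image.tag=1.4.2-rc1
```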

Summary

Yes, anyone who runs, or plans to run, application deployments in a Kubernetes cluster should use Helm. It greatly simplifies the interaction of your DevOps machinery with the Kubernetes cluster and makes it work at the right level of abstraction. It is a clean, crisp, and far less error-prone mechanism for deploying and managing applications in a Kubernetes cluster.

References

[1] https://helm.sh/docs/intro/using_helm/
