Using Helm Charts Without Tiller

Deploying the Confluent platform Helm chart without Tiller

Jeroen Rosenberg

--

Say, you want to deploy Kafka to your company’s Kubernetes cluster. You search online for how other people are doing this and quickly stumble upon some existing Helm charts that do EXACTLY what you need. Awesome, this is going to save you a lot of time and error-prone manual hassle with YAML configuration. You just need to install and initialise Helm on the cluster and…

… you are stopped by your company’s security officer, because he has serious security concerns about Helm’s server-side component, Tiller. Now what?

In this post I’ll show you how you can still use practically any available Helm chart AND please your security guy by avoiding Tiller. As an example I’ll show how to deploy the Confluent platform using its official Helm chart on GKE. Before we do that, let’s first look at the security concerns one might have with regard to Tiller.

The problem with Tiller

There’s a lot being written on the potential security threats Tiller might pose (e.g. https://engineering.bitnami.com/articles/helm-security.html). Tiller is quite an intrusive component. It’s well known within the Helm community that the default Tiller setup in particular is simply not secure, and fixing this is far from trivial:

  1. There’s no access control out of the box. helm init installs Tiller into the cluster in the kube-system namespace without any RBAC rules applied. Tiller currently does not provide a way to map user credentials to specific permissions within Kubernetes, and it runs with the namespace’s default service account if no other service account is supplied. As a result, all Tiller operations on that server are executed using the Tiller pod’s credentials and permissions.
  2. Tiller is especially problematic in multi-user environments. It’s difficult at best to grant individual user privileges: RBAC policies are not per user, but per Tiller pod, so any constrained user that has access to Tiller has access to everything Tiller has access to. This likely means you need a dedicated Tiller installation per project/team/role, which adds significant complexity.
  3. In the default installation, the gRPC endpoint that Tiller offers is available inside the cluster without any authentication applied. That means any process in the cluster can use the gRPC endpoint to perform operations, so any pod inside your cluster can ask Tiller to install a chart that creates new ClusterRoles granting that pod arbitrary privileges.
  4. For historical reasons, Tiller defaults to storing its release information in plain text inside ConfigMaps. There’s a beta feature which allows you to override this and use Kubernetes Secrets instead (see the hardening sketch after this list).
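
For completeness: if you do end up running Tiller, points 1 and 4 can be mitigated to some extent. Here’s a minimal sketch based on the approach in Helm 2’s security documentation; it assumes a Helm 2 client, and a narrower Role would be preferable to cluster-admin in any real setup:

# Create a dedicated service account for Tiller instead of relying on the namespace default
$ kubectl -n kube-system create serviceaccount tiller

# Bind it explicitly (cluster-admin here for brevity only; scope this down in practice)
$ kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

# Install Tiller with that service account, storing releases in Secrets instead of ConfigMaps
$ helm init --service-account tiller --override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}'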

Articles such as the one linked at the top of this section explain all of this in much more detail, but I hope this gives at least an impression of why your security guy might not be overly excited to install Helm.

Helm beyond Tiller

Since the Helm community is fully aware of the concerns around Tiller, it’s been working on alternative solutions for quite a while. According to the Helm roadmap, Helm 3 is supposed to get rid of Tiller completely. There have also been attempts to run Helm 2 without Tiller, for instance by using the Tillerless Helm plugin. One could also consider just using Helm as a templating engine and NOT using it for lifecycle management of your Kubernetes rollouts. The latter is the approach I’m going to explain in the remainder of this post.
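
As an aside, the Tillerless approach roughly looks like this: the plugin runs Tiller locally on your machine instead of inside the cluster. A sketch based on the plugin’s README (double-check the subcommands against the current version):

$ helm plugin install https://github.com/rimusz/helm-tiller

# Start a local Tiller; release state is still stored in the cluster, as Secrets
$ helm tiller start my-namespace

# ... run your usual helm install/upgrade commands against the local Tiller ...

$ helm tiller stop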

Helm as a Templating Engine

As mentioned, Tiller is the server-side component of Helm and is responsible for release management of Helm packages. The templating engine, however, lives in the client. So we can use that without Tiller!

1. Install the Helm client. If you’re using Homebrew:

$ brew install kubernetes-helm

2. Initialise Helm in client-only mode

$ helm init --client-only

3. Add the Confluent Helm repository to your local repositories

$ helm repo add confluentinc https://confluentinc.github.io/cp-helm-charts/
$ helm repo update

4. Fetch and untar the Confluent Helm chart

$ helm fetch confluentinc/cp-helm-charts --untar
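
Since --untar leaves the chart sources on disk, you can inspect what you’re about to render before applying anything, for example:

$ ls cp-helm-charts                  # standard chart layout: Chart.yaml, values.yaml, templates/, charts/ with the subcharts
$ less cp-helm-charts/values.yaml    # the defaults we’ll override later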

5. Assuming you are connected to a Kubernetes cluster, create a namespace for your deployment

$ kubectl create ns confluent

6. Render the desired templates locally and pipe the result to kubectl apply

$ helm template cp-helm-charts --name my-confluent-oss | kubectl -n confluent apply -f -

Using the -x flag you can choose which specific templates to render, e.g.

$ helm template cp-helm-charts -x charts/cp-kafka/templates/statefulset.yaml -x charts/cp-kafka/templates/headless-service.yaml

You can use the --set flag to override the configuration in values.yaml. For instance, to disable ksql, kafka-connect and kafka-rest-proxy:

$ helm template cp-helm-charts --name my-confluent-oss --set cp-kafka-rest.enabled=false,cp-kafka-connect.enabled=false,cp-ksql-server.enabled=false | kubectl -n confluent apply -f -
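
If you prefer not to put everything on the command line, the same overrides can go into a values file passed with -f (the filename custom-values.yaml below is arbitrary):

# Write the overrides to a file (same keys as the --set example above)
$ cat > custom-values.yaml <<EOF
cp-kafka-rest:
  enabled: false
cp-kafka-connect:
  enabled: false
cp-ksql-server:
  enabled: false
EOF

$ helm template cp-helm-charts --name my-confluent-oss -f custom-values.yaml | kubectl -n confluent apply -f -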

If you followed all the steps you should now have a fully functional Confluent platform deployed, including Kafka and ZooKeeper. And you didn’t have to write a single line of YAML!
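
To verify, you can watch everything come up in the namespace we created (the exact pod names depend on the release name you chose):

$ kubectl -n confluent get pods
$ kubectl -n confluent get services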

Conclusion

Fantastic, we’re able to deploy the Confluent platform Helm chart, and practically any other Helm chart, using the Helm and kubectl clients without introducing Tiller and its security concerns. Of course we are not yet able to enjoy all of Helm’s features without Tiller, but at least we avoid having to write templates manually if there’s already a chart available.
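
One concrete consequence of skipping release management: there is no helm delete, because nothing is tracking a release. A simple way to tear everything down again is to render the same templates and pipe them to kubectl delete, or just drop the namespace:

$ helm template cp-helm-charts --name my-confluent-oss | kubectl -n confluent delete -f -

# or, more bluntly
$ kubectl delete ns confluent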

I recently learned about the existence of another interesting tool in this area, called Kustomize. There are articles on how to mix Kustomize into the approach described above, which might have some additional benefits. I’ll definitely research this, and that might lead to another post. Stay tuned!
