Charting a Course in Kubernetes with Helm

DevOps is a rapidly evolving space. The way we deploy apps and services today looks very different from how it did 10 years ago. I’m a full-stack developer who often crosses over into deployment, and I’ve seen the ecosystem supporting production applications shift dramatically over a very short period of time. The dust is finally beginning to settle around Docker and containerized architectures, and a rich set of tools is growing up around this new approach.

While tools like docker-compose are great for bootstrapping groups of resources, they don’t do much to help you manage them once they are running. A new generation of container orchestration tools is springing up to make container-based architectures easier to manage and maintain. My favorite of these so far is Kubernetes.

Google created Kubernetes to facilitate a major shift in the way they think about their deployments. Typically we build architectures around the hosts that support our applications. With Kubernetes, Google wanted to make it possible to take a container-centric view, abstracting away the actual nodes as much as possible.

Container-centric

What does this mean? You focus your efforts on the container formation itself, rather than spending a lot of time configuring the virtual machines that host it. Kubernetes handles a lot of common operational tasks for you — auto-scaling, load balancing, securely storing and accessing secrets, mounting persistent storage, service discovery, rolling no-downtime updates, monitoring, logging, and more! It automatically balances workloads across the available VM nodes, and the Pod construct lets you co-locate containers that need to work closely together.

It’s a really powerful combination that goes a long way towards combining the ease of a hosted platform like Heroku with the flexibility of more traditional cloud providers. In my opinion, however, the real power is in the portability of the platform. You can host it just about anywhere — from the ubiquitous AWS to your own physical rack servers. This guards against lock-in, allowing you to move to a different hosting provider as your needs change.

A helmsman to steer your ship

There’s still a big piece missing in the Kubernetes experience, however — bootstrapping deployments on your cluster. Setting up a Kubernetes deployment manually can be time-consuming and error-prone. Thankfully, the folks at Deis created a great open-source bootstrapping tool called Helm, which lets you describe your container formation using YAML and Go templates. It’s a little like “Chef for Kubernetes”, letting you configure your formation in a dynamic and repeatable way.

The project is still early in development and is changing quickly. Deis maintains its original version of Helm, which it has now dubbed “Helm Classic”, while the Kubernetes team maintains a new, rapidly evolving version — but even now it’s a great way to bootstrap your backend services in a flexible, portable, and repeatable way. You create a values file with overridable variables, then describe each resource you want to create using templates that read those values. This lets you re-use and share your configurations, overriding the values to customize them for each project.

Charts to guide your way

Packages for Helm are called Charts. The official chart repository is just getting started, but as you’re building your own charts you can look at the Helm Classic charts repository for inspiration. I’ve started building up a chart repository for Ecliptic, and I’m hoping it will grow quickly and we can contribute charts to the official repository soon!

To get started writing a chart, create a new folder and add a Chart.yaml file to it. This is the package metadata for your chart.

name: node
version: 1.0.0
description: Chart for derivatives of the official Node images
keywords:
- node
- application
home: https://nodejs.org/en/
sources:
- https://github.com/kubernetes/charts
- https://github.com/docker-library/node
maintainers:
- name: Brandon Konkle
  email: brandon@ecliptic.io
engine: gotpl

Next you’ll want to create a default values.yaml. This holds the chart’s default configuration values, which you can override on the command line (or with another values file) to customize the chart for each project.

imageName: "node"
imageTag: "boron"
cpu: 100m
memory: 256Mi
port: 3000
replicas: 1
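
Any of these defaults can be overridden when you install the chart. As a quick sketch (the file name and override values here are hypothetical), you can point at a project-specific values file with -f, and recent Helm clients also accept individual overrides with --set:

$ helm install . -f myproject-values.yaml
$ helm install . --set imageTag=argon,replicas=3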

Now you’re ready to define some resources!

Enlisting some help

The process of configuring resources is rather straightforward, but it can be extremely helpful to adopt some consistent practices across your cluster so that it’s easy to see what’s happening on your dashboard at a glance. In the official chart repository, the team uses a couple of quick helpers to get consistent naming across resources. These are written in Go template syntax, so if you’re not familiar with it, it’s worth reading up.

{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 24 -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 24 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 24 -}}
{{- end -}}

These helpers go in the templates folder in your chart, conventionally in a file named _helpers.tpl, alongside the rest of your resource templates.
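
For reference, a chart following this layout might look like the sketch below. The template file names under templates/ are just my own choices; Helm renders everything in that folder as a manifest, except for files starting with an underscore (like _helpers.tpl), which are for helpers only.

node/
  Chart.yaml
  values.yaml
  templates/
    _helpers.tpl
    rc.yaml
    svc.yaml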

Constructing your vessel

Now you can define your resources. For my simple Node app, I’m using a Replication Controller, which keeps a set number of pods running my app’s container.

apiVersion: v1
kind: ReplicationController
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  replicas: {{ .Values.replicas }}
  selector:
    app: {{ template "fullname" . }}
  template:
    metadata:
      name: {{ template "fullname" . }}
      labels:
        app: {{ template "fullname" . }}
    spec:
      containers:
      - name: {{ template "fullname" . }}
        image: "{{ .Values.imageName }}:{{ .Values.imageTag }}"
        imagePullPolicy: Always
        resources:
          requests:
            cpu: "{{ .Values.cpu }}"
            memory: "{{ .Values.memory }}"
          limits:
            cpu: "{{ .Values.cpu }}"
            memory: "{{ .Values.memory }}"
        env:
          - name: PORT
            value: "{{ .Values.port }}"
        {{- range .Values.env }}
          - name: {{ .name }}
            value: {{ .value | quote }}
        {{- end }}
        ports:
        - name: node
          containerPort: {{ .Values.port }}
        readinessProbe:
          tcpSocket:
            port: {{ .Values.port }}
          initialDelaySeconds: 30
          timeoutSeconds: 1
        livenessProbe:
          tcpSocket:
            port: {{ .Values.port }}
          initialDelaySeconds: 30
          timeoutSeconds: 1
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      {{- if .Values.imagePullSecrets }}
      imagePullSecrets:
      {{- range .Values.imagePullSecrets }}
        - name: {{ .name }}
      {{- end }}
      {{- end -}}

Because the official Node container image is configured largely through environment variables, I’ve made it easy to customize env vars using the values.yaml file. I’ve also made it possible to add a reference to an imagePullSecret so that I can pull a private image, and the imageName and imageTag values let me point to my own derivative of the official Node image.
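
For reference, here is a sketch of what those optional sections might look like in values.yaml (the variable names and the secret name are just examples; the template above simply loops over whatever you provide):

env:
  - name: NODE_ENV
    value: production
imagePullSecrets:
  - name: my-registry-secret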

To route traffic to the pods, I use a Service of type LoadBalancer.

apiVersion: v1
kind: Service
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
spec:
  type: LoadBalancer
  ports:
  - name: node
    port: {{ .Values.port }}
    targetPort: node
  selector:
    app: {{ template "fullname" . }}

The targetPort points to the named port “node” on the pods that my Replication Controller manages, and the Service exposes that internal port externally on the “port” value from my values.yaml file.
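
With both templates in place, it can be helpful to preview exactly what Helm will render before creating anything on the cluster. A minimal sketch, assuming a recent Helm client (this still needs the Tiller setup described in the next section):

$ helm install . --dry-run --debug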

Launching your expedition

Now you’re ready to deploy a “release”, which is what Helm calls an active deployment. First, however, you need to set up Helm on your cluster. Helm installs a server-side component called “Tiller” to execute deployments for you. To install it, make sure kubectl is pointed at the desired context and run:

$ helm init

If all is well, it should report that Tiller was installed successfully. If not, check out the quickstart guide for more detail.
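
A quick way to confirm that both the client and the in-cluster Tiller are responding is to ask each for its version (this assumes a Helm 2-era client, which reports both):

$ helm version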

Now you can deploy! I use a simple naming convention for releases — the app name plus a tag indicating the function of the release. For example, myproject-app and myproject-db would be the release names for my app deployment and database deployment, respectively.

$ helm install --name myproject-app .

This connects to your Kubernetes cluster and begins spinning up your container formation. Awesome!
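
To keep an eye on the release, a few commands I find handy (the service name below follows the fullname helper, release name plus chart name):

$ helm ls
$ helm status myproject-app
$ kubectl get service myproject-app-node

The last command shows the external IP your cloud provider assigned to the LoadBalancer Service once it has been provisioned.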

If you need to deploy an update to your app, you simply push an updated Docker image and then execute a rolling update. This tells the replication controller to perform a zero-downtime deploy, replacing your containers one by one and not killing the old ones until the new ones are ready.

$ kubectl rolling-update myproject-app-node --image=<your-image>:<new-tag>

The resource name has “-node” appended because the “fullname” helper we defined above appends the chart name to the release name.

Happy sailing!

Helm is still young and a bit rough around the edges at the moment. Even so, it can be an excellent companion if you’re trying to achieve reusable, composable builds for your Kubernetes-based projects.

Let me know what your experience with Helm is, or if you have a favorite alternative!