(Dec. 19 2019) – Deploying an application on Kubernetes can require a number of related deployment artifacts or spec files: Deployment, Service, PVCs, ConfigMaps, Service Account — to name just a few. Managing all of these resources and relating them to deployed apps can be challenging, especially when it comes to tracking changes and updates to the deployed application (actual state) and its original source (authorized or desired state). Versions of the application are locked up in the Kubernetes platform, leaving them completely decoupled from the versions of the specs themselves (which are typically tracked in external source code management repos).
Additionally, static specs aren’t typically reusable outside a given domain, environment or cloud provider, yet they still require a significant time investment to author and debug. Tooling can provide string replacement based on matching expressions, but that kind of automation also has to be authored or customized for the tasks we need and can be prone to errors.
Helm solves these problems by packaging related Kubernetes specs into one simple deployment artifact (called a chart) that can be parameterized for maximum flexibility. In addition, Helm enables users to customize app packages at runtime in much the same way that the helm of a ship enables a pilot to steer (hence the name). If you are familiar with OS package managers such as apt or yum and packages such as deb or rpm, then the concepts of Helm and Helm Charts should feel familiar.
Helm v2 had two parts: The Helm client (helm) and the Helm server (Tiller). Even though Tiller played an important role in managing and tracking Helm chart releases, its interaction with Kubernetes RBAC was difficult to manage. Helm v3 has removed the Tiller server, radically simplifying Helm’s security model, while still maintaining the ability to track chart releases.
This blog is a tutorial that takes you from basic Helm concepts, through an example deployment of a chart, to modifying charts to fit your needs; in the example, we will add a network policy to the chart.
Prerequisites
Helm uses Kubernetes; you will need a Kubernetes cluster running somewhere, a local Docker client, and a kubectl client already configured to talk to your Kubernetes cluster. Helm will use your kubectl context to deploy Kubernetes resources on the configured cluster. The cluster should be using an SDN that understands Kubernetes network policies, like Calico, which you can install by following the Installing Calico on Kubernetes guide.
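Before installing Helm, it’s worth a quick sanity check that kubectl and Docker are wired up as described above; something like the following (output omitted) should succeed against your cluster and local Docker daemon:
$ kubectl config current-context
$ kubectl get nodes -o wide
$ docker version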
Helm’s default installation is insecure, so if you’re trying Helm for the first time, it’s best to do so on a cluster where you won’t adversely affect your friends and colleagues. A blog about how to secure Helm this is not.
Installing Helm
The helm client can be installed from source or pre-built binary releases, via Snap on Linux, Homebrew on macOS or Chocolatey on Windows. But the Helm GitHub repo also holds an installer shell script that will automatically grab the latest version of the helm client and install it locally. The examples here use an Ubuntu 16.04 instance where Kubernetes was installed locally using kubeadm.
$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 7236 100 7236 0 0 29435 0 --:--:-- --:--:-- --:--:-- 29534
$
Make the script executable and run it to download and install the latest version of helm:
$ chmod 700 get_helm.sh
$ ./get_helm.sh
Downloading https://get.helm.sh/helm-v3.0.1-linux-amd64.tar.gz
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
$
We can use the version command to make sure the client is working:
$ helm version -c
version.BuildInfo{Version:"v3.0.1", GitCommit:"7c22ef9ce89e0ebeb7125ba2ebf7d421f3e82ffa", GitTreeState:"clean", GoVersion:"go1.13.4"}
$
We are ready to start using Helm!
Explore Charts
If you recall, a chart is a collection of spec files that define a set of Kubernetes resources (like Services, Deployments, etc.). Charts typically include all of the resources that you would need to deploy an application as templates. The chart resource templates enable a user to customize the way the rendered resources are deployed at install time by providing values for some (or all) of the variables defined in the templates. Charts also include default values for all of the defined variables, making it easy to deploy the chart with little (or no) customization required.
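As a rough sketch (the file names here follow common chart conventions rather than any particular chart; we’ll unpack a real one later), a chart directory looks something like this:
mychart/
  Chart.yaml        # chart metadata: name, version, appVersion, description
  values.yaml       # default values injected into the templates
  templates/        # parameterized Kubernetes resource templates
    deployment.yaml
    service.yaml
    _helpers.tpl    # shared named templates used by the other files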
Helm v2 was preconfigured to talk to the official Kubernetes charts repository on GitHub. This repository (named “stable” below) contains a number of carefully curated and maintained charts for common software like elasticsearch, influxdb, mariadb, nginx, prometheus, redis, and many others. In v3 we have to add it first:
$ helm repo add stable https://kubernetes-charts.storage.googleapis.com/
"stable" has been added to your repositories
$
List your helm repos to show what has been configured:
$ helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
$
Other repos can be added at any time with the helm repo add command. To get us started we’ll use the stable repo.
As with other package managers, we want to get the latest list of, and updates to, charts from our configured repos using the update command:
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
$
The helm search repo command will show us all of the available charts in the official repository (since it is the only repo configured and updated):
$ helm search repo stable
NAME CHART VERSION APP VERSION DESCRIPTION
stable/acs-engine-autoscaler 2.2.2 2.1.1 DEPRECATED Scales worker nodes within agent pools
stable/aerospike 0.3.1 v4.5.0.5 A Helm chart for Aerospike in Kubernetes
stable/airflow 5.2.4 1.10.4 Airflow is a platform to programmatically autho...
stable/ambassador 5.1.0 0.85.0 A Helm chart for Datawire Ambassador
stable/anchore-engine 1.3.7 0.5.2 Anchore container analysis and policy evaluatio...
stable/apm-server 2.1.5 7.0.0 The server receives data from the Elastic APM a...
stable/ark 4.2.2 0.10.2 DEPRECATED A Helm chart for ark
stable/artifactory 7.3.1 6.1.0 DEPRECATED Universal Repository Manager support...
stable/artifactory-ha 0.4.1 6.2.0 DEPRECATED Universal Repository Manager support...
stable/atlantis 3.9.0 v0.8.2 A Helm chart for Atlantis https://www.runatlant...
stable/auditbeat 1.1.0 6.7.0 A lightweight shipper to audit the activities o...
stable/aws-cluster-autoscaler 0.3.3 Scales worker nodes within autoscaling groups.
stable/aws-iam-authenticator 0.1.2 1.0 A Helm chart for aws-iam-authenticator
stable/bitcoind 0.2.2 0.17.1 Bitcoin is an innovative payment network and a ...
stable/bookstack 1.1.2 0.27.4-1 BookStack is a simple, self-hosted, easy-to-use...
stable/buildkite 0.2.4 3 DEPRECATED Agent for Buildkite
stable/burrow 1.5.2 0.29.0 Burrow is a permissionable smart contract machine
stable/centrifugo 3.1.1 2.1.0 Centrifugo is a real-time messaging server.
stable/cerebro 1.3.1 0.8.5 A Helm chart for Cerebro - a web admin tool tha...
stable/cert-manager v0.6.7 v0.6.2 A Helm chart for cert-manager
stable/chaoskube 3.1.3 0.14.0 Chaoskube periodically kills random pods in you...
stable/chartmuseum 2.5.0 0.10.0 Host your own Helm Chart Repository
stable/chronograf 1.1.0 1.7.12 Open-source web application written in Go and R...
stable/clamav 1.0.4 1.6 An Open-Source antivirus engine for detecting t...
stable/cloudserver 1.0.4 8.1.5 An open-source Node.js implementation of the Am...
stable/cluster-autoscaler 6.2.0 1.14.6 Scales worker nodes within autoscaling groups.
stable/cluster-overprovisioner 0.2.6 1.0 Installs the a deployment that overprovisions t...
stable/cockroachdb 3.0.1 19.2.1 CockroachDB is a scalable, survivable, strongly...
stable/collabora-code 1.0.5 4.0.3.1 A Helm chart for Collabora Office - CODE-Edition
...
Note the stable/ prefix on all of the available charts. In the helm/charts project, the stable folder contains all of the charts that have gone through a rigorous promotion process and meet certain technical requirements. Incubator charts are also available but are still being improved until they meet these criteria. You can add the incubator repository (like any other repo) using the helm repo add command and pointing it at the correct URL, just as we did with the stable repo.
Also note the CHART VERSION and APP VERSION columns; the former is the version of the Helm chart and must follow SemVer 2 format per the rules of the Helm project. The latter is the version of the actual software; its format is free-form as far as Helm is concerned but tied to the software’s own release rules.
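If you want to see every chart version published for a given chart rather than just the latest, helm search repo accepts a --versions flag; for example (output omitted):
$ helm search repo stable/chartmuseum --versions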
With no filter, helm search repo shows you all of the available charts. You can narrow down your results by searching with a filter:
$ helm search repo ingress
NAME CHART VERSION APP VERSION DESCRIPTION
stable/gce-ingress 1.2.0 1.4.0 A GCE Ingress Controller
stable/ingressmonitorcontroller 1.0.48 1.0.47 IngressMonitorController chart that runs on kub...
stable/nginx-ingress 1.26.2 0.26.1 An nginx Ingress controller that uses ConfigMap...
stable/contour 0.2.0 v0.15.0 Contour Ingress controller for Kubernetes
stable/external-dns 1.8.0 0.5.14 Configure external DNS servers (AWS Route53, Go...
stable/kong 0.27.2 1.3 The Cloud-Native Ingress and Service Mesh for A...
stable/lamp 1.1.2 7 Modular and transparent LAMP stack chart suppor...
stable/nginx-lego 0.3.1 Chart for nginx-ingress-controller and kube-lego
stable/traefik 1.82.3 1.7.19 A Traefik based Kubernetes ingress controller w...
stable/voyager 3.2.4 6.0.0 DEPRECATED Voyager by AppsCode - Secure Ingress...
$
Why is traefik in the list? Because its package description relates it to ingress. We can use helm inspect chart to see how:
$ helm inspect chart stable/traefik
apiVersion: v1
appVersion: 1.7.19
description: A Traefik based Kubernetes ingress controller with Let's Encrypt support
home: https://traefik.io/
icon: https://docs.traefik.io/img/traefik.logo.png
keywords:
- traefik
- ingress
- acme
- letsencrypt
maintainers:
- email: [email protected]
  name: krancour
- email: [email protected]
  name: emilevauge
- email: [email protected]
  name: dtomcej
- email: [email protected]
  name: ldez
name: traefik
sources:
- https://github.com/containous/traefik
- https://github.com/helm/charts/tree/master/stable/traefik
version: 1.82.3
$
The keywords section of the traefik chart includes the keyword “ingress” so it shows up in our search.
Spend a few moments performing some additional keyword searches – see what you come up with!
Deploy a Chart (a.k.a. Installing a Package)
We’ll explore the anatomy of a chart later, but to illustrate how easy it is to deploy a chart we can use one from the stable repo. To install a chart, use the helm install command, which in Helm v3 takes two arguments: a release name and the name of the chart. Let’s start by doing just that, using the containerized Docker registry available from the official helm repo; you can check it out here.
Deploy the registry chart:
$ helm install myreg stable/docker-registry
NAME: myreg
LAST DEPLOYED: Tue Dec 10 06:44:01 2019
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app=docker-registry,release=myreg" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl -n default port-forward $POD_NAME 8080:5000
$
What just happened?
Helm renders the Kubernetes resource templates by injecting the default values for all of the variables, then deploys the resources on our Kubernetes cluster by submitting them to the Kubernetes API as static spec files. The act of installing a chart creates a new Helm release object; the release above is named “myreg”.
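As an aside, Helm v3’s default storage driver records each release revision as a Secret in the release namespace (named along the lines of sh.helm.release.v1.myreg.v1), so you can peek at where the release is tracked with something like:
$ kubectl get secret --namespace default -l owner=helm,name=myreg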
To view releases, use the list command (or simply ls):
$ helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
myreg default 1 2019-12-10 20:20:41.001947811 +0000 UTC deployed docker-registry-1.8.3 2.7.1
$
A Helm Release is a set of deployed resources based on a chart; each time a chart is installed, it deploys a whole set of Kubernetes resources with its own release name. The unique naming helps us keep track of how the Kubernetes resources are related and lets us deploy the chart any number of times with different customizations.
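To see that in action, nothing stops us from installing the same chart again under a second, made-up release name and then removing it (we won’t keep this copy around):
$ helm install myreg2 stable/docker-registry
$ helm ls
$ helm uninstall myreg2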
During installation, Helm prints useful information about when and where resources were created; in this case it also includes notes about how to reach the app. To see this information again, use helm status with the release name. Let’s use our new registry server; the NOTES section of the install output has some clues to using it, so let’s try it out.
$ export POD_NAME=$(kubectl get pods --namespace default \
-l "app=docker-registry,release=myreg" \
-o jsonpath="{.items[0].metadata.name}") && echo $POD_NAME
myreg-docker-registry-8d8fc8f5c-5zdk9
$ kubectl -n default port-forward $POD_NAME 8080:5000
Forwarding from 127.0.0.1:8080 -> 5000
Forwarding from [::1]:8080 -> 5000
At this point your terminal should be hijacked for the port-forward. Start a new terminal and use Docker to interact with the registry. From the Docker client on the Kubernetes host, pull a lightweight image like alpine:
$ docker image pull alpine
Using default tag: latest
latest: Pulling from library/alpine
6c40cc604d8e: Pull complete
Digest: sha256:b3dbf31b77fd99d9c08f780ce6f5282aba076d70a513a8be859d8d3a4d0c92b8
Status: Downloaded newer image for alpine:latest
$
Now re-tag it, prepending the image repo name with the IP:Port of our port-forwarded registry and try pushing it:
$ docker image tag alpine 127.0.0.1:8080/myalpine
$ docker image push 127.0.0.1:8080/myalpine
The push refers to repository [127.0.0.1:8080/myalpine]
503e53e365f3: Pushed
latest: digest: sha256:25b4d910f4b76a63a3b45d0f69a57c34157500faf6087236581eca221c62d214 size: 528
$
Verify that the registry has our image by querying the registry API:
$ curl -X GET http://127.0.0.1:8080/v2/_catalog
{"repositories":["myalpine"]}
$
Success!
That was easy, but we’re only using the default configuration options for this chart. You will likely want to customize a chart prior to deployment. To see what options are configurable for a given chart, use helm inspect values.
Kill your port-forward with Ctrl+C (^C) and then inspect the chart values:
$ helm inspect values stable/docker-registry
# Default values for docker-registry.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
updateStrategy:
  # type: RollingUpdate
  # rollingUpdate:
  #   maxSurge: 1
  #   maxUnavailable: 0
podAnnotations: {}
podLabels: {}
image:
  repository: registry
  tag: 2.7.1
  pullPolicy: IfNotPresent
# imagePullSecrets:
#   - name: docker
service:
  name: registry
  type: ClusterIP
  # clusterIP:
  port: 5000
  # nodePort:
  annotations: {}
  # foo.io/bar: "true"
ingress:
  enabled: false
  path: /
  # Used to create an Ingress record.
  hosts:
    - chart-example.local
  annotations:
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  labels: {}
  tls:
    # Secrets must be manually created in the namespace.
    # - secretName: chart-example-tls
    #   hosts:
    #     - chart-example.local
resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
persistence:
  accessMode: 'ReadWriteOnce'
  enabled: false
  size: 10Gi
  # storageClass: '-'
# set the type of filesystem to use: filesystem, s3
storage: filesystem
# Set this to name of secret for tls certs
# tlsSecretName: registry.docker.example.com
secrets:
  haSharedSecret: ""
  htpasswd: ""
  # Secrets for Azure
  # azure:
  #   accountName: ""
  #   accountKey: ""
  #   container: ""
  # Secrets for S3 access and secret keys
  # s3:
  #   accessKey: ""
  #   secretKey: ""
  # Secrets for Swift username and password
  # swift:
  #   username: ""
  #   password: ""
# Options for s3 storage type:
# s3:
#   region: us-east-1
#   regionEndpoint: s3.us-east-1.amazonaws.com
#   bucket: my-bucket
#   encrypt: false
#   secure: true
# Options for swift storage type:
# swift:
#   authurl: http://swift.example.com/
#   container: my-container
configData:
  version: 0.1
  log:
    fields:
      service: registry
  storage:
    cache:
      blobdescriptor: inmemory
  http:
    addr: :5000
    headers:
      X-Content-Type-Options: [nosniff]
  health:
    storagedriver:
      enabled: true
      interval: 10s
      threshold: 3
securityContext:
  enabled: true
  runAsUser: 1000
  fsGroup: 1000
priorityClassName: ""
podDisruptionBudget: {}
  # maxUnavailable: 1
  # minAvailable: 2
nodeSelector: {}
tolerations: []
$
There are a number of configuration changes we can make. Most notably, we can deploy an Ingress record for the registry (useful if we have an ingress controller deployed):
ingress:
  enabled: false
  path: /
  # Used to create an Ingress record.
  hosts:
    - chart-example.local
  annotations:
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  labels: {}
  tls:
    # Secrets must be manually created in the namespace.
    # - secretName: chart-example-tls
    #   hosts:
    #     - chart-example.local
We can also configure a number of different storage backends, like an S3 bucket and a related Secret for storing AWS access keys:
persistence:
  accessMode: 'ReadWriteOnce'
  enabled: false
  size: 10Gi
  # storageClass: '-'
# set the type of filesystem to use: filesystem, s3
storage: filesystem
# Set this to name of secret for tls certs
# tlsSecretName: registry.docker.example.com
secrets:
  haSharedSecret: ""
  htpasswd: ""
  # Secrets for Azure
  # azure:
  #   accountName: ""
  #   accountKey: ""
  #   container: ""
  # Secrets for S3 access and secret keys
  # s3:
  #   accessKey: ""
  #   secretKey: ""
  # Secrets for Swift username and password
  # swift:
  #   username: ""
  #   password: ""
# Options for s3 storage type:
# s3:
#   region: us-east-1
#   regionEndpoint: s3.us-east-1.amazonaws.com
#   bucket: my-bucket
#   encrypt: false
#   secure: true
# Options for swift storage type:
# swift:
#   authurl: http://swift.example.com/
#   container: my-container
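As a sketch of what such a customization might look like (the hostname, bucket and credentials below are placeholders, not working values), you could capture your overrides in a file and pass it at install time with -f:
$ cat my-values.yaml
ingress:
  enabled: true
  hosts:
    - registry.example.internal
storage: s3
s3:
  region: us-east-1
  bucket: my-registry-bucket
secrets:
  s3:
    accessKey: "PLACEHOLDER"
    secretKey: "PLACEHOLDER"
$ helm install myreg stable/docker-registry -f my-values.yaml
Any value we don’t override keeps its default.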
But where are these values coming from?
(Partial) Anatomy of a Chart
A chart is a collection of files inside a directory named for the chart. Thus far we have only deployed a chart from a remote repo, but if you followed the link to the docker-registry chart on GitHub, you saw these files. When a chart is installed, Helm downloads the contents of the directory as an archive and caches it locally in the helm client’s workspace directory.
Helm uses the XDG base directory structure for storing its files; the default locations are:
| Operating System | Cache Path | Configuration Path | Data Path |
|---|---|---|---|
| Linux | $HOME/.cache/helm | $HOME/.config/helm | $HOME/.local/share/helm |
| macOS | $HOME/Library/Caches/helm | $HOME/Library/Preferences/helm | $HOME/Library/helm |
| Windows | %TEMP%\helm | %APPDATA%\helm | %APPDATA%\helm |
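If you want to confirm the paths in use on your system, Helm v3 can print its environment, which includes the repository cache location (the exact set of variables shown varies by Helm version):
$ helm env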
The cache directory contains local clones of remote chart repositories in archive format:
$ ls -l .cache/helm/repository/
total 7136
-rw-r--r-- 1 ubuntu ubuntu 6492 Dec 10 17:27 docker-registry-1.8.3.tgz
-rw-r--r-- 1 ubuntu ubuntu 7269219 Dec 10 05:21 stable-index.yaml
$
If we want to explore a chart we can expand the archive ourselves, or better yet, use a helm command to do it for us!
Using the pull command with the --untar argument results in an unpacked chart on our local system:
$ helm pull stable/docker-registry --untar
$ ls -l docker-registry/
total 24
-rw-r--r-- 1 ubuntu ubuntu 391 Dec 10 19:57 Chart.yaml
-rw-r--r-- 1 ubuntu ubuntu 62 Dec 10 19:57 OWNERS
-rw-r--r-- 1 ubuntu ubuntu 7682 Dec 10 19:57 README.md
drwxr-xr-x 2 ubuntu ubuntu 4096 Dec 10 19:57 templates
-rw-r--r-- 1 ubuntu ubuntu 2676 Dec 10 19:57 values.yaml
$
The Helm docs are pretty good at explaining what most of these are; for now we are going to concentrate on values.yaml. Previously we inspected the values with the helm inspect values command; looking at values.yaml we can see exactly what we were shown:
$ cat docker-registry/values.yaml
# Default values for docker-registry.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
updateStrategy:
  # type: RollingUpdate
  # rollingUpdate:
  #   maxSurge: 1
  #   maxUnavailable: 0
podAnnotations: {}
podLabels: {}
image:
  repository: registry
  tag: 2.7.1
  pullPolicy: IfNotPresent
# imagePullSecrets:
#   - name: docker
service:
  name: registry
  type: ClusterIP
  # clusterIP:
  port: 5000
  # nodePort:
  annotations: {}
  # foo.io/bar: "true"
ingress:
  enabled: false
  path: /
  # Used to create an Ingress record.
  hosts:
    - chart-example.local
  annotations:
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  labels: {}
  tls:
    # Secrets must be manually created in the namespace.
    # - secretName: chart-example-tls
    #   hosts:
    #     - chart-example.local
resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
persistence:
  accessMode: 'ReadWriteOnce'
  enabled: false
  size: 10Gi
  # storageClass: '-'
# set the type of filesystem to use: filesystem, s3
storage: filesystem
# Set this to name of secret for tls certs
# tlsSecretName: registry.docker.example.com
secrets:
  haSharedSecret: ""
  htpasswd: ""
  # Secrets for Azure
  # azure:
  #   accountName: ""
  #   accountKey: ""
  #   container: ""
  # Secrets for S3 access and secret keys
  # s3:
  #   accessKey: ""
  #   secretKey: ""
  # Secrets for Swift username and password
  # swift:
  #   username: ""
  #   password: ""
# Options for s3 storage type:
# s3:
#   region: us-east-1
#   regionEndpoint: s3.us-east-1.amazonaws.com
#   bucket: my-bucket
#   encrypt: false
#   secure: true
# Options for swift storage type:
# swift:
#   authurl: http://swift.example.com/
#   container: my-container
configData:
  version: 0.1
  log:
    fields:
      service: registry
  storage:
    cache:
      blobdescriptor: inmemory
  http:
    addr: :5000
    headers:
      X-Content-Type-Options: [nosniff]
  health:
    storagedriver:
      enabled: true
      interval: 10s
      threshold: 3
securityContext:
  enabled: true
  runAsUser: 1000
  fsGroup: 1000
priorityClassName: ""
podDisruptionBudget: {}
  # maxUnavailable: 1
  # minAvailable: 2
nodeSelector: {}
tolerations: []
$
The values file is where the author(s) of a chart set the default values for all chart variables, so all you have to do is run helm install and the chart should work. Some charts have prerequisites, but those are typically documented so you know ahead of time. For example, the WordPress chart declares these prerequisites:
Prerequisites
- Kubernetes 1.12+
- Helm 2.11+ or Helm 3.0-beta3+
- PV provisioner support in the underlying infrastructure
- ReadWriteMany volumes for deployment scaling
Now that we know what can be changed, let’s change something!
Update a Release
When you want to change the configuration of a release, you can use the helm upgrade command. Helm v2 used only a two-way strategic merge patch: it compared the proposed changes with the chart’s most recent manifest, not the actual state of the deployed objects. That meant that if a deployed Kubernetes object was edited directly, Helm ignored those changes. Helm v3 uses a three-way strategic merge patch: it considers the old manifest, the live state, and the new manifest when generating a patch. Either way, Helm only updates the things that have changed.
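If you’re curious what manifest Helm has recorded for a release (one of the inputs to that merge), helm get manifest will print it; output omitted here for brevity:
$ helm get manifest myreg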
Our original docker-registry Service is of type ClusterIP, which is why we needed the port-forward:
$ helm inspect values stable/docker-registry |grep -B2 -A4 ClusterIP
service:
  name: registry
  type: ClusterIP
  # clusterIP:
  port: 5000
  # nodePort:
  annotations: {}
$
To confirm it was deployed that way, list the Kubernetes Service:
$ kubectl get service myreg-docker-registry
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myreg-docker-registry ClusterIP 10.98.181.158 <none> 5000/TCP 16m
$
Let’s update the Service to use a NodePort so that we can expose the registry to the outside world.
There are two ways to pass configuration data during an update or upon initial install (an example of the file-based approach follows this list):

- --values (or -f): specify a YAML file with overrides. It can be set multiple times, with priority given to the last (right-most) file specified.
- --set: specify overrides on the command line.
  - Basic: --set name=value is equivalent to name: value
  - Key/value pairs are comma separated.
  - More complex values are also supported. --set servers.port=80 becomes:

    servers:
      port: 80

    and --set servers[0].port=80,servers[0].host=example becomes:

    servers:
      - port: 80
        host: example

Though, at this point, the arguments get complex enough that it is easier to just use YAML (which can also be more easily version controlled).
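As a minimal sketch of the file-based approach (overrides.yaml is just a name we made up; below we’ll actually use --set), the same change we’re about to make could be expressed as:
$ cat overrides.yaml
service:
  type: NodePort
$ helm upgrade -f overrides.yaml myreg stable/docker-registry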
We know that type is a child of service, so we can set its value with --set service.type=NodePort:
$ helm upgrade --set service.type=NodePort myreg stable/docker-registry
Release "myreg" has been upgraded. Happy Helming!
NAME: myreg
LAST DEPLOYED: Tue Dec 10 07:10:24 2019
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services myreg-docker-registry)
export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
$
Based on the output above, Kubernetes has updated the configuration of our Service. The NOTES section has even changed, indicating that we can now access our docker-registry service via http://NODE_IP:NODE_PORT.
We can use helm get values to see whether that new setting took effect (according to what Helm knows):
$ helm get values myreg
service:
  type: NodePort
$
There is not as much information presented here; Helm only concerns itself with the changes we made to the YAML key/value pairs.
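If you want the full set of computed values instead (chart defaults merged with our overrides), helm get values supports an --all flag:
$ helm get values myreg --all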
Retrieve the node port value for the registry Service and store it in an environment variable:
$ export NODE_PORT=$(kubectl get --namespace default \
-o jsonpath="{.spec.ports[0].nodePort}" services myreg-docker-registry) && echo $NODE_PORT
30441
$
From an external client (like a native shell on your laptop), query the docker-registry for its stored images using your VM’s public IP:
external-client:~$ curl -X GET http://192.168.225.251:31836/v2/_catalog
{"repositories":["myalpine"]}
external-client:~$
Success! Let’s test it by pushing an image.
By default, Docker will only trust a secure remote registry or an insecure registry found on the localhost. Since Kubernetes runs our registry in a container, the registry is considered “remote” even when the Docker daemon and the docker-registry are running on the same host. Our port-forward used localhost, so Docker allowed us to push, but it won’t let us this time around. Try it:
$ docker image tag alpine $NODE_IP:$NODE_PORT/extalpine
$ docker image push $NODE_IP:$NODE_PORT/extalpine
The push refers to repository [192.168.225.251:31836/extalpine]
Get https://192.168.225.251:31836/v2/: http: server gave HTTP response to HTTPS client
$
There are two ways to address this situation: one is to configure the registry server to support TLS. Instead of securing the docker-registry, though, we will tell Docker to trust our non-secure registry (only do this in non-production environments). This lets us use the registry without SSL certificates.
Doing the next step on the Kubernetes host has a high chance of breaking a kubeadm-deployed cluster, because it requires restarting Docker while all of the Kubernetes services are running in containers. So use a Docker installation that is external to your Kubernetes host; after all, that is why we exposed the registry as a NodePort Service!
Configure that Docker daemon by creating a config file under /etc/docker that looks like this (replace the example IP with the IP of your node, which you stored in NODE_IP earlier):
$ sudo cat /etc/docker/daemon.json
{
  "insecure-registries": [
    "192.168.225.251/32"
  ]
}
$
To put those changes into effect, you’ll need to restart Docker:
$ sudo systemctl restart docker
Now your Docker daemon should trust our registry; try the push again:
$ docker image push 192.168.225.251:31836/extalpine
The push refers to repository [192.168.225.251:31836/extalpine]
503e53e365f3: Pushed
latest: digest: sha256:25b4d910f4b76a63a3b45d0f69a57c34157500faf6087236581eca221c62d214 size: 528
$
See if it worked:
$ curl -X GET http://$NODE_IP:$NODE_PORT/v2/_catalog
{"repositories":["extalpine","myalpine"]}
$
Now you’ve got a registry that your team can share! But everyone else can use it too; let’s put a limit on that.
Network Policies
A network policy is a specification of how groups of pods are allowed to communicate with each other and other network endpoints. NetworkPolicy resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods. Network policies are implemented by a CNI network plugin, so you must use a CNI networking solution which supports NetworkPolicy (like Calico).
By default, pods are non-isolated; they accept traffic from any source. Pods become isolated by having a NetworkPolicy that selects them. Adding a NetworkPolicy to a namespace selecting a particular pod causes that pod to become “isolated”, rejecting any connections that are not explicitly allowed by a NetworkPolicy. Other pods in the namespace that are not selected by any NetworkPolicy will continue to accept all traffic.
Create a Blocking Network Policy
For our first network policy we’ll create a blanket policy that denies all inbound connections to pods in the default namespace. Create one that resembles the following policy:
$ cat networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
    - Ingress
Submit it to the Kubernetes API:
$ kubectl apply -f networkpolicy.yaml
networkpolicy.networking.k8s.io/default-deny created
$ kubectl get networkpolicy
NAME POD-SELECTOR AGE
default-deny <none> 1m
$
This policy selects all pods (the empty podSelector, {}) and, although it lists Ingress as a policy type, defines no ingress rules, so no inbound traffic is allowed. Because the policy selects every pod in the namespace, all pods in the default namespace become isolated.
From an external client, query the docker-registry for its stored images:
external-client:~$ curl -X GET http://192.168.225.251:31836/v2/_catalog
curl: (7) Failed to connect to 192.168.225.251 port 31836: Connection timed out
Perfect. The presence of a network policy shuts down our ability to reach the registry externally.
Create a Permissive Network Policy
To enable clients to access our registry pod, we will need to create a network policy that selects the pod and allows ingress from a CIDR range. Network policies use labels to identify the pods they target; the registry pod has the labels “app=docker-registry” and “release=myreg”, so we can use those to select it. However, those labels are specific to the current chart release. What we really want is to modify the chart with a NetworkPolicy template that uses a parameterized selector and an ingress rule whose CIDR value a user can customize.
A lot of what we need to author the NetworkPolicy spec file is in the existing templates directory of our chart. Let’s take a look at the local copy of the service.yaml template file as an example:
$ cat docker-registry/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ template "docker-registry.fullname" . }}
  labels:
    app: {{ template "docker-registry.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
{{- if .Values.service.annotations }}
  annotations:
{{ toYaml .Values.service.annotations | indent 4 }}
{{- end }}
spec:
  type: {{ .Values.service.type }}
{{- if (and (eq .Values.service.type "ClusterIP") (not (empty .Values.service.clusterIP))) }}
  clusterIP: {{ .Values.service.clusterIP }}
{{- end }}
  ports:
    - port: {{ .Values.service.port }}
      protocol: TCP
      name: {{ .Values.service.name }}
      targetPort: 5000
{{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }}
      nodePort: {{ .Values.service.nodePort }}
{{- end }}
  selector:
    app: {{ template "docker-registry.name" . }}
    release: {{ .Release.Name }}
The metadata name and labels sections can be copied verbatim; we want our new policy file to match what is set here (if you examine the other template files, they are identical). Many of these values are generated automatically by Helm; the template function references a named template defined in _helpers.tpl (in the templates directory) that can be reused across templates. See the Helm docs for more info on named templates.
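For reference, a named template definition typically looks roughly like this (this is the standard scaffolding pattern, abbreviated, not a verbatim copy of this chart’s _helpers.tpl):
{{- define "docker-registry.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}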
We can also use the Service’s selector to create the matchLabels for our NetworkPolicy spec.
Putting that all together, our new NetworkPolicy template looks like this:
$ cat docker-registry/templates/networkpolicy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: {{ template "docker-registry.fullname" . }}
  labels:
    app: {{ template "docker-registry.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  podSelector:
    matchLabels:
      app: {{ template "docker-registry.name" . }}
      release: {{ .Release.Name }}
  ingress:
    - from:
        - ipBlock:
            cidr: {{ .Values.networkPolicy.cidr }}
Finally, we will add a section to the values.yaml file that lets a user specify the CIDR range that clients (like the one we will curl from) are allowed to connect from:
$ tail docker-registry/values.yaml
  fsGroup: 1000
priorityClassName: ""
nodeSelector: {}
tolerations: []
networkPolicy:
  cidr: 192.168.225.0/24
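Because the CIDR is now an ordinary chart value, anyone using the chart can override our default like any other value when installing or upgrading, for example with a made-up range of their own:
--set networkPolicy.cidr=10.20.0.0/16
We’ll stick with the default for the rest of this walkthrough.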
If we want to see how Helm renders our new spec, we can use helm template and pass it the -s (--show-only) argument with the path to our new template file so that it renders only that single template:
$ helm template docker-registry/ -s templates/networkpolicy.yaml
---
# Source: docker-registry/templates/networkpolicy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: RELEASE-NAME-docker-registry
  labels:
    app: docker-registry
    chart: docker-registry-1.8.3
    release: RELEASE-NAME
    heritage: Helm
spec:
  podSelector:
    matchLabels:
      app: docker-registry
      release: RELEASE-NAME
  ingress:
    - from:
        - ipBlock:
            cidr: 192.168.225.0/24
LGTM!
Run the upgrade command again, this time using the local version of the chart:
$ helm upgrade --set service.type=NodePort myreg docker-registry/
Release "myreg" has been upgraded. Happy Helming!
NAME: myreg
LAST DEPLOYED: Mon Dec 16 03:34:43 2019
NAMESPACE: default
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services myreg-docker-registry)
export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
Ask Kubernetes what network policies are in place:
$ kubectl get networkpolicy
NAME POD-SELECTOR AGE
default-deny <none> 20m
myreg-docker-registry app=docker-registry,release=myreg 16s
$
From an external client, query the docker-registry once more:
external-client:~$ curl -X GET http://192.168.225.251:31836/v2/_catalog
{"repositories":["extalpine","myalpine"]}
$
Great, we have access once more! Now, try the same query from a test pod running on the Kubernetes cluster:
$ kubectl run client --generator=run-pod/v1 --image busybox:1.27 --command -- tail -f /dev/null
$ kubectl exec -it client -- wget -qO - http://192.168.225.251:31836/v2/_catalog
wget: can't connect to remote host (192.168.225.251): Connection timed out
command terminated with exit code 1
$
What happened?
Our client pod doesn’t belong to the approved CIDR range, so it is not allowed to reach the registry pod. External Docker daemons in the given range can push and pull images, but pods on the cluster cannot talk to the registry. Our policy and chart release are working as expected. There is a bit more work to do to make this example chart friendlier and more reusable for others (one suggestion follows below), but these steps should get you started using charts and augmenting them to fit your needs.
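As one example of that extra work (our suggestion, not something the upstream chart does), the NetworkPolicy template could be wrapped in an enabled flag, following the common chart convention, so users who don’t want the policy can switch it off:
{{- if .Values.networkPolicy.enabled }}
kind: NetworkPolicy
...
{{- end }}
with a matching default in values.yaml:
networkPolicy:
  enabled: true
  cidr: 192.168.225.0/24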
Happy Helming!
This blog was originally posted as a guest blog for Tigera by our Senior consultant Christopher Hanson.