FRSCA: Tekton

Overview

In our blog series about FRSCA we’ve already deployed a hardened Kubernetes cluster with the help of kOps, Trivy, the NSA/CISA Kubernetes Hardening Guide, and the CIS Benchmark, then we added SPIFFE/SPIRE to our cluster for workload identities and Vault for secret management. In this final installment of the series we deploy Tekton and integrate it with SPIRE and Vault to generate container images with signed provenance.

Tekton is a Kubernetes-native open source framework for creating continuous integration and delivery (CI/CD) systems. Using the Kubernetes model of declarative primitives and specifications, adopters can build, test, and deploy across multiple cloud providers or on-premises systems without having to worry about any underlying implementation details.

Some of the benefits to using Tekton include:

  • The ability to define pipelines, the individual tasks undertaken by pipelines, and the parameters consumed by pipelines as code
  • Each task in a pipeline runs inside its own pod, allowing users to allocate just the resources necessary to perform that task; there is no need for bloated CI/CD servers loaded with (exploitable) tools
  • Like Kubernetes, Tekton is highly extensible. Tasks can be shared through the Tekton community hub to provide functionality for many use cases

Tekton Pipelines

Tekton itself is a collection of tools. The most basic is the Pipelines tool. There are others, including the Chains tool, which allows Tekton to perform artifact signing with Cosign (among other options). Tekton installs and runs as an extension on a Kubernetes cluster and comprises a set of Kubernetes Custom Resource Definitions (CRDs) that define the building blocks used to create and reuse pipelines.

Once installed, Tekton Pipelines resources become available via the Kubernetes CLI (kubectl) and via API calls, just like pods and other resources. Tekton also has the tkn command-line client, though for the sake of simplicity, and to demonstrate just how Kubernetes-native its approach is, this post performs all of the Tekton operations using manifests and kubectl.
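
For reference, if you do install the tkn client, rough equivalents of the kubectl operations used throughout this post would look something like the following (nothing below depends on tkn):

$ tkn task list

$ tkn pipeline list

$ tkn pipelinerun logs <run-name> -f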

To install Tekton we can simply tell Kubernetes that our desired state is to have Tekton running. Deploy the Tekton GA release manifests to the cluster:

$ kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

namespace/tekton-pipelines created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-tenant-access created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-webhook-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-events-controller-cluster-access created
role.rbac.authorization.k8s.io/tekton-pipelines-controller created
role.rbac.authorization.k8s.io/tekton-pipelines-webhook created
role.rbac.authorization.k8s.io/tekton-pipelines-leader-election created
role.rbac.authorization.k8s.io/tekton-pipelines-info created
serviceaccount/tekton-pipelines-controller created
serviceaccount/tekton-pipelines-webhook created
serviceaccount/tekton-events-controller created
clusterrolebinding.rbac.authorization.k8s.io/tekton-pipelines-controller-cluster-access created
clusterrolebinding.rbac.authorization.k8s.io/tekton-pipelines-controller-tenant-access created
clusterrolebinding.rbac.authorization.k8s.io/tekton-pipelines-webhook-cluster-access created
clusterrolebinding.rbac.authorization.k8s.io/tekton-events-controller-cluster-access created
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-controller created
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-webhook created
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-webhook-leaderelection created
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-info created
rolebinding.rbac.authorization.k8s.io/tekton-events-controller created
rolebinding.rbac.authorization.k8s.io/tekton-events-controller-leaderelection created
customresourcedefinition.apiextensions.k8s.io/clustertasks.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/customruns.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/pipelines.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/pipelineruns.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/resolutionrequests.resolution.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/tasks.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/taskruns.tekton.dev created
customresourcedefinition.apiextensions.k8s.io/verificationpolicies.tekton.dev created
secret/webhook-certs created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.pipeline.tekton.dev created
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.pipeline.tekton.dev created
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.pipeline.tekton.dev created
clusterrole.rbac.authorization.k8s.io/tekton-aggregate-edit created
clusterrole.rbac.authorization.k8s.io/tekton-aggregate-view created
configmap/config-defaults created
configmap/feature-flags created
configmap/pipelines-info created
configmap/config-leader-election created
configmap/config-logging created
configmap/config-observability created
configmap/config-registry-cert created
configmap/config-spire created
deployment.apps/tekton-pipelines-controller created
service/tekton-pipelines-controller created
deployment.apps/tekton-events-controller created
service/tekton-events-controller created
namespace/tekton-pipelines-resolvers created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-resolvers-resolution-request-updates created
role.rbac.authorization.k8s.io/tekton-pipelines-resolvers-namespace-rbac created
serviceaccount/tekton-pipelines-resolvers created
clusterrolebinding.rbac.authorization.k8s.io/tekton-pipelines-resolvers created
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-resolvers-namespace-rbac created
configmap/bundleresolver-config created
configmap/cluster-resolver-config created
configmap/resolvers-feature-flags created
configmap/config-leader-election created
configmap/config-logging created
configmap/config-observability created
configmap/git-resolver-config created
configmap/hubresolver-config created
deployment.apps/tekton-pipelines-remote-resolvers created
horizontalpodautoscaler.autoscaling/tekton-pipelines-webhook created
deployment.apps/tekton-pipelines-webhook created
service/tekton-pipelines-webhook created

$

As you can see from the kubectl output, the Tekton release creates a namespace called tekton-pipelines and then generates many resources, including a host of security primitives (service accounts, RBAC roles and bindings, etc.).

You may also notice that Tekton defines several Custom Resource Definitions (CRDs), in particular:

  • Task – useful for simple workloads such as running a test, a lint, or building a Kaniko cache; a single Task executes a sequence of steps in a single Kubernetes Pod, uses a single disk, and generally keeps things simple
  • Pipeline – useful for complex workloads, such as static analysis, as well as testing, building, and deploying complex projects; pipelines are defined as a series of Tasks

Both Tasks and Pipelines can be executed multiple times. Each instance of a run is known as a TaskRun or a PipelineRun respectively.
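
To make this concrete, a minimal TaskRun might look like the sketch below; the Task and parameter names here are purely illustrative and are not part of this blog's setup:

apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  generateName: example-run-        # a new TaskRun object is created for every execution
spec:
  taskRef:
    name: example-task              # hypothetical Task that must already exist in the namespace
  params:
  - name: message                   # run-specific value for a parameter the Task declares
    value: "hello"

PipelineRuns follow the same pattern, referencing a Pipeline instead of a Task; later in this post we create PipelineRuns directly and let Tekton create the underlying TaskRuns.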

List the pods in the tekton-pipelines namespace:

$ kubectl get po -n tekton-pipelines

NAME                                           READY   STATUS    RESTARTS   AGE
tekton-events-controller-77857f9b75-lkvk2      1/1     Running   0          59s
tekton-pipelines-controller-6987c95899-cxhqb   1/1     Running   0          59s
tekton-pipelines-webhook-7d9c8c6f8-xslkv       1/1     Running   0          59s

$

When you see all of the pods report a STATUS of "Running" and READY of "1/1", you are clear to continue.

Establishing the pull-build-containerize-push pipeline

Now that Tekton Pipelines is installed and functional, it is time to assemble the initial Pipeline. It will be simple:

  • Pull the source code from the RX-M Hostinfo public git repo
  • Perform the Docker build using Kaniko
  • Push the built image to an on-cluster registry (Kaniko performs the push as part of its build)

To perform these, the user must define several Kubernetes objects:

  • Tekton Tasks that define the tools necessary
  • Tekton Pipelines that organize Tasks into a sequence and define things like common storage and variables
  • Tekton PipelineRuns that feed user-defined parameters into Pipelines and effectively trigger them

Check whether your cluster already has any of these resources:

$ kubectl get tasks,pipelines,pipelineruns -A

No resources found

$

Nothing. The Tekton Pipelines installation you did earlier is very lightweight.

To begin, you need to add a couple of Tasks. Tekton Tasks are available through both the Tekton community hub at https://hub.tekton.dev/ and ArtifactHub at https://artifacthub.io/.

We need two Tasks for this initial pipeline: git-clone and kaniko.

Installing these is as simple as applying their YAML files to the cluster:

$ kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/git-clone/0.9/git-clone.yaml \
-f https://raw.githubusercontent.com/tektoncd/catalog/main/task/kaniko/0.6/kaniko.yaml

task.tekton.dev/git-clone created
task.tekton.dev/kaniko created

$ kubectl get tasks

NAME        AGE
git-clone   29s
kaniko      29s

$

Some of the major elements of each Task are (a minimal illustration follows this list):

  • The image of the container that will run when the Task is in progress
  • Parameters that can be passed into the application executed by the Task
  • The kinds of workspace (volume) options available to the Task
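
As a rough illustration only (not one of the catalog Tasks used in this post), a Task combining these elements might look like:

apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: example-task                # hypothetical name, matching the TaskRun sketch earlier
spec:
  params:
  - name: message                   # parameter passed to the step at run time
    type: string
  workspaces:
  - name: source                    # volume the TaskRun/PipelineRun must bind (PVC, emptyDir, etc.)
  steps:
  - name: print
    image: alpine:3.19              # the container image that runs when the Task executes
    script: |
      echo "$(params.message)" | tee $(workspaces.source.path)/message.txt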

Now that you have Tasks, it’s time to assemble them into a sequence. This is defined as a Pipeline resource.

Create a specification for a Pipeline as shown below (we will explain the parts after):

$ nano pipeline.yaml && cat $_

apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: clone-build-push
spec:
  description: |
    This pipeline clones a git repo, builds a Docker image with Kaniko and
    pushes it to a registry
  params:
  - name: context-path
    type: string
  - name: dockerfile-path
    type: string
  - name: extra-args
    type: array
  - name: image-reference
    type: string
  - name: repo-url
    type: string
  workspaces:
  - name: shared-data
  tasks:
  - name: fetch-source
    taskRef:
      name: git-clone
    workspaces:
    - name: output
      workspace: shared-data
    params:
    - name: url
      value: $(params.repo-url)
  - name: build-push
    runAfter: ["fetch-source"]
    taskRef:
      name: kaniko
    workspaces:
    - name: source
      workspace: shared-data
    params:
    - name: CONTEXT
      value: $(params.context-path)
    - name: DOCKERFILE
      value: $(params.dockerfile-path)
    - name: EXTRA_ARGS
      value: ["$(params.extra-args[*])"]
    - name: IMAGE
      value: $(params.image-reference)

$

This Pipeline defines the following:

  • The unique identifier of name: clone-build-push
  • Five parameters that will allow users to provide arguments at runtime: context-path, dockerfile-path, extra-args, image-reference, and repo-url
  params:
  - name: context-path
    type: string
  - name: dockerfile-path
    type: string
  - name: extra-args
    type: array
  - name: image-reference
    type: string
  - name: repo-url
    type: string
  • A workspace, which declares a shared volume (bound to a Kubernetes persistent volume at run time) that allows each Task pod to share data
  workspaces:
  - name: shared-data
  • The sequence of Tasks, which references the existing Tasks in the namespace, defines which workspace they use, and specifies how each Task consumes the parameters defined in the pipeline
  tasks:
  - name: fetch-source
    taskRef:
      name: git-clone
    workspaces:
    - name: output
      workspace: shared-data
    params:
    - name: url
      value: $(params.repo-url)
  - name: build-push
    runAfter: ["fetch-source"]
    taskRef:
      name: kaniko
    workspaces:
    - name: source
      workspace: shared-data
    params:
    - name: CONTEXT
      value: $(params.context-path)
    - name: DOCKERFILE
      value: $(params.dockerfile-path)
    - name: EXTRA_ARGS
      value: ["$(params.extra-args[*])"]
    - name: IMAGE
      value: $(params.image-reference)

Once the Pipeline is defined, apply it to the cluster:

$ kubectl apply -f pipeline.yaml

pipeline.tekton.dev/clone-build-push created

$

It is now ready to run!

To run the Pipeline, you need to create a PipelineRun object in the Kubernetes API. This object defines:

  • The run-specific values for the parameters defined in the Pipeline
  • The definition of storage to be used by the run of the Pipeline
  • Other modifications such as security context settings for the Task pods

Create a yaml file for the PipelineRun with the following contents:

$ nano pipelinerun.yaml && cat $_

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: clone-build-push-run-
spec:
  pipelineRef:
    name: clone-build-push
  taskRunTemplate:
    podTemplate:
      securityContext:
        fsGroup: 65532
  workspaces:
  - name: shared-data
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  params:
  - name: context-path
    value: ./python
  - name: dockerfile-path
    value: ./python/Dockerfile
  - name: extra-args
    value:
    - --insecure=true
  - name: image-reference
    value: reg-docker-registry.registry.svc.cluster.local/hostinfo:tekton
  - name: repo-url
    value: https://github.com/RX-M/hostinfo.git

$

As soon as you apply this PipelineRun to the cluster, the Pipeline will trigger.

Before we do that, we need a container registry where we can push the image. To keep things simple and self-contained we will show you how to install a basic, on-cluster registry so you can reproduce this blog without any external dependencies. If you want to use an external registry, you will need to change 2 settings in the PipelineRun:

  - name: extra-args
    value:
    - --insecure=true
  - name: image-reference
    value: reg-docker-registry.registry.svc.cluster.local/hostinfo:tekton
  • Change the --insecure flag to false if you are using a secure registry (which you should!)
  • Change the FQIN reg-docker-registry.registry.svc.cluster.local/hostinfo:tekton to a valid registry FQIN in the pattern <hostname:port/account/repo:tag>

Deploy the on-cluster registry by adding the following repo to Helm and install the registry with the service port set to 80:

$ helm repo add twuni https://helm.twun.io

$ helm install reg twuni/docker-registry --namespace registry --create-namespace --set "service.port=80"

NAME: reg
LAST DEPLOYED: Thu Mar 14 04:56:40 2024
NAMESPACE: registry
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace registry -l "app=docker-registry,release=reg" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl -n registry port-forward $POD_NAME 8080:5000

$

This registry uses a ClusterIP service so it is only available within our cluster. This is fine as the Kaniko builder is running there too!

Make sure you use kubectl create so that the PipelineRun name is automatically generated from generateName:

$ kubectl create -f pipelinerun.yaml

pipelinerun.tekton.dev/clone-build-push-run-zw28x created

$

Now watch the pipeline work! A good way is to use the -w switch so you can keep track of status changes on the pods:

$ kubectl get po -w

NAME                                          READY   STATUS     RESTARTS   AGE
affinity-assistant-2793d681c2-0               1/1     Running    0          9s
clone-build-push-run-wtqq7-fetch-source-pod   0/1     Init:1/2   0          8s
clone-build-push-run-wtqq7-fetch-source-pod   0/1     Init:1/2   0          9s
clone-build-push-run-wtqq7-fetch-source-pod   0/1     PodInitializing   0          10s
clone-build-push-run-wtqq7-fetch-source-pod   1/1     Running           0          14s
clone-build-push-run-wtqq7-fetch-source-pod   1/1     Running           0          14s
clone-build-push-run-wtqq7-fetch-source-pod   0/1     Completed         0          17s
clone-build-push-run-wtqq7-fetch-source-pod   0/1     Completed         0          18s
clone-build-push-run-wtqq7-build-push-pod     0/2     Pending           0          0s
clone-build-push-run-wtqq7-build-push-pod     0/2     Pending           0          0s
clone-build-push-run-wtqq7-build-push-pod     0/2     Init:0/3          0          1s
clone-build-push-run-wtqq7-fetch-source-pod   0/1     Completed         0          19s
clone-build-push-run-wtqq7-build-push-pod     0/2     Init:1/3          0          2s
clone-build-push-run-wtqq7-build-push-pod     0/2     Init:2/3          0          3s
clone-build-push-run-wtqq7-build-push-pod     0/2     PodInitializing   0          5s
clone-build-push-run-wtqq7-build-push-pod     2/2     Running           0          10s
clone-build-push-run-wtqq7-build-push-pod     2/2     Running           0          10s
clone-build-push-run-wtqq7-build-push-pod     0/2     Completed         0          2m18s
affinity-assistant-2793d681c2-0               1/1     Terminating       0          2m37s
affinity-assistant-2793d681c2-0               0/1     Terminating       0          2m37s
affinity-assistant-2793d681c2-0               0/1     Terminating       0          2m38s
affinity-assistant-2793d681c2-0               0/1     Terminating       0          2m38s
affinity-assistant-2793d681c2-0               0/1     Terminating       0          2m38s
clone-build-push-run-wtqq7-build-push-pod     0/2     Completed         0          2m19s
clone-build-push-run-wtqq7-build-push-pod     0/2     Completed         0          2m20s

^C

$

To make sure it worked, issue a curl command to your local registry’s /v2/_catalog from a temporary pod:

$ kubectl run -it --rm curl --image rxmllc/tools

/ # curl reg-docker-registry.registry/v2/_catalog

{"repositories":["hostinfo"]}

/ # exit

Session ended, resume using 'kubectl attach curl -c curl -i -t' command when the pod is running
pod "curl" deleted

$

The pipeline worked!

Since the build used pods, you can view the logs on each pod to audit what they may have done. Tekton labels its pods by Task using the tekton.dev/pipelineTask label. Recall that our Tasks were named fetch-source and build-push; use these labels to get the logs for your pods:

$ kubectl logs -l tekton.dev/pipelineTask=fetch-source

Defaulted container "step-clone" out of: step-clone, prepare (init), place-scripts (init)
+ cd /workspace/output/
+ git rev-parse HEAD
+ RESULT_SHA=d69d7ff7101a093225f6a830753d1d40a928e423
+ EXIT_CODE=0
+ '[' 0 '!=' 0 ]
+ git log -1 '--pretty=%ct'
+ RESULT_COMMITTER_DATE=1706322186
+ printf '%s' 1706322186
+ printf '%s' d69d7ff7101a093225f6a830753d1d40a928e423
+ printf '%s' https://github.com/RX-M/hostinfo.git

$ kubectl logs -l tekton.dev/pipelineTask=build-push

Defaulted container "step-build-and-push" out of: step-build-and-push, step-write-url, prepare (init), place-scripts (init), working-dir-initializer (init)
INFO[0157] args: [-c chown 1000:1000 __main__.py]       
INFO[0157] Running: [/bin/sh -c chown 1000:1000 __main__.py] 
INFO[0157] Taking snapshot of full filesystem...        
INFO[0157] USER 1000                                    
INFO[0157] cmd: USER                                    
INFO[0157] ENV PYTHONUNBUFFERED=1                       
INFO[0157] ENTRYPOINT ["./__main__.py"]                 
INFO[0157] CMD ["9898"]                                 
INFO[0157] Pushing image to reg-docker-registry.registry.svc.cluster.local/hostinfo:tekton 
INFO[0933] Pushed image to 1 destinations 

$

To clean these up, we remove just the PipelineRun object (below we use the short name pr):

$ kubectl get pr

NAME                         SUCCEEDED   REASON      STARTTIME   COMPLETIONTIME
clone-build-push-run-v5p4z   True        Succeeded   26m         10m

$ kubectl delete pr clone-build-push-run-v5p4z

pipelinerun.tekton.dev "clone-build-push-run-v5p4z" deleted

$

If you ever need to debug a PipelineRun, you can see relevant information using kubectl describe.
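
For example (substituting your own run name), a command along these lines:

$ kubectl describe pipelinerun clone-build-push-run-v5p4z

will show the PipelineRun's spec (including resolved parameters), its status conditions, and related events.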

Now, how about we add an SBOM to the mix?

Add an SBOM to the Pipeline

Expanding a pipeline in Tekton is a matter of adding the appropriate task to the pipeline. For SBOMs, Anchore has prepared a syft task which you can install onto your cluster, documented here: https://hub.tekton.dev/tekton/task/syft. The main purpose of this task is to give your CI/CD pipelines the ability to automatically create a new SBOM after a container image is built (or at any point of your pipeline, really).

Install the syft Tekton task:

$ kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/syft/0.1/syft.yaml

task.tekton.dev/syft created

$

The syft Task takes an array of arguments accepted by the standalone syft binary.
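
In other words, whatever you would pass to syft on the command line can be passed through the Task. For comparison only, a local invocation equivalent to what we are about to configure would look roughly like:

$ syft reg-docker-registry.registry.svc.cluster.local/hostinfo:tekton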

Next, modify the Pipeline so it now has a syft Task:

$ nano sbom-pipeline.yaml && cat $_

apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: clone-build-push
spec:
  description: |
    This pipeline clones a git repo, builds a Docker image with Kaniko and
    pushes it to a registry
  params:
  - name: context-path
    type: string
  - name: dockerfile-path
    type: string
  - name: extra-args
    type: array
  - name: image-reference
    type: string
  - name: repo-url
    type: string
  - name: syft-args                        # add this
    type: array                            # add this
  workspaces:
  - name: shared-data
  tasks:
  - name: fetch-source
    taskRef:
      name: git-clone
    workspaces:
    - name: output
      workspace: shared-data
    params:
    - name: url
      value: $(params.repo-url)
  - name: build-push
    runAfter: ["fetch-source"]
    taskRef:
      name: kaniko
    workspaces:
    - name: source
      workspace: shared-data
    params:
    - name: CONTEXT
      value: $(params.context-path)
    - name: DOCKERFILE
      value: $(params.dockerfile-path)
    - name: EXTRA_ARGS
      value: ["$(params.extra-args[*])"]
    - name: IMAGE
      value: $(params.image-reference)
  - name: syft-sbom                        # add everything from here down
    runAfter: ["build-push"]
    taskRef:
      name: syft
    workspaces:
    - name: source-dir
      workspace: shared-data
    params:
    - name: ARGS
      value: ["$(params.syft-args[*])"]

$

This new Task uses the shared-data workspace and consumes an array of arguments known as syft-args. The arguments are passed into the task’s ARGS parameter.

Apply the updated Pipeline:

$ kubectl apply -f sbom-pipeline.yaml

pipeline.tekton.dev/clone-build-push configured

$

Next, create a PipelineRun that supplies the new syft-args parameter (which feeds the Task’s ARGS):

$ nano sbom-pipelinerun.yaml && cat $_

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: clone-build-push-run-
spec:
  pipelineRef:
    name: clone-build-push
  taskRunTemplate:
    podTemplate:
      securityContext:
        fsGroup: 65532
  workspaces:
  - name: shared-data
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  params:
  - name: context-path
    value: ./python
  - name: dockerfile-path
    value: ./python/Dockerfile
  - name: extra-args
    value:
    - --insecure=true
  - name: image-reference
    value: reg-docker-registry.registry.svc.cluster.local/hostinfo:tekton
  - name: repo-url
    value: https://github.com/RX-M/hostinfo.git
  - name: syft-args                                  # add this and everything below
    value:
    - reg-docker-registry.registry.svc.cluster.local/hostinfo:tekton

$

By providing just the image as the argument, syft will generate an SBOM in its own format and output it to its container’s stdout stream.

Create the new PipelineRun and watch the pods again using the -w switch:

$ kubectl create -f sbom-pipelinerun.yaml ; kubectl get po -w

pipelinerun.tekton.dev/clone-build-push-run-dztl4 created
NAME                                          READY   STATUS      RESTARTS   AGE
affinity-assistant-89945f810c-0               1/1     Running     0          11s
clone-build-push-run-dztl4-build-push-pod     0/2     Init:1/3    0          3s
clone-build-push-run-dztl4-fetch-source-pod   0/1     Completed   0          11s
clone-build-push-run-dztl4-build-push-pod     0/2     Init:2/3    0          4s
clone-build-push-run-dztl4-build-push-pod     0/2     PodInitializing   0          5s
clone-build-push-run-dztl4-build-push-pod     2/2     Running           0          6s
clone-build-push-run-dztl4-build-push-pod     2/2     Running           0          6s
clone-build-push-run-dztl4-build-push-pod     1/2     NotReady          0          117s
clone-build-push-run-dztl4-build-push-pod     0/2     Completed         0          118s
clone-build-push-run-dztl4-build-push-pod     0/2     Completed         0          119s
clone-build-push-run-dztl4-build-push-pod     0/2     Completed         0          2m
clone-build-push-run-dztl4-syft-sbom-pod      0/1     Pending           0          0s
clone-build-push-run-dztl4-syft-sbom-pod      0/1     Pending           0          0s
clone-build-push-run-dztl4-syft-sbom-pod      0/1     Init:0/2          0          0s
clone-build-push-run-dztl4-syft-sbom-pod      0/1     Init:0/2          0          1s
clone-build-push-run-dztl4-syft-sbom-pod      0/1     Init:1/2          0          2s
clone-build-push-run-dztl4-syft-sbom-pod      0/1     PodInitializing   0          3s
clone-build-push-run-dztl4-syft-sbom-pod      1/1     Running           0          5s
clone-build-push-run-dztl4-syft-sbom-pod      1/1     Running           0          5s
clone-build-push-run-dztl4-syft-sbom-pod      0/1     Completed         0          7s
affinity-assistant-89945f810c-0               1/1     Terminating       0          2m15s
affinity-assistant-89945f810c-0               0/1     Terminating       0          2m16s
affinity-assistant-89945f810c-0               0/1     Terminating       0          2m16s
affinity-assistant-89945f810c-0               0/1     Terminating       0          2m16s
affinity-assistant-89945f810c-0               0/1     Terminating       0          2m16s
clone-build-push-run-dztl4-syft-sbom-pod      0/1     Completed         0          8s
clone-build-push-run-dztl4-syft-sbom-pod      0/1     Completed         0          9s

^C


$

That new syft-sbom-pod indicates that Tekton is successfully running the new SBOM generation step!

Once everything completes, check the logs to see if the SBOM was actually created:

$ kubectl logs -l tekton.dev/pipelineTask=syft-sbom

Defaulted container "step-syft" out of: step-syft, prepare (init), working-dir-initializer (init)
python                  3.12.2                binary  
readline                8.2.1-r2              apk     
scanelf                 1.3.7-r2              apk     
setuptools              69.0.3                python  
sqlite-libs             3.44.2-r0             apk     
ssl_client              1.36.1-r15            apk     
tzdata                  2023d-r0              apk     
wheel                   0.42.0                python  
xz-libs                 5.4.5-r0              apk     
zlib                    1.3.1-r0              apk

$

Great! Syft is now running properly and generates an SBOM.

Remember that with SBOMs, machine readability and access to the SBOM document are key. With that in mind, we will want to produce an artifact that we can sign and push later. Say, for example, that you have a requirement to provide an SPDX-formatted JSON SBOM.

Since the syft Task takes arguments accepted by the standalone binary, those can easily be added as arguments to the PipelineRun:

$ nano sbom-pipelinerun.yaml && cat $_

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: clone-build-push-run-
spec:
  pipelineRef:
    name: clone-build-push
  taskRunTemplate:
    podTemplate:
      securityContext:
        fsGroup: 65532
  workspaces:
  - name: shared-data
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  params:
  - name: context-path
    value: ./python
  - name: dockerfile-path
    value: ./python/Dockerfile
  - name: extra-args
    value:
    - --insecure=true
  - name: image-reference
    value: reg-docker-registry.registry.svc.cluster.local/hostinfo:tekton
  - name: repo-url
    value: https://github.com/RX-M/hostinfo.git
  - name: syft-args
    value:
    - reg-docker-registry.registry.svc.cluster.local/hostinfo:tekton
    - -o                                             # add this argument and the one below it
    - spdx-json

$

With these arguments, syft will generate the SBOM in the SPDX JSON format.

Delete the old PipelineRun and create the new one:

$ kubectl delete pr --all; sleep 5; kubectl create -f sbom-pipelinerun.yaml

pipelinerun.tekton.dev "clone-build-push-run-dztl4" deleted
pipelinerun.tekton.dev/clone-build-push-run-tqldn created

$

Watch the PipelineRun’s status on the API:

$ kubectl get pr -w

NAME                         SUCCEEDED   REASON      STARTTIME   COMPLETIONTIME
clone-build-push-run-tqldn   Unknown     Running     40s
clone-build-push-run-tqldn   Unknown     Running     87s
clone-build-push-run-tqldn   True        Succeeded   92s         0s

^C

$

Press ctrl+c once the REASON column shows Succeeded, then examine the logs of the syft-sbom pod:

$ kubectl logs $(kubectl get po -l tekton.dev/pipelineTask=syft-sbom -o name)

Defaulted container "step-syft" out of: step-syft, prepare (init), working-dir-initializer (init)

{
 "spdxVersion": "SPDX-2.3",
 "dataLicense": "CC0-1.0",
 "SPDXID": "SPDXRef-DOCUMENT",
 "name": "reg-docker-registry.registry.svc.cluster.local/hostinfo:tekton",
 "documentNamespace": "https://anchore.com/syft/image/reg-docker-registry.registry.svc.cluster.local/hostinfo-tekton-a20e356f-8006-46b1-838c-84ee3d4a7982",
 "creationInfo": {
  "licenseListVersion": "3.21",
  "creators": [
   "Organization: Anchore, Inc",
   "Tool: syft-0.85.0"
  ],
  "created": "2024-03-14T17:25:22Z"
 },
 "packages": [
  {
   "name": ".python-rundeps",
   "SPDXID": "SPDXRef-Package-apk-.python-rundeps-0ed183b6f816579a",
   "versionInfo": "20240207.221705",
   "downloadLocation": "NONE",
   "filesAnalyzed": false,
   "sourceInfo": "acquired package info from APK DB: /lib/apk/db/installed",
   "licenseConcluded": "NOASSERTION",
   "licenseDeclared": "NOASSERTION",
   "copyrightText": "NOASSERTION",
   "description": "virtual meta package",
   "externalRefs": [
    {
     "referenceCategory": "SECURITY",
     "referenceType": "cpe23Type",
     "referenceLocator": "cpe:2.3:a:.python-rundeps:.python-rundeps:20240207.221705:*:*:*:*:*:*:*"
    },

...

Now that you have created the SBOM, you can take a variety of next steps:

  • Save it as a file rather than echoing it to stdout
  • Sign the SBOM with Cosign and push it to a container registry
  • Generate an attestation with in-toto to help assure your consumers that they can trust your SBOM

We will do all of these to get as close to SLSA Level 3 as possible; to do so we will need Tekton Chains.

Set up Tekton Chains

Tekton Chains is a Kubernetes CRD controller that observes all Tekton TaskRun executions in your cluster. When a TaskRun completes, Chains takes a snapshot of it, converts the snapshot into one or more standard payload formats, signs the payloads, and stores them in a configured backend.

Current features include:

  • Signing TaskRun results and OCI images with user-provided cryptographic keys
  • Attestation formats like in-toto and SLSA
  • Signing with a variety of cryptographic key types and services (x509, KMS)
  • Support for multiple storage backends for signatures

We can install Chains in much the same way we installed Tekton Pipelines. Apply the GA release manifests for Tekton Chains to your Kubernetes cluster:

$ kubectl apply -f https://storage.googleapis.com/tekton-releases/chains/latest/release.yaml

namespace/tekton-chains created
secret/signing-secrets created
configmap/chains-config created
deployment.apps/tekton-chains-controller created
clusterrolebinding.rbac.authorization.k8s.io/tekton-chains-controller-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-chains-controller-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-chains-controller-tenant-access created
clusterrolebinding.rbac.authorization.k8s.io/tekton-chains-controller-tenant-access created
serviceaccount/tekton-chains-controller created
role.rbac.authorization.k8s.io/tekton-chains-leader-election created
rolebinding.rbac.authorization.k8s.io/tekton-chains-controller-leaderelection created
role.rbac.authorization.k8s.io/tekton-chains-info created
rolebinding.rbac.authorization.k8s.io/tekton-chains-info created
configmap/chains-info created
configmap/config-logging created
configmap/tekton-chains-config-observability created
service/tekton-chains-metrics created

$

Like Tekton Pipelines, Tekton Chains creates its own namespace, tekton-chains.

To verify the installation was successful, wait until the Tekton Chains controller pod has a STATUS of "Running":

$ kubectl get po -n tekton-chains

NAME                                        READY   STATUS    RESTARTS   AGE
tekton-chains-controller-84c978f497-5mcvg   1/1     Running   0          33s

$

Give the Chains Controller an SVID

In previous blogs, we deployed SPIRE and Vault on our FRSCA Kubernetes cluster, established trust between our SPIRE server and our Vault server, and used a SPIFFE Verifiable Identity Document (SVID) to retrieve a Vault secret. In this step we will create a registration entry in our SPIRE server for the Kubernetes Service Account used by the Tekton Chains controller so that it too can use an SVID to retrieve Vault secrets.

Make sure you change the cluster name frsca.rx-m.net in the example below to your cluster name.

$ kubectl exec -n spire spire-server-0 -- \
/opt/spire/bin/spire-server entry create \
-spiffeID spiffe://frsca.rx-m.net/ns/tekton-chains/sa/tekton-chains-controller \
-parentID spiffe://frsca.rx-m.net/ns/spire/sa/spire-agent \
-selector k8s:ns:tekton-chains \
-selector k8s:sa:tekton-chains-controller

Defaulted container "spire-server" out of: spire-server, spire-oidc
Entry ID         : 37cc234e-4cd8-43fe-9336-e392ca49044e
SPIFFE ID        : spiffe://frsca.rx-m.net/ns/tekton-chains/sa/tekton-chains-controller
Parent ID        : spiffe://frsca.rx-m.net/ns/spire/sa/spire-agent
Revision         : 0
X509-SVID TTL    : default
JWT-SVID TTL     : default
Selector         : k8s:ns:tekton-chains
Selector         : k8s:sa:tekton-chains-controller

$

With our registration complete, next we configure Vault to trust the Tekton Chains controller’s SVID.

Enable Vault Transit

In this step we will enable the Vault Transit engine to perform secretless/keyless code signing with Tekton Chains. The primary use case for Transit is to perform cryptographic operations on application data while that data continues to live in some primary data store; the keys never leave Vault. Instead, the data is sent to Vault to be encrypted, decrypted, signed, or verified. This removes the need to keep signing keys on a local machine and the need to manage Kubernetes secrets that grant access to them. Instead, we use the SVID of the Chains controller to authenticate to Vault and obtain signed provenance.

Exec an interactive shell in the Vault pod:

$ kubectl exec -n vault -it vault-0 --  /bin/sh

/ $

You should not need to log in again, but if you do, take the initial Vault root token from the Vault initialization and export it as an environment variable, then use vault login to authenticate with the server:

/ $ export VAULT_ROOT_KEY=hvs.UoYoI2i…

/ $ vault login $VAULT_ROOT_KEY

Success! You are now authenticated…

/ $

First we will update the JWT config we created in our Vault blog so that the default_role is set to spire-chains-controller, a role we will create shortly:

/ $ vault write auth/jwt/config oidc_discovery_url=https://oidc-discovery.rx-m.net default_role="spire-chains-controller"

Success! Data written to: auth/jwt/config

/ $ vault read auth/jwt/config

Key                       Value
---                       -----
bound_issuer              n/a
default_role              spire-chains-controller
jwks_ca_pem               n/a
jwks_url                  n/a
jwt_supported_algs        []
jwt_validation_pubkeys    []
namespace_in_state        true
oidc_client_id            n/a
oidc_discovery_ca_pem     n/a
oidc_discovery_url        https://oidc-discovery.rx-m.net
oidc_response_mode        n/a
oidc_response_types       []
provider_config           map[]

/ $

Now, create the role, ensuring that the bound_subject matches the value we used for the SPIFFE ID (-spiffeID) with the spire-server entry create command from the previous section (spiffe://frsca.rx-m.net/ns/tekton-chains/sa/tekton-chains-controller in our case).

Make sure you change the cluster name frsca.rx-m.net in the example below to your cluster name.

/ $ vault write auth/jwt/role/spire-chains-controller \
role_type=jwt \
user_claim=sub \
bound_audiences=BLOG \
bound_subject=spiffe://frsca.rx-m.net/ns/tekton-chains/sa/tekton-chains-controller \
token_ttl=15m \
token_policies=spire-transit

Success! Data written to: auth/jwt/role/spire-chains-controller

/ $

Enable the transit engine:

/ $ vault secrets enable transit

Success! Enabled the transit secrets engine at: transit/

/ $

Write the spire-transit policy, which grants access to the transit engine for the “frsca” key that we will create immediately afterwards:

/ $ vault policy write spire-transit - <<EOF
path "transit/*" {
  capabilities = ["read"]
}
path "transit/sign/frsca" {
  capabilities = ["create", "read", "update"]
}
path "transit/sign/frsca/*" {
  capabilities = ["read", "update"]
}
path "transit/verify/frsca" {
  capabilities = ["create", "read", "update"]
}
path "transit/verify/frsca/*" {
  capabilities = ["read", "update"]
}
EOF

Success! Uploaded policy: spire-transit

/ $

Generate the transit key:

/ $ vault write transit/keys/frsca type=ecdsa-p521

Key                       Value
---                       -----
allow_plaintext_backup    false
auto_rotate_period        0s
deletion_allowed          false
derived                   false
exportable                false
imported_key              false
keys                      map[1:map[certificate_chain: creation_time:2024-03-15T04:34:59.042408917Z name:P-521 public_key:-----BEGIN PUBLIC KEY-----
MIGbMBAGByqGSM49AgEGBSuBBAAjA...
-----END PUBLIC KEY-----
]]
latest_version            1
min_available_version     0
min_decryption_version    1
min_encryption_version    0
name                      frsca
supports_decryption       false
supports_derivation       false
supports_encryption       false
supports_signing          true
type                      ecdsa-p521

/ $
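
If you want to sanity-check the new key, a Transit signing round trip (which is what Chains will do for us automatically) looks roughly like the following; the input must be base64-encoded and the returned signature is prefixed with vault:v1: (placeholders shown instead of real output):

/ $ echo -n "test payload" | base64

/ $ vault write transit/sign/frsca input=<base64 string from above>

/ $ vault write transit/verify/frsca input=<same base64 string> signature=<signature returned by the sign call>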

Exit your interactive exec session (we need to use tools that aren’t in the Vault container).

/ $ exit

$

Use a one-time kubectl exec command to read the key, parse it with jq and store it in a local file on the bastion server:

$ kubectl exec -i -n vault vault-0 -- /bin/sh -c "vault read -format=json transit/keys/frsca" | jq -r .data.keys.\"1\".public_key > "frsca.pem"

$

Create a configmap from the local file:

$ kubectl -n vault create configmap frsca-certs --from-file=frsca.pem

configmap/frsca-certs created

$

Finally, create the signing secret from the same file. Note that the Chains release created an empty signing-secrets secret during installation; if kubectl reports that it already exists, delete it first (kubectl -n tekton-chains delete secret signing-secrets) and re-run the create:

$ kubectl -n tekton-chains create secret generic signing-secrets --from-file=cosign.pub=frsca.pem

secret/signing-secrets created

$

Configure Chains

Tekton Chains creates the provenance for Task and Pipeline runs, then signs it using our secure private key. Chains then uploads the signed provenance to a user-specified location. Chains can be configured to upload to various systems:

  • OCI compliant registry, convenient because image and provenance can be stored together
  • A backend implementing the Grafeas API, defined by Google for storing provenance
  • A Google Cloud Storage Bucket, standard object storage
  • A Firestore document store
  • Others

To update Tekton Chains we will patch its configmap with a yaml file that:

  • Updates the storage for an OCI registry
  • Sets the attestation format to SLSA
  • Sets the signer to kms and points to Vault’s ClusterIP
  • Uses Vault keys with cosign for signing and verification
    • The URI format for Hashicorp Vault KMS is: hashivault://$keyname, in our case hashivault://frsca (the key we created in Vault earlier)
  • Specifies the OIDC role matching the JWT role we configured in Vault, which is linked to our SVID

Create the patch file:

$ nano chains-patch-config.yaml && cat $_

data:
  artifacts.taskrun.storage: tekton,oci
  artifacts.taskrun.format: slsa/v1
  artifacts.pipelinerun.storage: tekton,oci
  artifacts.pipelinerun.format: slsa/v1
  artifacts.oci.signer: kms
  artifacts.taskrun.signer: kms
  artifacts.pipelinerun.signer: kms
  signers.kms.kmsref: "hashivault://frsca"
  signers.kms.auth.address: "http://vault.vault:8200"
  signers.kms.auth.oidc.path: jwt
  signers.kms.auth.oidc.role: "spire-chains-controller"
  signers.kms.auth.spire.sock: "unix:///spiffe-workload-api/agent.sock"
  signers.kms.auth.spire.audience: BLOG

$

Use the file to patch the configmap:

$ kubectl -n tekton-chains patch cm chains-config --patch-file chains-patch-config.yaml

configmap/chains-config patched

$

Now, edit the tekton-chains-controller deployment as follows:

  • Under spec.template.spec.containers[0].volumeMounts for the only container, add a volume mount for the spire-agent-socket so that Chains can use the SPIRE Workload API
  • Under spec.template.spec.volumes add the spire-agent-socket as a hostPath volume (our SPIRE agent DaemonSet is currently using hostPath instead of the SPIRE CSI driver, but we may change that later)

$ kubectl -n tekton-chains edit deploy tekton-chains-controller

…

        volumeMounts:
        - mountPath: /etc/signing-secrets
          name: signing-secrets
        - mountPath: /var/run/sigstore/cosign
          name: oidc-info
        - mountPath: /spiffe-workload-api          # Add this
          name: spire-agent-socket                 # Add this

...

      volumes:
      - name: signing-secrets
        secret:
          defaultMode: 420
          secretName: signing-secrets
      - name: oidc-info
        projected:
          defaultMode: 420
          sources:
          - serviceAccountToken:
              audience: sigstore
              expirationSeconds: 600
              path: oidc-token
      - hostPath:                                  # Add this
          path: /run/spire/sockets                 # Add this
        name: spire-agent-socket                   # Add this

Save your edits and exit the editor.
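
If you prefer a repeatable command over an interactive edit, a strategic merge patch along the following lines should accomplish the same thing (this assumes the container in the Deployment is named tekton-chains-controller; check your Deployment spec if it differs):

$ kubectl -n tekton-chains patch deploy tekton-chains-controller --type strategic --patch '
spec:
  template:
    spec:
      containers:
      - name: tekton-chains-controller
        volumeMounts:
        - name: spire-agent-socket
          mountPath: /spiffe-workload-api
      volumes:
      - name: spire-agent-socket
        hostPath:
          path: /run/spire/sockets
'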

The Chains controller runs under a Kubernetes Deployment, which should now perform a rolling update:

$ kubectl -n tekton-chains get deploy,po

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tekton-chains-controller   1/1     1            1           25h

NAME                                            READY   STATUS              RESTARTS   AGE
pod/tekton-chains-controller-84c978f497-x2l2b   1/1     Running             0          25h
pod/tekton-chains-controller-d5cb79688-24gc7    0/1     ContainerCreating   0          3s

$

Success! Now the Tekton Chains controller pod has access to the SPIRE workload API and has the configuration to use it!
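
A quick way to confirm the new controller pod came up cleanly after the rollout is to check its logs for startup errors; problems with the socket mount or the KMS settings would typically surface here:

$ kubectl -n tekton-chains logs deploy/tekton-chains-controller --tail=20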

SLSA Level 2 Pipeline

With the prerequisites in place we can turn our focus to creating a Pipeline that generates our SBOM and attestations, signs them, and stores them in our on-cluster registry. First we need to replace the basic syft Task with one that writes the SBOM to a file and attaches it to the image in the registry. Create the following Task:

$ nano demo-bom-task-syft.yaml && cat $_

apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: demo-syft-bom-generator
spec:
  params:
  - name: image-ref
    type: string
  - name: image-digest
    type: string
  - default: frsca-sbom.json
    description: filepath to store the sbom artifacts
    name: sbom-filepath
    type: string
  - default: "true"
    name: syft-http
    type: string
  - default: "debug"
    name: syft-log-level
    type: string
  - default: "true"
    name: syft-skip-tls
    type: string
  results:
  - description: status of syft task, possible value are-success|failure
    name: status
    type: string
  - description: name of the uploaded SBOM artifact
    name: SBOM_IMAGE_URL
    type: string
  - description: digest of the uploaded SBOM artifact
    name: SBOM_IMAGE_DIGEST
    type: string
  stepTemplate:
    computeResources: {}
    env:
    - name: SYFT_LOG_LEVEL
      value: $(params.syft-log-level)
    - name: SYFT_REGISTRY_INSECURE_SKIP_TLS_VERIFY
      value: $(params.syft-skip-tls)
    - name: SYFT_REGISTRY_INSECURE_USE_HTTP
      value: $(params.syft-http)
  steps:
  - args:
    - -o
    - spdx-json
    - --file
    - $(workspaces.source.path)/$(params.sbom-filepath)
    - $(params.image-ref)
    image: anchore/syft:v0.58.0@sha256:b764278a9a45f3493b78b8708a4d68447807397fe8c8f59bf21f18c9bee4be94
    name: syft-bom-generator
  - args:
    - attach
    - sbom
    - --sbom
    - $(workspaces.source.path)/$(params.sbom-filepath)
    - --type
    - spdx
    - $(params.image-ref)
    image: gcr.io/projectsigstore/cosign:v2.0.0@sha256:728944a9542a7235b4358c4ab2bcea855840e9d4b9594febca5c2207f5da7f38
    name: attach-sbom
  workspaces:
  - name: source

$

Note the args for the steps; rather than outputting the SBOM to stdout as we did before, we are configuring the Task to write the SBOM to a file, which the second step then attaches to the image in our registry.

Apply the new Task to the cluster:

$ kubectl apply -f demo-bom-task-syft.yaml

task.tekton.dev/demo-syft-bom-generator created

$

Our new Pipeline will reference this task as well as the git-clone and kaniko tasks we used earlier. Create the Pipeline:

$ nano demo-pipeline.yaml && cat $_

apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: demo-pipeline
spec:
  params:
  - name: context-path
    type: string
  - name: dockerfile-path
    type: string
  - name: extra-args
    type: array
  - name: image
    type: string
  - name: imageRepo
    type: string
  - name: imageTag
    type: string
  - name: SOURCE_URL
    type: string
  - name: syft-skip-tls
    type: string
  - name: syft-http
    type: string
  tasks:
  - name: clone-repo
    params:
    - name: url
      value: $(params.SOURCE_URL)
    - name: deleteExisting
      value: "true"
    taskRef:
      kind: Task
      name: git-clone
    workspaces:
    - name: output
      workspace: git-source
  - name: build-and-push-image
    params:
    - name: CONTEXT
      value: $(params.context-path)
    - name: DOCKERFILE
      value: $(params.dockerfile-path)
    - name: EXTRA_ARGS
      value: ["$(params.extra-args[*])"]
    - name: IMAGE
      value: $(params.image)
    runAfter:
    - clone-repo
    taskRef:
      kind: Task
      name: kaniko
    workspaces:
    - name: source
      workspace: git-source
  - name: generate-bom
    params:
    - name: image-ref
      value: $(params.image)
    - name: image-digest
      value: $(tasks.build-and-push-image.results.IMAGE_DIGEST)
    - name: syft-skip-tls
      value: $(params.syft-skip-tls)
    - name: syft-http
      value: $(params.syft-http)
    runAfter:
    - build-and-push-image
    taskRef:
      kind: Task
      name: demo-syft-bom-generator
    workspaces:
    - name: source
      workspace: git-source
  workspaces:
  - name: git-source

$

Like before, this Pipeline clones the source repo, builds and pushes the image, and generates the SBOM. The difference is that Chains will attest to these steps, sign the attestation, and push the attestation to the registry.

Apply the Pipeline to the cluster:

$ kubectl apply -f demo-pipeline.yaml 

pipeline.tekton.dev/demo-pipeline created

$

Last step, create the PipelineRun:

$ nano demo-pipelinerun.yaml && cat $_

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: demo-pipeline-run-
spec:
  params:
  - name: context-path
    value: ./python
  - name: dockerfile-path
    value: ./python/Dockerfile
  - name: extra-args
    value:
    - --insecure=true
    - --verbosity=debug
  - name: image
    value: reg-docker-registry.registry.svc.cluster.local/hostinfo:slsa2
  - name: imageRepo
    value: reg-docker-registry.registry.svc.cluster.local/hostinfo
  - name: imageTag
    value: slsa2
  - name: SOURCE_URL
    value: https://github.com/RX-M/hostinfo.git
  - name: syft-skip-tls
    value: true
  - name: syft-http
    value: true
  pipelineRef:
    name: demo-pipeline
  taskRunTemplate:
    podTemplate:
      securityContext:
        fsGroup: 65532
  timeouts:
    pipeline: 1h0m0s
  workspaces:
  - name: git-source
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi

$

Several of the params in this PipelineRun let us push the artifacts to our on-cluster registry, which is insecure. Again, for the purposes of this blog that keeps things self-contained, but when setting this up with a secure registry you should remove the --insecure flag:

  - name: extra-args
    value:
    - --insecure=true

Also set the following parameters to false:

  - name: syft-skip-tls
    value: false
  - name: syft-http
    value: false

Looking back at the Task, these parameters populate the following environment variables:

    - name: SYFT_REGISTRY_INSECURE_SKIP_TLS_VERIFY
      value: $(params.syft-skip-tls)
    - name: SYFT_REGISTRY_INSECURE_USE_HTTP
      value: $(params.syft-http)

If you wanted to make sure that an insecure registry was never used you would need to modify the Task, the Pipeline, and the PipelineRun to remove all the insecure references. Those modifications are beyond the scope of this blog.

Create your PipelineRun:

$ kubectl create -f demo-pipelinerun.yaml

pipelinerun.tekton.dev/demo-pipeline-run-qggsq created

$

Get your PipelineRun (pr), TaskRuns (tr), and pods (po):

$ kubectl get pr,tr,po

NAME                                             SUCCEEDED   REASON      STARTTIME   COMPLETIONTIME
pipelinerun.tekton.dev/demo-pipeline-run-qggsq   True        Succeeded   52s         16s

NAME                                                              SUCCEEDED   REASON      STARTTIME   COMPLETIONTIME
taskrun.tekton.dev/demo-pipeline-run-qggsq-build-and-push-image   True        Succeeded   38s         25s
taskrun.tekton.dev/demo-pipeline-run-qggsq-clone-repo             True        Succeeded   52s         38s
taskrun.tekton.dev/demo-pipeline-run-qggsq-generate-bom           True        Succeeded   25s         16s

NAME                                                   READY   STATUS      RESTARTS   AGE
pod/demo-pipeline-run-qggsq-build-and-push-image-pod   0/2     Completed   0          38s
pod/demo-pipeline-run-qggsq-clone-repo-pod             0/1     Completed   0          52s
pod/demo-pipeline-run-qggsq-generate-bom-pod           0/2     Completed   0          25s

$

Wait until the PipelineRun and TaskRuns report Succeeded and the pods’ status is Completed (as in the example above); at that point the image and its related metadata files should be in our registry.

To examine the artifacts we will install Crane, a tool created by Google for interacting with remote images and registries. Install Crane using the following commands:

$ VERSION=$(curl -s "https://api.github.com/repos/google/go-containerregistry/releases/latest" | jq -r '.tag_name')

$ OS=Linux 

$ ARCH=x86_64

$ curl -sL "https://github.com/google/go-containerregistry/releases/download/${VERSION}/go-containerregistry_${OS}_${ARCH}.tar.gz" > go-containerregistry.tar.gz

$ sudo tar -zxvf go-containerregistry.tar.gz -C /usr/local/bin/ crane

Now we can use crane ls to list the artifacts in our registry. In the example below we use the LoadBalancer DNS name, which is reachable from the kOps bastion server. You can also use the ClusterIP from anywhere inside the cluster.

$ kubectl get svc -n registry

NAME                  TYPE           CLUSTER-IP     EXTERNAL-IP                                                              PORT(S)                      AGE
reg-docker-registry   LoadBalancer   100.65.5.252   a193d0082bec749e2a9224903c3de288-467219098.us-east-1.elb.amazonaws.com   80:30681/TCP,443:32647/TCP   48m

$ crane ls a193d0082bec749e2a9224903c3de288-467219098.us-east-1.elb.amazonaws.com/hostinfo --insecure

slsa2
sha256-d0a09d791bbe93118bce030cf8201418e59b5f84e3ec91f5b4a52adcbb2176e9.sbom
sha256-d0a09d791bbe93118bce030cf8201418e59b5f84e3ec91f5b4a52adcbb2176e9.att
sha256-d0a09d791bbe93118bce030cf8201418e59b5f84e3ec91f5b4a52adcbb2176e9.sig

$

The artifacts in our registry are as follows (you can inspect any of them with crane, as shown after the list):

  • slsa2 – this is our tagged image
  • sha256-<hash>.sbom – this is the SBOM generated by the syft task
  • sha256-<hash>.att – the attestation file generated by Chains
  • sha256-<hash>.sig – the signature file
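
If you want to peek at one of these artifacts before downloading it, crane can print its manifest; for example, for the attestation:

$ crane manifest a193d0082bec749e2a9224903c3de288-467219098.us-east-1.elb.amazonaws.com/hostinfo:sha256-d0a09d791bbe93118bce030cf8201418e59b5f84e3ec91f5b4a52adcbb2176e9.att --insecure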

We can download the attestation and verify the signature using Cosign, which should already be installed; if it isn’t, you can install it with the following commands:

$ curl -O -L "https://github.com/sigstore/cosign/releases/latest/download/cosign-linux-amd64"

$ sudo mv cosign-linux-amd64 /usr/local/bin/cosign

$ sudo chmod +x /usr/local/bin/cosign

Using the cosign download command, download the attestation and decode the payload:

$ cosign download attestation --allow-insecure-registry --allow-http-registry a193d0082bec749e2a9224903c3de288-467219098.us-east-1.elb.amazonaws.com/hostinfo:slsa2 | jq -r .payload | base64 --decode > att.json

Examine the attestation:

$ cat att.json | jq

{
  "_type": "https://in-toto.io/Statement/v0.1",
  "predicateType": "https://slsa.dev/provenance/v0.2",
  "subject": [
    {
      "name": "reg-docker-registry.registry.svc.cluster.local/hostinfo",
      "digest": {
        "sha256": "d0a09d791bbe93118bce030cf8201418e59b5f84e3ec91f5b4a52adcbb2176e9"
      }
    }
  ],
  "predicate": {
    "builder": {
      "id": "https://tekton.dev/chains/v2"
    },
    "buildType": "tekton.dev/v1beta1/TaskRun",
    "invocation": {
      "configSource": {},
      "parameters": {
        "BUILDER_IMAGE": "gcr.io/kaniko-project/executor:v1.5.1@sha256:c6166717f7fe0b7da44908c986137ecfeab21f31ec3992f6e128fff8a94be8a5",
        "CONTEXT": "./python",
        "DOCKERFILE": "./python/Dockerfile",
        "EXTRA_ARGS": [
          "--insecure=true",
          "--verbosity=debug"
        ],
        "IMAGE": "reg-docker-registry.registry.svc.cluster.local/hostinfo:slsa2"
      },
      "environment": {
        "annotations": {
          "pipeline.tekton.dev/affinity-assistant": "affinity-assistant-91cdf251b8",
          "pipeline.tekton.dev/release": "d714545",
          "tekton.dev/categories": "Image Build",
          "tekton.dev/displayName": "Build and upload container image using Kaniko",
          "tekton.dev/pipelines.minVersion": "0.17.0",
          "tekton.dev/platforms": "linux/amd64,linux/arm64,linux/ppc64le",
          "tekton.dev/tags": "image-build"
        },
        "labels": {
          "app.kubernetes.io/managed-by": "tekton-pipelines",
          "app.kubernetes.io/version": "0.6",
          "tekton.dev/memberOf": "tasks",
          "tekton.dev/pipeline": "demo-pipeline",
          "tekton.dev/pipelineRun": "demo-pipeline-run-qggsq",
          "tekton.dev/pipelineTask": "build-and-push-image",
          "tekton.dev/task": "kaniko"
        }
      }
    },
    "buildConfig": {
      "steps": [
        {
          "entryPoint": "",
          "arguments": [
            "--insecure=true",
            "--verbosity=debug",
            "--dockerfile=./python/Dockerfile",
            "--context=/workspace/source/./python",
            "--destination=reg-docker-registry.registry.svc.cluster.local/hostinfo:slsa2",
            "--digest-file=/tekton/results/IMAGE_DIGEST"
          ],
          "environment": {
            "container": "build-and-push",
            "image": "oci://gcr.io/kaniko-project/executor@sha256:c6166717f7fe0b7da44908c986137ecfeab21f31ec3992f6e128fff8a94be8a5"
          },
          "annotations": null
        },
        {
          "entryPoint": "set -e\nimage=\"reg-docker-registry.registry.svc.cluster.local/hostinfo:slsa2\"\necho -n \"${image}\" | tee \"/tekton/results/IMAGE_URL\"\n",
          "arguments": null,
          "environment": {
            "container": "write-url",
            "image": "oci://docker.io/library/bash@sha256:c523c636b722339f41b6a431b44588ab2f762c5de5ec3bd7964420ff982fb1d9"
          },
          "annotations": null
        }
      ]
    },
    "metadata": {
      "buildStartedOn": "2024-05-24T22:32:40Z",
      "buildFinishedOn": "2024-05-24T22:32:53Z",
      "completeness": {
        "parameters": false,
        "environment": false,
        "materials": false
      },
      "reproducible": false
    },
    "materials": [
      {
        "uri": "oci://gcr.io/kaniko-project/executor",
        "digest": {
          "sha256": "c6166717f7fe0b7da44908c986137ecfeab21f31ec3992f6e128fff8a94be8a5"
        }
      },
      {
        "uri": "oci://docker.io/library/bash",
        "digest": {
          "sha256": "c523c636b722339f41b6a431b44588ab2f762c5de5ec3bd7964420ff982fb1d9"
        }
      }
    ]
  }
}

$

We can also verify the signature using the cosign verify command:

$ cosign verify --insecure-ignore-tlog --allow-insecure-registry --key k8s://tekton-chains/signing-secrets a193d0082bec749e2a9224903c3de288-467219098.us-east-1.elb.amazonaws.com/hostinfo:slsa2

WARNING: Skipping tlog verification is an insecure practice that lacks of transparency and auditability verification for the signature.

Verification for a193d0082bec749e2a9224903c3de288-467219098.us-east-1.elb.amazonaws.com/hostinfo:slsa2 --
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - The signatures were verified against the specified public key

[{"critical":{"identity":{"docker-reference":"reg-docker-registry.registry.svc.cluster.local/hostinfo"},"image":{"docker-manifest-digest":"sha256:d0a09d791bbe93118bce030cf8201418e59b5f84e3ec91f5b4a52adcbb2176e9"},"type":"cosign container image signature"},"optional":null}]

$

Cosign has verified the signature! We can now provide signed provenance to our users/customers.
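
Beyond the image signature, the attestation itself is signed as well; a command along these lines (same key and registry flags as above) should verify it:

$ cosign verify-attestation --type slsaprovenance --insecure-ignore-tlog --allow-insecure-registry --key k8s://tekton-chains/signing-secrets a193d0082bec749e2a9224903c3de288-467219098.us-east-1.elb.amazonaws.com/hostinfo:slsa2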

Conclusion

Using FRSCA tooling we have achieved SLSA Build Level 2, which as a reminder requires:

All of Build Level 1:

  • Software producer follows a consistent build process so that others can form expectations about what a “correct” build looks like
    • The Dockerfile in our git repo and the Pipeline definition are both considered “consistent build process”
  • Provenance exists
    • The Syft task generates our SBOM and Chains provides the attestations
  • Software producer distributes provenance to consumers, preferably using a convention determined by the package ecosystem
    • Since our FRSCA cluster uses the registry, we are distributing the provenance using a convention of the container ecosystem

Plus Build Level 2:

  • Build platform runs on dedicated infrastructure
    • Our kOps based cluster meets this requirement
  • Provenance is tied to build infrastructure through a digital signature
    • Our provenance is signed by a key that is only accessible to the build platform in the Vault server

At this point, this is as far as we can go; SLSA Build Level 3 is not possible in a FRSCA-based cluster at the moment. This is because Chains has no way to verify that a given TaskRun it received wasn’t modified by anything other than Tekton during or after execution, and Tekton Pipelines can’t verify that the results it reads weren’t modified. This means that unfalsifiable provenance is currently impossible to achieve. The solution is tighter integration with SPIRE, and there is ongoing work in the community to make that happen; check out TEP-0089 for more details.

While this is the last blog in our FRSCA series, the road to unfalsifiable provenance for software artifacts is a marathon, and as the projects in the FRSCA architecture evolve and improve, we will be following along. Thanks for taking this journey with us!