From f9c92901f12f4086bffe864ee70e771bad3c28ae Mon Sep 17 00:00:00 2001 From: versioning_user Date: Fri, 27 Sep 2024 11:21:06 +0000 Subject: [PATCH] Deployed 62336697 to develop with MkDocs 1.6.1 and mike 2.0.0 --- develop/search/search_index.json | 2 +- develop/validators/sigstore_cosign/index.html | 18 +++++++++--------- 2 files changed, 10 insertions(+), 10 deletions(-) diff --git a/develop/search/search_index.json b/develop/search/search_index.json index 287669be2..f93f28633 100644 --- a/develop/search/search_index.json +++ b/develop/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome to Connaisseur","text":"

A Kubernetes admission controller to integrate container image signature verification and trust pinning into a cluster.

"},{"location":"#what-is-connaisseur","title":"What is Connaisseur?","text":"

Connaisseur ensures integrity and provenance of container images in a Kubernetes cluster. To do so, it intercepts resource creation or update requests sent to the Kubernetes cluster, identifies all container images and verifies their signatures against pre-configured public keys. Based on the result, it either accepts or denies those requests.

Connaisseur is developed under three core values: Security, Usability, Compatibility. It is built to be extendable and currently aims to support the following signing solutions:

It provides several additional features such as:

Feel free to reach out via GitHub Discussions!

"},{"location":"#quick-start","title":"Quick start","text":"

Getting started with verifying image signatures is only a matter of minutes:

Warning

Only try this out on a test cluster as deployments with unsigned images will be blocked.

Connaisseur comes pre-configured with public keys for its own repository and Docker's official images (official images can be found here). It can be fully configured via charts/connaisseur/values.yaml. For a quick start, clone the Connaisseur repository:

git clone https://github.com/sse-secure-systems/connaisseur.git\n

Next, install Connaisseur via Helm:

helm install connaisseur helm --atomic --create-namespace --namespace connaisseur\n

Once installation has finished, you are good to go. Successful verification can be tested via official Docker images like hello-world:

kubectl run hello-world --image=docker.io/hello-world\n

Or our signed testimage:

kubectl run demo --image=docker.io/securesystemsengineering/testimage:signed\n

Both will return pod/<name> created. However, when trying to deploy an unsigned image:

kubectl run demo --image=docker.io/securesystemsengineering/testimage:unsigned\n

Connaisseur denies the request and returns an error (...) Unable to find signed digest (...). Since the images above are signed using Docker Content Trust, you can inspect the trust data using docker trust inspect --pretty <image-name>.
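
For example, the trust data of the signed test image used above can be inspected via (the exact output depends on the repository's current trust data):

docker trust inspect --pretty docker.io/securesystemsengineering/testimage\n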

To uninstall Connaisseur use:

helm uninstall connaisseur --namespace connaisseur\n

Congrats, you just validated the first images in your cluster! To get started configuring and verifying your own images and signatures, please follow our setup guide.

"},{"location":"#how-does-it-work","title":"How does it work?","text":"

Integrity and provenance of container images deployed to a Kubernetes cluster can be ensured via digital signatures. On a very basic level, this requires two steps:

  1. Signing container images after building
  2. Verifying the image signatures before deployment

Connaisseur aims to solve step two. This is achieved by implementing several validators, i.e. configurable signature verification modules for different signing solutions (e.g. Notary V1). While the detailed security considerations mainly depend on the applied solution, Connaisseur in general verifies the signature over the container image content against a trust anchor or trust root (e.g. public key) and thus lets you ensure that images have not been tampered with (integrity) and come from a valid source (provenance).

"},{"location":"#trusted-digests","title":"Trusted digests","text":"

But what is actually verified? Container images can be referenced in two different ways based on their registry, repository and image name (<registry>/<repository>/<image name>), followed by either a tag or a digest:

While the tag is a mutable, human-readable description, the digest is an immutable, inherent property of the image, namely the SHA256 hash of its content. This also means that a tag can correspond to varying digests whereas digests are unique for each image. The container runtime (e.g. containerd) compares the image content with the received digest before spinning up the container. As a result, Connaisseur just needs to make sure that only trusted digests (signed by a trusted entity) are passed to the container runtime. Depending on how an image for deployment is referenced, it will either attempt to translate the tag to a trusted digest or validate whether the digest is trusted. How the digest is signed in detail, where the signature is stored, what it is verified against and how different image distribution and updating attacks are mitigated depends on the signature solution.
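
As an illustration, the same official image can be referenced either by tag or by digest (the digest value below is just a placeholder):

docker.io/library/hello-world:latest                    # mutable tag\ndocker.io/library/hello-world@sha256:<digest-of-image>  # immutable digest (placeholder value)\n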

"},{"location":"#mutating-admission-controller","title":"Mutating admission controller","text":"

How to validate images before deployment to a cluster? The Kubernetes API is the fundamental fabric behind the control plane. It allows operators and cluster components to communicate with each other and, for example, query, create, modify or delete Kubernetes resources. Each request passes through several phases such as authentication and authorization before it is persisted to etcd. Among those phases are two steps of admission control: mutating and validating admission. In those phases the API sends admission requests to configured webhooks (admission controllers) and receives admission responses (admit, deny, or modify). Connaisseur uses a mutating admission webhook, as requests are not only admitted or denied based on the validation result but might also require modification of contained images referenced by tags to trusted digests. The webhook is configured to only forward resource creation or update requests to the Connaisseur service running inside the cluster, since only deployments of images to the cluster are relevant for signature verification. This allows Connaisseur to intercept requests before deployment and based on the validation:
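
As a rough, hypothetical sketch of this mechanism (Connaisseur's actual webhook configuration is generated by its Helm chart and covers more resource types), a mutating webhook restricted to resource creation and update requests could look like this:

apiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nmetadata:\n  name: example-mutating-webhook      # hypothetical name\nwebhooks:\n- name: example.connaisseur.svc       # hypothetical webhook name\n  admissionReviewVersions: [\"v1\"]\n  sideEffects: None\n  clientConfig:\n    service:\n      name: connaisseur-svc           # in-cluster service receiving the admission requests\n      namespace: connaisseur\n      path: /mutate                   # hypothetical path\n  rules:                              # only resource creation and update requests are forwarded\n  - operations: [\"CREATE\", \"UPDATE\"]\n    apiGroups: [\"*\"]\n    apiVersions: [\"*\"]\n    resources: [\"pods\", \"deployments\"]\n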

"},{"location":"#image-policy-and-validators","title":"Image policy and validators","text":"

Now, how does Connaisseur process admission requests? A newly received request is first inspected for container image references that need to be validated (1). The resulting list of images referenced by tag or digest is passed to the image policy (2). The image policy matches the identified images to the configured validators and corresponding trust roots (e.g. public keys) to be used for verification. Image policy and validator configuration form the central logic behind Connaisseur and are described in detail under basics. Validation is the step where the actual signature verification takes place (3). For each image, the required trust data is retrieved from external sources such as Notary server, registry or sigstore transparency log and validated against the pre-configured trust root (e.g. public key). This forms the basis for deciding on the request (4). In case no trusted digest is found for any of the images (i.e. either no signed digest available or no signature matching the public key), the whole request is denied. Otherwise, Connaisseur translates all image references in the original request to trusted digests and admits it (5).

"},{"location":"#compatibility","title":"Compatibility","text":"

Supported signature solutions and configuration options are documented under validators.

Connaisseur supports Kubernetes v1.16 and higher. It is expected to be compatible with most Kubernetes services and has been successfully tested with:

All registry interactions use the OCI Distribution Specification, which is based on the Docker Registry HTTP API V2 and is the standard for all common image registries. When using Notary (V1) as a signature solution, note that only some registries provide the required Notary server attached to the registry with e.g. shared authentication. Connaisseur has been tested with the following Notary (V1) supporting image registries:

In case you identify any incompatibilities, please create an issue.

"},{"location":"#versions","title":"Versions","text":"

The latest stable version of Connaisseur is available on the master branch. Releases follow semantic versioning standards to facilitate compatibility. For each release, a signed container image tagged with the version is published in the Connaisseur Docker Hub repository. Latest developments are available on the develop branch, but should be considered unstable and no pre-built container image is provided.

"},{"location":"#development","title":"Development","text":"

Connaisseur is open source and open development. We try to make major changes transparent via Architecture Decision Records (ADRs) and announce developments via GitHub Discussions. Information on responsible disclosure of vulnerabilities and tracking of past findings is available in the Security Policy. Bug reports should be filed as GitHub issues to share status and potential fixes with other users.

We hope to get as many direct contributions and insights from the community as possible to steer further development. Please refer to our contributing guide, create an issue or reach out to us via GitHub Discussions.

"},{"location":"#wall-of-fame","title":"Wall of fame","text":"

Thanks to all the fine people directly contributing commits/PRs to Connaisseur:

Big shout-out also to all who support the project via issues, discussions and feature requests.

"},{"location":"#resources","title":"Resources","text":"

Several resources are available to learn more about Connaisseur and related topics:

"},{"location":"CODE_OF_CONDUCT/","title":"Contributor Covenant Code of Conduct","text":""},{"location":"CODE_OF_CONDUCT/#our-pledge","title":"Our pledge","text":"

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

"},{"location":"CODE_OF_CONDUCT/#our-standards","title":"Our standards","text":"

Examples of behavior that contributes to creating a positive environment include:

Examples of unacceptable behavior by participants include:

"},{"location":"CODE_OF_CONDUCT/#our-responsibilities","title":"Our responsibilities","text":"

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

"},{"location":"CODE_OF_CONDUCT/#scope","title":"Scope","text":"

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

"},{"location":"CODE_OF_CONDUCT/#enforcement","title":"Enforcement","text":"

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at connaisseur@securesystems.dev. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

"},{"location":"CODE_OF_CONDUCT/#attribution","title":"Attribution","text":"

This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

For answers to common questions about this code of conduct, see https://www.contributor-covenant.org/faq

"},{"location":"CONTRIBUTING/","title":"Contributing","text":"

We hope to steer the development of Connaisseur based on community demand and are excited about direct contributions to improve the tool!

The following guide is meant to help you get started with contributing to Connaisseur. In case of questions or feedback, feel free to reach out to us.

We are committed to positive interactions between all contributors of the project. To ensure this, please follow the Code of Conduct in all communications.

"},{"location":"CONTRIBUTING/#discuss-problems-raise-bugs-and-propose-feature-ideas","title":"Discuss problems, raise bugs and propose feature ideas","text":"

We are happy you made it here! In case you want to share your feedback, need support, want to discuss issues from using Connaisseur in your own projects, have ideas for new features or just want to connect with us, please reach out via GitHub Discussions. If you want to raise any bugs you found or make a feature request, feel free to create an issue with an informative title and description.

While issues are a great way to discuss problems, bugs and new features, a direct proposal via a pull request can sometimes say more than a thousand words. So be bold and contribute to the code as described in the next section!

In case you require a more private communication, you can reach us via connaisseur@securesystems.dev.

"},{"location":"CONTRIBUTING/#contribute-to-source-code","title":"Contribute to source code","text":"

The following guidance will help you make code contributions to Connaisseur and ensure good code quality and workflow. This includes the following steps:

  1. Set up your environment: Set up your local environment to best interact with the code. Further information is given below.
  2. Make atomic changes: Changes should be atomic. As such, pull requests should contain only few commits, and each commit should only fix one issue or implement one feature, with a concise commit message.
  3. Test your changes: Test any changes locally for code quality and functionality and add new tests for any additional code. How to test is described below.
  4. Create semantic, conventional and signed commits: Any commits should follow a simple semantic convention to help structure the work on Connaisseur. The convention is described below. For security reasons and since integrity is at the core of this project, code merged into master must be signed. How we achieve this is described below.
  5. Create pull requests: We consider code review central to quality and security of code. Therefore, a pull request (PR) to the develop branch should be created for each contribution. It will be reviewed, and potential improvements may be discussed within the PR. After approval, changes will be merged and moved to the master branch with the next release.
"},{"location":"CONTRIBUTING/#set-up-the-environment","title":"Set up the environment","text":"

To start contributing, you will need to set up your local environment. First step is to get the source code by cloning this repository:

git clone git@github.com:sse-secure-systems/connaisseur.git\n
In order to review the effects of your changes, you should create your own Kubernetes cluster and install Connaisseur. This is described in the getting started guide. A simple starting point may be a minikube cluster with e.g. a Docker Hub repository for maintaining your test images and trust data.

In case you make changes to the Connaisseur container image itself or code for that matter, you need to re-build the image and install it locally for testing. This requires a few steps:

  1. Get the Connaisseur image ready:
    • Using minikube, the local environment needs to be configured to use the minikube Docker daemon before building the image:
      1. Run eval $(minikube docker-env).
      2. Run make docker.
    • Using kind, the image needs to be built first and then loaded onto the kind node:
      1. Run make docker.
      2. Run IMAGE_REPO=$(yq e '.kubernetes.deployment.image.repository' charts/connaisseur/values.yaml) && VERSION=$(yq e '.appVersion' charts/connaisseur/Chart.yaml).
      3. Run kind load docker-image ${IMAGE_REPO}:v${VERSION}.
  2. Install Connaisseur via make install-dev.
"},{"location":"CONTRIBUTING/#test-changes","title":"Test changes","text":"

Tests and linting are important to ensure code quality, functionality and security. We therefore aim to keep the code coverage high. We are running several automated tests in the CI pipeline. Application code is tested via Go's testing package and linted via golangci-lint and gosec. When making changes to the application code, please directly provide tests for your changes.

Changes can and should be tested locally via running make test.

Linters can be run locally via

docker run --rm -v $(pwd):/app -w /app/cmd/connaisseur securego/gosec gosec ./...\ndocker run --rm -v $(pwd):/app -w /app golangci/golangci-lint golangci-lint run -v --timeout=10m --skip-dirs=\"test\"\n

from the root folder.

This helps identify bugs in changes before pushing.

INFO We believe that testing should not only ensure functionality, but also test for expected security issues like injections. We appreciate it if security tests are added along with new functionality.

Besides the unit testing and before any PR can be merged, an integration test is carried out whereby:

You can also run this integration test on a local cluster. There is a more detailed guide on how to do that.

If you are changing documentation, you can simply inspect your changes locally via:

docker run --rm -it -p 8000:8000 -v ${PWD}:/docs squidfunk/mkdocs-material\n
"},{"location":"CONTRIBUTING/#signed-commits-and-pull-requests","title":"Signed commits and pull requests","text":"

All changes to the develop and master branches must be signed, which is enforced via branch protection. This is achieved by only fast-forwarding signed commits or by a contributor signing the merge commit. Consequently, we appreciate but do not require that commits in PRs are signed.

A general introduction into signing commits can for example be found in the With Blue Ink blog. For details on setting everything up for GitHub, please follow the steps in the Documentation.

Once you have generated your local GPG key, added it to your GitHub account and informed Git about it, you are set up to create signed commits. We recommend configuring Git to sign commits by default via:

git config commit.gpgsign true\n
This avoids forgetting to use the -S flag when committing changes. In case it happens anyways, you can always rebase to sign earlier commits:
git rebase -i master\n
You can then mark all commits that need to be signed as edit and sign them without any other changes via:
git commit -S --amend --no-edit\n
Finally, you force push to overwrite the unsigned commits via git push -f.

"},{"location":"CONTRIBUTING/#semantic-and-conventional-commits","title":"Semantic and conventional commits","text":"

For Connaisseur, we want to use semantic and conventional commits to ensure good readability of code changes. A good introduction to the topic can be found in this blog post.

Commit messages should consist of header, body and footer. Such a commit message takes the following form:

git commit -m \"<header>\" -m \"<body>\" -m \"<footer>\"\n
The three parts should consist of the following:

We want to use the following common types in the header:

A complete commit message could therefore look as follows:

git commit -m \"fix: extend registry validation regex to custom ports\" -m \"The current regex used for validation of the image name does not allow using non-default ports for the image repository name. The regex is extended to optionally provide a port number.\" -m \"Fix #3\"\n

"},{"location":"CONTRIBUTING/#enjoy","title":"Enjoy!","text":"

Please be bold and contribute!

"},{"location":"RELEASING/","title":"Releasing","text":"

Releasing a new version of Connaisseur includes the following steps:

"},{"location":"RELEASING/#check-readiness","title":"Check readiness","text":"

Before starting the release, make sure everything is ready and in order:

"},{"location":"RELEASING/#add-new-tag","title":"Add new tag","text":"

Before adding the new tag, make sure the Connaisseur version is updated in the charts/connaisseur/Chart.yaml and applies the semantic versioning guidelines: fixes increment PATCH version, non-breaking features increment MINOR version, breaking features increment MAJOR version. Then add the tag (on develop branch) with git tag v<new-connaisseur-version> (e.g. git tag v1.4.6).

"},{"location":"RELEASING/#create-changelog","title":"Create changelog","text":"

A changelog text, including all new commits from one version to another, can be automatically generated using the scripts/changelogger.py script. Run python scripts/changelogger.py > CHANGELOG.md to get the changelog between the two latest tags. If you want to have a diff between certain commits, you have to set the two --ref1 and --ref2 arguments. If you e.g. want to get the changelog from v1.4.5 to 09fd2379cf2374ba9fdc8a84e56d959a176f1569, then you have to run python scripts/changelogger.py --ref1=\"v1.4.5\" --ref2=\"09fd2379cf2374ba9fdc8a84e56d959a176f1569\" > CHANGELOG.md, storing the changelog in a new file CHANGELOG.md (we won't keep this file, it is just a convenient place to store it). This file will include all new commits, categorized by their type (e.g. fix, feat, docs, etc.), but may include some mistakes, so take a manual look to check that everything is in order.

Things to look out for:

"},{"location":"RELEASING/#create-pr","title":"Create PR","text":"

Create a PR from develop to master, putting the changelog text as description and wait for someone to approve it.

"},{"location":"RELEASING/#push-new-connaisseur-image","title":"Push new Connaisseur image","text":"

When the PR is approved and ready to be merged, first push the new Connaisseur image to Docker Hub, as it will be used in the release pipeline. Run make docker to build the new version of the docker image and then DOCKER_CONTENT_TRUST=1 docker image push securesystemsengineering/connaisseur:<new-version> to push and sign it. You'll obviously need the right private key and passphrase for doing so. You also need to be in the list of valid signers for Connaisseur. If not already (you can check with docker trust inspect securesystemsengineering/connaisseur --pretty) you'll need to contact Philipp Belitz.

"},{"location":"RELEASING/#merge-pr","title":"Merge PR","text":"

Run git checkout master to switch to the master branch and then run git merge develop to merge develop in. Then run git push origin master --tags to publish all changes and the new tag.

"},{"location":"RELEASING/#create-release-page","title":"Create release page","text":"

Finally, a release on GitHub should be created. Go to the Connaisseur releases page, then click Draft a new release. There you have to enter the new tag version, a title (usually Version <new-version>) and the changelog text as description. Then click Publish release and you're done! (You can delete the CHANGELOG.md file now. Go and do it.)

"},{"location":"RELEASING/#check-released-artifacts","title":"Check released artifacts","text":"

To ensure the release worked as intended, check the following artifacts are present:

"},{"location":"RELEASING/#shoot-trouble","title":"Shoot trouble","text":"

Be aware that this isn't a completely fleshed out, highly available, hyper scalable and fully automated workflow, backed up by state-of-the-art blockchain technology and 24/7 incident response team coverage with global dominance! Not yet at least. For now things will probably break, so make sure that in the end everything looks to be in order and the new release can be seen on the GitHub page, tagged with Latest release and pointing to the correct version of Connaisseur. Good Luck!

For breaking changes, the upgrade integration test will fail (as intended), blocking the automatic release. In that case, you can manually trigger the publish job with the expected Connaisseur version.

"},{"location":"SECURITY/","title":"Security Policy","text":""},{"location":"SECURITY/#supported-versions","title":"Supported versions","text":"

While all known vulnerabilities in the Connaisseur application are listed below and we intend to fix vulnerabilities as soon as we become aware of them, both Python and OS packages of the Connaisseur image may become vulnerable over time, and we suggest frequently updating to the latest version of Connaisseur or rebuilding the image from source yourself. At present, we only support the latest version. We stick to semantic versioning, so unless the major version changes, updating Connaisseur should never break your installation.

"},{"location":"SECURITY/#known-vulnerabilities","title":"Known vulnerabilities","text":"Title Affected versions Fixed version Description initContainers not validated \u2264 1.3.0 1.3.1 Prior to version 1.3.1 Connaisseur did not validate initContainers which allowed deploying unverified images to the cluster. Ephemeral containers not validated \u2264 3.1.1 3.2.0 Prior to version 3.2.0 Connaisseur did not validate ephemeral containers (introduced in k8s 1.25) which allowed deploying unverified images to the cluster. Regex Denial of Service for Notary delegations \u2264 3.3.0 3.3.1 Prior to version 3.3.1 Connaisseur did input validation on the names of delegations in an unsafe manner: An adversary with the ability to alter Notary responses, in particular an evil Notary server, could have provided Connaisseur with an invalid delegation name that would lead to catastrophic backtracking during a regex matching. Only users of type notaryv1 validators are affected as Connaisseur will only perform this kind of input validation in the context of a Notary validation. If you mistrust the Docker Notary server, the default configuration is vulnerable as it contains a notaryv1 validator with the root keys of both Connaisseur and the library of official Docker images."},{"location":"SECURITY/#reporting-a-vulnerability","title":"Reporting a vulnerability","text":"

We are very grateful for reports on vulnerabilities discovered in the project, specifically as it is intended to increase security for the community. We aim to investigate and fix these as soon as possible. Please submit vulnerabilities to connaisseur@securesystems.dev.

"},{"location":"basics/","title":"Basics","text":"

In the following, we aim to lay out Connaisseur's core concepts and how to configure and administer it.

"},{"location":"basics/#admission-control-validators-and-image-policy","title":"Admission control, validators and image policy","text":"

Connaisseur works as a mutating admission controller. It intercepts all CREATE and UPDATE resource requests for Pods, Deployments, ReplicationControllers, ReplicaSets, DaemonSets, StatefulSets, Jobs, and CronJobs and extracts all image references for validation.

By default, Connaisseur uses automatic child approval, by which the child of a Kubernetes resource is automatically admitted without re-verification of the signature in order to avoid duplicate validation and handle inconsistencies with the image policy. Essentially, this is done since an image that is deployed as part of an already deployed object (e.g. a Pod deployed as a child of a Deployment) has already been validated and potentially mutated during admission of the parent. More information and configuration options can be found in the feature documentation for automatic child approval.

Validation itself relies on two core concepts: image policy and validators. A validator is a set of configuration options required for validation like the type of signature, public key to use for verification, path to signature data, or authentication. The image policy defines a set of rules which maps different images to those validators. This is done via glob matching of the image name, which for example allows using different validators for different registries, repositories, images or even tags. This is especially useful when using public or external images from other entities like Docker's official images, or when different keys are used in a more complex development team.

Note

Typically, the public key of a known entity is used to validate the signature over an image's content in order to ensure integrity and provenance. However, other ways to implement such trust pinning exist and as a consequence we refer to all types of trust anchors in a generalized form as trust roots.

"},{"location":"basics/#using-connaisseur","title":"Using Connaisseur","text":"

Some general administration tasks like deployment or uninstallation when using Connaisseur are described in this section.

"},{"location":"basics/#requirements","title":"Requirements","text":"

Using Connaisseur requires a Kubernetes cluster, Helm and, if installing from source, Git to be installed and set up.

"},{"location":"basics/#get-the-codechart","title":"Get the code/chart","text":"

Download the Connaisseur resources required for installation either by cloning the source code via Git or by directly adding the chart repository via Helm.

Git repoHelm chart

The Connaisseur source code can be cloned directly from GitHub and includes the application and Helm charts in a single repository:

git clone https://github.com/sse-secure-systems/connaisseur.git\n

The Helm chart can be added by:

helm repo add connaisseur https://sse-secure-systems.github.io/connaisseur/charts\n
"},{"location":"basics/#configure","title":"Configure","text":"

The configuration of Connaisseur is completely done in the charts/connaisseur/values.yaml. The upper kubernetes section offers some general, Kubernetes-typical configurations like image version or resources. Noteworthy configurations are:

The actual configuration consists of the application.validators and application.policy (image policy) sections. These are described in detail below and for initial steps it is instructive to follow the getting started guide. Other features are described on the respective pages.

Connaisseur ships with a pre-configuration that does not need any adjustments for testing. However, validating your own images requires additional configuration.

"},{"location":"basics/#deploy","title":"Deploy","text":"

Install Connaisseur via Helm or Kubernetes manifests:

Git repoHelm chartKubernetes manifests

Install Connaisseur by using the Helm template definition files in the helm directory:

helm install connaisseur helm --atomic --create-namespace --namespace connaisseur\n

Install Connaisseur using the default configuration from the chart repository:

helm install connaisseur connaisseur/connaisseur --atomic --create-namespace --namespace connaisseur\n

To customize Connaisseur, craft a values.yaml according to your needs and apply:

helm install connaisseur connaisseur/connaisseur --atomic --create-namespace --namespace connaisseur -f values.yaml\n

Installing Connaisseur via Kubernetes manifests requires first rendering the respective resources. If the repo was cloned, simply render the templates via:

helm template helm -n connaisseur > deploy.yaml\n
Next, the admission controller is deployed step-wise:

  1. Create target namespace:
    kubectl create namespace connaisseur\n
  2. Setup the preliminary hook:
    kubectl apply -f deploy.yaml -l 'app.kubernetes.io/component=connaisseur-init' -n connaisseur\n
  3. Deploy core resources
    kubectl apply -f deploy.yaml -l 'app.kubernetes.io/component=connaisseur-core' -n connaisseur\n
  4. Arm the webhook
    kubectl apply -f deploy.yaml -l 'app.kubernetes.io/component=connaisseur-webhook' -n connaisseur\n

This deploys Connaisseur to its own namespace called connaisseur. The installation itself may take a moment, as the installation order of the Connaisseur components is critical: The admission webhook for intercepting requests can only be applied when the Connaisseur pods are up and ready to receive admission requests.
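
When deploying the manifests step by step, one way to check readiness before arming the webhook (step 4) is to wait for the pods; this is a sketch assuming the app.kubernetes.io/instance=connaisseur label used elsewhere in this guide:

kubectl wait --for=condition=ready pod -l app.kubernetes.io/instance=connaisseur -n connaisseur --timeout=120s\n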

"},{"location":"basics/#check","title":"Check","text":"

Once everything is installed, you can check whether all the pods are up by running kubectl get all -n connaisseur:

kubectl get all -n connaisseur\n
Output
NAME                                          READY   STATUS    RESTARTS   AGE\npod/connaisseur-deployment-78d8975596-42tkw   1/1     Running   0          22s\npod/connaisseur-deployment-78d8975596-5c4c6   1/1     Running   0          22s\npod/connaisseur-deployment-78d8975596-kvrj6   1/1     Running   0          22s\n\nNAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE\nservice/connaisseur-svc   ClusterIP   10.108.220.34   <none>        443/TCP   22s\n\nNAME                                     READY   UP-TO-DATE   AVAILABLE   AGE\ndeployment.apps/connaisseur-deployment   3/3     3            3           22s\n\nNAME                                                DESIRED   CURRENT   READY   AGE\nreplicaset.apps/connaisseur-deployment-78d8975596   3         3         3       22s\n
"},{"location":"basics/#use","title":"Use","text":"

To use Connaisseur, simply try running some images or apply a deployment. In case you use the pre-configuration, you could for example run the following commands:

kubectl run demo --image=docker.io/securesystemsengineering/testimage:unsigned\n> Error from server: admission webhook \"connaisseur-svc.connaisseur.svc\" denied the request (...).\n\nkubectl run demo --image=docker.io/securesystemsengineering/testimage:signed\n> pod/demo created\n
"},{"location":"basics/#upgrade","title":"Upgrade","text":"

A running Connaisseur instance can be updated by a Helm upgrade of the current release:

Git repoHelm chartKubernetes manifests

Adjust configuration in charts/connaisseur/values.yaml as required and upgrade via:

helm upgrade connaisseur helm -n connaisseur --wait\n

Adjust your local configuration file (e.g. values.yaml) as required and upgrade via:

helm upgrade connaisseur connaisseur/connaisseur -n connaisseur --wait -f values.yaml\n

Adjust your local Kubernetes manifests (e.g. deploy.yaml) as required and upgrade via delete and reinstall:

kubectl delete -f deploy.yaml -n connaisseur\nkubectl apply -f deploy.yaml -l 'app.kubernetes.io/component=connaisseur-init' -n connaisseur\nkubectl apply -f deploy.yaml -l 'app.kubernetes.io/component=connaisseur-core' -n connaisseur\nkubectl apply -f deploy.yaml -l 'app.kubernetes.io/component=connaisseur-webhook' -n connaisseur\n

Note

Rolling upgrades as with Helm might also be possible, but likely require further configuration. Insights are welcome.

"},{"location":"basics/#delete","title":"Delete","text":"

Just like for installation, Helm can also be used to delete Connaisseur from your cluster:

Git repoHelm chartKubernetes manifests

Uninstall via Helm:

helm uninstall connaisseur -n connaisseur\n

Uninstall via Helm:

helm uninstall connaisseur -n connaisseur\n

Delete via manifests:

kubectl delete -f deploy.yaml -n connaisseur\n

In case uninstallation fails or problems occur during subsequent installation, you can manually remove all resources:

kubectl delete all,mutatingwebhookconfigurations,clusterroles,clusterrolebindings,configmaps,imagepolicies,secrets,serviceaccounts,customresourcedefinitions -lapp.kubernetes.io/instance=connaisseur\nkubectl delete namespaces connaisseur\n

Connaisseur for example also installs a CustomResourceDefinition imagepolicies.connaisseur.policy that validates its configuration. In case of major releases, the configuration structure might change, which can cause installation to fail, and you might have to delete it manually.
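
In that situation, the CRD can for example be removed explicitly before reinstalling:

kubectl delete customresourcedefinition imagepolicies.connaisseur.policy\n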

"},{"location":"basics/#makefile","title":"Makefile","text":"

As an alternative to using Helm, you can also use the Makefile for installing, deleting and more. Here are the available commands:

"},{"location":"basics/#detailed-configuration","title":"Detailed configuration","text":"

All configuration is done in the charts/connaisseur/values.yaml. The configuration of features is only described in the corresponding section. Any configuration of the actual application is done below the application key, so when below we write validators, this actually corresponds to the application.validators key in the charts/connaisseur/values.yaml.

"},{"location":"basics/#validators","title":"Validators","text":"

The validators are configured in the validators field, which defines a list of validator objects.

A validator defines what kind of signatures are to be expected, how signatures are to be validated, against which trust root and how to access the signature data. For example, images might be signed with Docker Content Trust and reside in a private registry. Thus the validator would need to specify notaryv1 as type, the notary host and the required credentials.

The specific validator type should be chosen based on the use case. A list of supported validator types can be found here. All validators share a similar structure for configuration. For specifics and additional options, please review the dedicated page of the validator type.

There is a special behavior when a validator or one of the trust roots is named default: should an image policy rule not specify a validator or trust root to use, the one named default will be used instead. This also means there can only be one validator named default and, within a single validator, only one trust root called default.

Connaisseur comes with a few validators pre-configured, including one for Docker's official images. The pre-configured validators can be removed. However, to avoid Connaisseur failing its own validation in case you remove the securesystemsengineering_official key, make sure to also exclude Connaisseur from validation, either via the static allow validator or namespaced validation. The special case of static validators used to simply allow or deny images without verification is described below.
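
As a minimal sketch, such an exclusion via the static allow validator could be an image policy rule along the following lines (assuming an allow validator as in the examples below and Connaisseur's own image repository on Docker Hub):

charts/connaisseur/values.yaml
application:\n  policy:\n  - pattern: \"docker.io/securesystemsengineering/connaisseur:*\"\n    validator: allow\n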

"},{"location":"basics/#configuration-options","title":"Configuration options","text":"

.validators[*] in charts/connaisseur/values.yaml supports the following keys:

Key Default Required Description name - Name of the validator, which is referenced in the image policy. It must consist of lower case alphanumeric characters or '-'. If the name is default, it will be used if no validator is specified. type - Type of the validator, e.g. notaryv1 or cosign, which is dependent on the signing solution in use. trustRoots - List of trust anchors to validate the signatures against. In practice, this is typically a list of public keys. trustRoots[*].name - Name of the trust anchor, which is referenced in the image policy. If the name is default, it will be used if no key is specified. trustRoots[*].key - Value of the trust anchor, most commonly a PEM encoded public key. auth - - Credentials that should be used in case authentication is required for validation. Details are provided on validator-specific pages.

Further configuration fields specific to the validator type are described in the respective section.

"},{"location":"basics/#example","title":"Example","text":"charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  validators:\n  - name: default\n    type: notaryv1\n    host: notary.docker.io\n    trustRoots:\n    - name: default\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEsx28WV7BsQfnHF1kZmpdCTTLJaWe\n        d0CA+JOi8H4REuBaWSZ5zPDe468WuOJ6f71E7WFg3CVEVYHuoZt2UYbN/Q==\n        -----END PUBLIC KEY-----\n    auth:\n      username: superuser\n      password: lookatmeimjumping\n  - name: myvalidator\n    type: cosign\n    trustRoots:\n    - name: mykey\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEIFXO1w6oj0oI2Fk9SiaNJRKTiO9d\n        ksm6hFczQAq+FDdw0istEdCwcHO61O/0bV+LC8jqFoomA28cT+py6FcSYw==\n        -----END PUBLIC KEY-----\n
"},{"location":"basics/#static-validators","title":"Static validators","text":"

Static validators are a special type of validator that does not validate any signatures. Depending on the approve value being true or false, they either allow or deny all images for which they are specified as validator. This for example allows implementing an allowlist or denylist.

"},{"location":"basics/#configuration-options_1","title":"Configuration options","text":"Key Default Required Description name - Name of the validator, which will be used to reference it in the image policy. type - static; value has to be static for a static validator. approve - true or false to admit or deny all images."},{"location":"basics/#example_1","title":"Example","text":"charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  validators:\n  - name: allow\n    type: static\n    approve: true\n  - name: deny\n    type: static\n    approve: false\n
"},{"location":"basics/#image-policy","title":"Image policy","text":"

The image policy is defined in the policy field and acts as a list of rule objects to determine which image should be validated by which validator (and potentially some further configurations).

For each image in the admission request, only a single rule in the image policy will apply: the one with the most specific matching pattern field. This is determined by the following algorithm:

  1. A given image is matched against all rule patterns.
  2. All matching patterns are compared to one another to determine the most specific one (see below). Only two patterns are compared at a time; the more specific one then is compared to the next one and so forth. Specificity is determined as follows:
    1. Patterns are split into components (delimited by \"/\"). The pattern that has a higher number of components wins (is considered more specific).
    2. Should the two patterns that are being compared have an equal number of components, the longest common prefixes between each pattern component and the corresponding image component are calculated (for this purpose, image identifiers are also split into components). The pattern with the longer common prefix in a component, starting from the leftmost, wins.
    3. Should all longest common prefixes of all components between the two compared patterns be equal, the pattern with a longer component, starting from the leftmost, wins.
    4. The rule whose pattern has won all comparisons is considered the most specific rule.
  3. Return the most specific rule.

Should an image match none of the rules, Connaisseur will deny the request and raise an error. This deny-per-default behavior can be changed via a catch-all rule *:*, for example using the static allow validator in order to admit otherwise unmatched images.
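
As a sketch, such a catch-all rule admitting otherwise unmatched images (assuming a static allow validator named allow, as in the examples below) would look like this:

charts/connaisseur/values.yaml
application:\n  policy:\n  - pattern: \"*:*\"\n    validator: allow\n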

In order to perform the actual validation, Connaisseur will call the validator specified in the selected rule and pass the image name and potential further configuration to it. The reference to validator and exact trust root is resolved in the following way:

  1. The validator with name (validators[*].name) equal to the validator value in the selected rule is chosen. If no validator is specified, the validator with name default is used if it exists.
  2. Of that validator, the trust root (e.g. public key) is chosen whose name (.validators.trustRoots[*].name) matches the policy's trust root string (with.trustRoot). If no trust root is specified, the trust root with name default is used if it exists. Specifying \"*\" enables signature verification under any trust root in the validator.

Let's review the pattern and validator matching with a minimal example. We consider the following validator and policy configuration (most fields have been omitted for clarity):

charts/connaisseur/values.yaml
application:\n  validators:\n  - name: default     # validator 1\n    trustRoots:\n    - name: default   # key 1\n      key: |\n        ...\n  - name: myvalidator # validator 2\n    trustRoots:\n    - name: default   # key 2\n      key: |\n        ...\n    - name: mykey     # key 3\n      key: |\n        ...\n\n  policy:\n  - pattern: \"*:*\"                      # rule 1\n  - pattern: \"docker.io/myrepo/*:*\"     # rule 2\n    validator: myvalidator\n  - pattern: \"docker.io/myrepo/myimg:*\" # rule 3\n    validator: myvalidator\n    with:\n      trustRoot: mykey\n

Now deploying the following images we would get the matchings:

Connaisseur ships with a few rules pre-configured. There are two rules that should remain intact in some form in order not to brick the Kubernetes cluster:

"},{"location":"basics/#configuration-options_2","title":"Configuration options","text":"

.policy[*] in charts/connaisseur/values.yaml supports the following keys:

Key Default Required Description pattern - Globbing pattern to match an image name against. validator default - Name of a validator in the validators list. If not provided, the validator with name default is used if it exists. with - - Additional parameters to use for a validator. See more specifics in validator section. with.trustRoot default - Name of a trust root, which is specified within the referenced validator. If not provided, the trust root with name default is used if it exists. Setting this to \"*\" implements a logical or and enables signature verification under any trust root in the validator. with.mode mutate - Mode of operation which specifies whether or not image references should be mutated after successful image validation. If set to mutate, Connaisseur mutates image references to include digests. If set to insecureValidateOnly, Connaisseur will not mutate the digests. This leaves the risk of a malicious registry serving a different image under the signed tag."},{"location":"basics/#example_2","title":"Example","text":"charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  policy:\n  - pattern: \"*:*\"\n  - pattern: \"docker.io/myrepo/*:*\"\n    validator: myvalidator\n    with:\n      trustRoot: mykey\n  - pattern: \"docker.io/myrepo/deniedimage:*\"\n    validator: deny\n  - pattern: \"docker.io/myrepo/allowedimage:v*\"\n    validator: allow\n    with:\n      mode: insecureValidateOnly\n
"},{"location":"basics/#common-examples","title":"Common examples","text":"

Let's look at some useful examples for the validators and policy configuration. These can serve as a first template beyond the pre-configuration or might just be instructive to understand validators and policies.

We assume your repository is docker.io/myrepo and a public key has been created. In case this repository is private, authentication would have to be added to the respective validator for example via:

charts/connaisseur/values.yaml
    auth:\n      secretName: k8ssecret\n

The Kubernetes secret would have to be created separately according to the validator documentation.
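
The exact secret format depends on the validator type, so the following is only a hypothetical sketch of creating such a secret with basic credentials; check the validator documentation for the format actually required:

kubectl create secret generic k8ssecret --from-literal=username=myuser --from-literal=password=mypass -n connaisseur\n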

"},{"location":"basics/#case-only-validate-own-images-and-deny-all-others","title":"Case: Only validate own images and deny all others","text":"

This is likely the most common case in simple settings, in which only self-built images are used and validated against your own public key:

charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  validators:\n  - name: allow\n    type: static\n    approve: true\n  - name: default\n    type: notaryv1  # or e.g. 'cosign'\n    host: notary.docker.io  # only required in case of notaryv1\n    trustRoots:\n    - name: default\n      key: |  # your public key below\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEvtc/qpHtx7iUUj+rRHR99a8mnGni\n        qiGkmUb9YpWWTS4YwlvwdmMDiGzcsHiDOYz6f88u2hCRF5GUCvyiZAKrsA==\n        -----END PUBLIC KEY-----\n  - name: dockerhub_basics\n    type: notaryv1\n    host: notary.docker.io\n    trustRoots:\n    - name: securesystemsengineering_official\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEsx28WV7BsQfnHF1kZmpdCTTLJaWe\n        d0CA+JOi8H4REuBaWSZ5zPDe468WuOJ6f71E7WFg3CVEVYHuoZt2UYbN/Q==\n        -----END PUBLIC KEY-----\n\n  policy:\n  - pattern: \"*:*\"\n  - pattern: \"registry.k8s.io/*:*\"\n    validator: allow\n  - pattern: \"docker.io/securesystemsengineering/*:*\"\n    validator: dockerhub_basics\n    with:\n      trustRoot: securesystemsengineering_official\n
"},{"location":"basics/#case-only-validate-own-images-and-deny-all-others-faster","title":"Case: Only validate own images and deny all others (faster)","text":"

This configuration achieves the same as the one above, but is faster as trust data only needs to be requested for images in your repository:

charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  validators:\n  - name: allow\n    type: static\n    approve: true\n  - name: deny\n    type: static\n    approve: false\n  - name: default\n    type: notaryv1  # or e.g. 'cosign'\n    host: notary.docker.io  # only required in case of notaryv1\n    trustRoots:\n    - name: default\n      key: |  # your public key below\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEvtc/qpHtx7iUUj+rRHR99a8mnGni\n        qiGkmUb9YpWWTS4YwlvwdmMDiGzcsHiDOYz6f88u2hCRF5GUCvyiZAKrsA==\n        -----END PUBLIC KEY-----\n  - name: dockerhub_basics\n    type: notaryv1\n    host: notary.docker.io\n    trustRoots:\n    - name: securesystemsengineering_official\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEsx28WV7BsQfnHF1kZmpdCTTLJaWe\n        d0CA+JOi8H4REuBaWSZ5zPDe468WuOJ6f71E7WFg3CVEVYHuoZt2UYbN/Q==\n        -----END PUBLIC KEY-----\n\n  policy:\n  - pattern: \"*:*\"\n    validator: deny\n  - pattern: \"docker.io/myrepo/*:*\"\n  - pattern: \"registry.k8s.io/*:*\"\n    validator: allow\n  - pattern: \"docker.io/securesystemsengineering/*:*\"\n    validator: dockerhub_basics\n    with:\n      trustRoot: securesystemsengineering_official\n

The *:* rule could also have been omitted as Connaisseur denies unmatched images. However, explicit is better than implicit.

"},{"location":"basics/#case-only-validate-docker-hub-official-images-and-deny-all-others","title":"Case: Only validate Docker Hub official images and deny all others","text":"

In case only validated Docker Hub official images should be admitted to the cluster:

charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  validators:\n  - name: allow\n    type: static\n    approve: true\n  - name: deny\n    type: static\n    approve: false\n  - name: dockerhub_basics\n    type: notaryv1\n    host: notary.docker.io\n    trustRoots:\n    - name: docker_official\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEOXYta5TgdCwXTCnLU09W5T4M4r9f\n        QQrqJuADP6U7g5r9ICgPSmZuRHP/1AYUfOQW3baveKsT969EfELKj1lfCA==\n        -----END PUBLIC KEY-----\n    - name: securesystemsengineering_official\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEsx28WV7BsQfnHF1kZmpdCTTLJaWe\n        d0CA+JOi8H4REuBaWSZ5zPDe468WuOJ6f71E7WFg3CVEVYHuoZt2UYbN/Q==\n        -----END PUBLIC KEY-----\n\n  policy:\n  - pattern: \"*:*\"\n    validator: deny\n  - pattern: \"docker.io/library/*:*\"\n    validator: dockerhub_basics\n    with:\n      trustRoot: docker_official\n  - pattern: \"registry.k8s.io/*:*\"\n    validator: allow\n  - pattern: \"docker.io/securesystemsengineering/*:*\"\n    validator: dockerhub_basics\n    with:\n      trustRoot: securesystemsengineering_official\n
"},{"location":"basics/#case-only-validate-docker-hub-official-images-and-allow-all-others","title":"Case: Only validate Docker Hub official images and allow all others","text":"

In case only Docker Hub official images should be validated while all others are simply admitted:

charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  validators:\n  - name: allow\n    type: static\n    approve: true\n  - name: deny\n    type: static\n    approve: false\n  - name: dockerhub_basics\n    type: notaryv1\n    host: notary.docker.io\n    trustRoots:\n    - name: docker_official\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEOXYta5TgdCwXTCnLU09W5T4M4r9f\n        QQrqJuADP6U7g5r9ICgPSmZuRHP/1AYUfOQW3baveKsT969EfELKj1lfCA==\n        -----END PUBLIC KEY-----\n    - name: securesystemsengineering_official\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEsx28WV7BsQfnHF1kZmpdCTTLJaWe\n        d0CA+JOi8H4REuBaWSZ5zPDe468WuOJ6f71E7WFg3CVEVYHuoZt2UYbN/Q==\n        -----END PUBLIC KEY-----\n\n  policy:\n  - pattern: \"*:*\"\n    validator: allow\n  - pattern: \"docker.io/library/*:*\"\n    validator: dockerhub_basics\n    with:\n      trustRoot: docker_official\n  - pattern: \"registry.k8s.io/*:*\"\n    validator: allow\n  - pattern: \"docker.io/securesystemsengineering/*:*\"\n    validator: dockerhub_basics\n    with:\n      trustRoot: securesystemsengineering_official\n
"},{"location":"basics/#case-directly-admit-own-images-and-deny-all-others","title":"Case: Directly admit own images and deny all others","text":"

As a matter of fact, Connaisseur can also be used to restrict the allowed registries and repositories without signature validation:

charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  validators:\n  - name: allow\n    type: static\n    approve: true\n  - name: deny\n    type: static\n    approve: false\n\n  policy:\n  - pattern: \"*:*\"\n    validator: deny\n  - pattern: \"docker.io/myrepo/*:*\"\n    validator: allow\n  - pattern: \"registry.k8s.io/*:*\"\n    validator: allow\n  - pattern: \"docker.io/securesystemsengineering/*:*\"\n    validator: allow\n
  1. This is not to be confused with the detection mode feature: In detection mode, the Connaisseur service admits all requests to the cluster independent of the validation result, while the failure policy only takes effect when the service itself becomes unavailable.\u00a0\u21a9

  2. During the first mutation, Connaisseur converts the image tag to its digest. Read more in the overview of Connaisseur \u21a9

  3. In those cases, consider using security annotations via kubernetes.deployment.annotations or pod security policies kubernetes.deployment.podSecurityPolicy if available.\u00a0\u21a9

"},{"location":"getting_started/","title":"Getting Started","text":"

This guide offers a simple default configuration for setting up Connaisseur using public infrastructure and verifying your first self-signed images. You will learn how to:

  1. Create signing key pairs
  2. Configure Connaisseur
  3. Deploy Connaisseur
  4. Test Connaisseur (and sign images)
  5. Cleanup

In the tutorial, you can choose to use either Notary (V1) via Docker Content Trust (DCT) or Cosign from the sigstore project as a signing solution, referred to as DCT and Cosign from here on. Furthermore, we will work with public images on Docker Hub as a container registry and a Kubernetes test cluster, which might for example be MicroK8s or minikube for local setups. However, feel free to bring your own solutions for registry or cluster and check out our notes on compatibility.

In general, Connaisseur can be fully configured via charts/connaisseur/values.yaml, so feel free to take a look and try for yourself. For more advanced usage in more complex cases (e.g. authentication, multiple registries, signers, validators, additional features), we strongly advise to review the following pages:

In case you need help, feel free to reach out via GitHub Discussions.

Info

As more than only public keys can be used to validate integrity and provenance of an image, we refer to these trust anchors in a generalized form as trust roots.

"},{"location":"getting_started/#requirements","title":"Requirements","text":"

You should have a Kubernetes test cluster running. Furthermore, docker, git, helm and kubectl should be installed and usable, i.e. having run docker login and switched to the appropriate kubectl context.
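
For example, assuming your test context is named test-cluster (a placeholder), this could look like:

docker login\nkubectl config use-context test-cluster\nkubectl config current-context  # verify you are pointing at the test cluster\n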

If you want to contribute to Connaisseur, then you will also need a Golang v1.22 installation.

"},{"location":"getting_started/#create-signing-key-pairs","title":"Create signing key pairs","text":"

Before getting started with Connaisseur, we need to create our signing key pair. This obviously depends on the signing solution. Here, we will walk you through it for DCT and Cosign. In case you have worked with Docker Content Trust or Cosign before and already possess key pairs, you can skip this step (how to retrieve a previously created DCT key is described here). Otherwise, pick your preferred signing solution below.

In case you are uncertain which solution to go with, you might be better off starting with DCT, as it comes packaged with Docker. Cosign, on the other hand, is somewhat more straightforward to use.

Docker Content Trust / Cosign

General usage of DCT is described in the docker documentation. Detailed information on all configuration options for Connaisseur is provided in the Notary (V1) validator section. For now, we just need to generate a public-private root key pair via:

docker trust key generate root\n

You will be prompted for a password; the private key is automatically imported and a root.pub file is created in your current folder that contains your public key, which should look similar to:

-----BEGIN PUBLIC KEY-----\nrole: root\n\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAELDzXwqie/P66R3gVpFDWMhxOyol5\nYWD/KWnAaEIcJVTYUR+21NJSZz0yL7KLGrv50H9kHai5WWVsVykOZNoZYQ==\n-----END PUBLIC KEY-----\n

We will only need the actual base64-encoded part of the key later.
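
If you want to extract it, one way (assuming the file layout shown above) is to drop the role and empty lines from root.pub:

sed '/^role:/d;/^$/d' root.pub\n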

Usage of Cosign is very well described in the docs. You can download Cosign from its GitHub repository. Detailed information on all configuration options for Connaisseur is provided in the Cosign validator section. For now, we just need to generate a key pair via:

cosign generate-key-pair\n

You will be prompted to set a password, after which a private (cosign.key) and public (cosign.pub) key are created. In the next step, we will need the public key that should look similar to:

-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEvtc/qpHtx7iUUj+rRHR99a8mnGni\nqiGkmUb9YpWWTS4YwlvwdmMDiGzcsHiDOYz6f88u2hCRF5GUCvyiZAKrsA==\n-----END PUBLIC KEY-----\n
"},{"location":"getting_started/#configure-connaisseur","title":"Configure Connaisseur","text":"

Now, we will need to configure Connaisseur. Let's first clone the repository:

git clone https://github.com/sse-secure-systems/connaisseur.git\ncd connaisseur\n

Connaisseur is configured via charts/connaisseur/values.yaml, so we will start there. We need to set Connaisseur to use our previously created public key for validation. To do so, go to .application.validators and find the default validator. We need to uncomment the trust root with name default and add our previously created public key. The result should look similar to this:

Docker Content Trust / Cosign charts/connaisseur/values.yaml
# the `default` validator is used if no validator is specified in image policy\n- name: default\n  type: notaryv1  # or other supported validator (e.g. \"cosign\")\n  host: notary.docker.io # configure the notary server to be used\n  trustRoots:\n  # the `default` key is used if no key is specified in image policy\n  - name: default\n    key: |  # enter your key below\n      -----BEGIN PUBLIC KEY-----\n      MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAELDzXwqie/P66R3gVpFDWMhxOyol5\n      YWD/KWnAaEIcJVTYUR+21NJSZz0yL7KLGrv50H9kHai5WWVsVykOZNoZYQ==\n      -----END PUBLIC KEY-----\n  #cert: |  # in case the trust data host is using a self-signed certificate\n  #  -----BEGIN CERTIFICATE-----\n  #  ...\n  #  -----END CERTIFICATE-----\n  #auth:  # credentials in case the trust data requires authentication\n  #  # either (preferred solution)\n  #  secretName: mysecret  # reference a k8s secret in the form required by the validator type (check the docs)\n  #  # or (only for notaryv1 validator)\n  #  username: myuser\n  #  password: mypass\n

For Cosign, the type additionally needs to be set to cosign and the host field is not required.

charts/connaisseur/values.yaml
# the `default` validator is used if no validator is specified in image policy\n- name: default\n  type: cosign  # or other supported validator (e.g. \"cosign\")\n  trustRoots:\n  # the `default` key is used if no key is specified in image policy\n  - name: default\n    key: |  # enter your key below\n      -----BEGIN PUBLIC KEY-----\n      MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEvtc/qpHtx7iUUj+rRHR99a8mnGni\n      qiGkmUb9YpWWTS4YwlvwdmMDiGzcsHiDOYz6f88u2hCRF5GUCvyiZAKrsA==\n      -----END PUBLIC KEY-----\n  #cert: |  # in case the trust data host is using a self-signed certificate\n  #  -----BEGIN CERTIFICATE-----\n  #  ...\n  #  -----END CERTIFICATE-----\n  #auth:  # credentials in case the trust data requires authentication\n  #  # either (preferred solution)\n  #  secretName: mysecret  # reference a k8s secret in the form required by the validator type (check the docs)\n  #  # or (only for notaryv1 validator)\n  #  username: myuser\n  #  password: mypass\n

We have now configured the validator default with trust root default. This will automatically be used if no validator or trust root is specified in the image policy (.application.policy). Per default, Connaisseur's image policy under .application.policy in charts/connaisseur/values.yaml comes with a pattern \"*:*\" that does not specify a validator or trust root, and thus all images that do not match any of the more specific pre-configured patterns will be verified using this validator. Consequently, we leave the rest untouched in this tutorial, but strongly recommend reading the basics to leverage the full potential of Connaisseur.
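
For orientation, the relevant part of the pre-configured image policy looks roughly like this (shortened sketch):

application:\n  policy:\n  - pattern: \"*:*\"  # no validator or trust root specified, so the default validator and trust root apply\n  # more specific pre-configured patterns follow here\n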

"},{"location":"getting_started/#deploy-connaisseur","title":"Deploy Connaisseur","text":"

So let's deploy Connaisseur to the cluster:

helm install connaisseur helm --atomic --create-namespace --namespace connaisseur\n

This can take a few minutes. You should see output similar to:

NAME: connaisseur\nLAST DEPLOYED: Fri Jul  9 20:43:10 2021\nNAMESPACE: connaisseur\nSTATUS: deployed\nREVISION: 1\nTEST SUITE: None\n

Afterwards, we can check that Connaisseur is running via kubectl get all -n connaisseur which should look similar to:

NAME                                          READY   STATUS    RESTARTS   AGE\npod/connaisseur-deployment-6876c87c8c-txrkj   1/1     Running   0          2m9s\npod/connaisseur-deployment-6876c87c8c-wvr7q   1/1     Running   0          2m9s\npod/connaisseur-deployment-6876c87c8c-rnc7k   1/1     Running   0          2m9s\n\nNAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE\nservice/connaisseur-svc   ClusterIP   10.152.183.166   <none>        443/TCP   2m10s\n\nNAME                                     READY   UP-TO-DATE   AVAILABLE   AGE\ndeployment.apps/connaisseur-deployment   3/3     3            3           2m9s\n\nNAME                                                DESIRED   CURRENT   READY   AGE\nreplicaset.apps/connaisseur-deployment-6876c87c8c   3         3         3       2m9s\n
"},{"location":"getting_started/#test-connaisseur","title":"Test Connaisseur","text":"

Now that we have created our key pair and configured and deployed Connaisseur, the next step is to test our setup. So let's create and push a test image. Feel free to use our simple test Dockerfile under tests/Dockerfile (make sure to set your own IMAGE name):

# Typically, IMAGE=<REGISTRY>/<REPOSITORY-NAME>/<IMAGE-NAME>:<TAG>, like\nIMAGE=docker.io/securesystemsengineering/demo:test\ndocker build -f tests/Dockerfile -t ${IMAGE} .\ndocker push ${IMAGE}\n

In case you have DCT turned on per default via the environment variable DOCKER_CONTENT_TRUST=1, you should disable it for now during the docker push by adding the --disable-content-trust=true flag.
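
For example, either of the following avoids pushing trust data for this test:

docker push ${IMAGE} --disable-content-trust=true\n# or disable DCT for the current shell\nexport DOCKER_CONTENT_TRUST=0\n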

If we try to deploy this unsigned image:

kubectl run test --image=${IMAGE}\n

Connaisseur denies the request due to lack of trust data or signed digest, e.g.:

Error from server: admission webhook \"connaisseur-svc.connaisseur.svc\" denied the request: Unable to get root trust data from default.\n# or\nError from server: admission webhook \"connaisseur-svc.connaisseur.svc\" denied the request: No trust data for image \"docker.io/securesystemsengineering/demo:test\".\n# or\nError from server: admission webhook \"connaisseur-svc.connaisseur.svc\" denied the request: could not find signed digest for image \"docker.io/securesystemsengineering/demo:test\" in trust data.\n

So let's sign the image and try again.

Docker Content Trust / Cosign

With DCT, signing works via docker push using the --disable-content-trust flag:

docker push ${IMAGE} --disable-content-trust=false\n

You will be prompted to provide your password and might be asked to set a new repository key. The trust data will then be pushed to the Docker Hub Notary server.

For Cosign, we use the private key file from the first step:

cosign sign --key cosign.key ${IMAGE}\n

You will be asked to enter your password, after which the signature data will be pushed to your repository.

After successful signing, we try again:

kubectl run test --image=${IMAGE}\n

Now, the request is admitted to the cluster and Kubernetes returns:

pod/test created\n

You did it! You just verified your first signed images in your Kubernetes cluster!

Read on to learn how to fully configure Connaisseur.

"},{"location":"getting_started/#cleanup","title":"Cleanup","text":"

To uninstall Connaisseur, use:

helm uninstall connaisseur --namespace connaisseur\n

Uninstallation can take a moment as Connaisseur needs to validate the deletion webhook.

"},{"location":"migrating_to_version_3/","title":"Migrate to Connaisseur version 3.0","text":"

It's been a while since our last major update, but it is time again: Connaisseur version 3.0 is out and brings along many new features, but also breaking changes. For those breaking changes, we've set up a script that migrates your existing Connaisseur configuration. Read on for the list of the most interesting changes.

"},{"location":"migrating_to_version_3/#how-to-migrate","title":"How to migrate","text":"
  1. Read the Major changes and API changes sections to get an overview of what's new
  2. Run python3 scripts/upgrade_to_version_3.py
  3. Check the diff of your helm/values.yaml and make sure everything is as expected
  4. Run make upgrade or alternatively helm upgrade connaisseur helm -n <your-namespace> --wait
  5. Enjoy the new version :crossed_fingers:
"},{"location":"migrating_to_version_3/#major-changes","title":"Major changes","text":""},{"location":"migrating_to_version_3/#minor-changes","title":"Minor changes","text":""},{"location":"migrating_to_version_3/#api-changes","title":"API changes","text":"

Here's the list of changes we made to the Helm values.yaml:

"},{"location":"threat_model/","title":"Threat Model","text":"

DEPRECATION WARNING

The threat model is OUTDATED and does not reflect the currently applicable architecture. It still serves as general guidance on relevant threats.

The STRIDE threat model has been used as a reference for threat modeling. Each of the STRIDE threats was matched to all entities relevant to Connaisseur, including Connaisseur itself. A description of how a threat on an entity manifests itself is given, as well as a possible counter measure.

images created by monkik from Noun Project

"},{"location":"threat_model/#1-developeruser","title":"(1) Developer/User","text":"Threat Description Counter Measure Spoofing A developer could be tricked into signing a malicious image, which subsequently will be accepted by Connaisseur. Security Awareness: Developers need to be aware of these attacks, so they can spot any attempts. Elevation of privilege An attacker could acquire the credentials of a developer or trick her into performing malicious actions, hence elevating their privileges to those of the victim. Depending on the victim's privileges, other attacks may be mounted. RBAC & Security Awareness: With proper Role-Based Access Control (RBAC), the effects of compromising an individual's account would help limit its impact and may mitigate the privilege escalation, of course depending on the victim's access level. Other than that, a security awareness training for developers can help minimize the chances of losing critical credentials."},{"location":"threat_model/#2-connaisseur-service","title":"(2) Connaisseur service","text":"Threat Description Counter Measure Spoofing An attacker could stop the original Connaisseur service and start their own version, to take over the admission controller's responsibilities. That way, the functionality of Connaisseur could be completely disabled or altered at will. RBAC: By only permitting a carefully selected group of people to start and stop services in the Connaisseur namespace, such attacks can be prevented. Tampering Given an attacker has access to the Connaisseur container, she could tamper with its source code, leading to forged responses or full compromise. The attacker could also stop the the original Connaisseur process and handle incoming requests some other way, which would be similar to the spoofing threat, but from inside the Connaisseur container. RBAC + Monitoring: Access to the inside of the container can be restricted with RBAC, so an attacker never gets there in the first place. In case the attacker already is inside the container, there are specific monitoring tools (e.g. falco), which are able to register changes inside containers and notify you, should Connaisseur be compromised. Tampering An attacker could modify Connaisseur's image policy to bypass signature verification and inject malicious images. Alternatively, the public root key could be replaced, allowing fake trust data to pass as legit. Lastly, the admission controller could be simply deactivated by deleting the webhook. RBAC + Monitoring: An RBAC system can prevent unauthorized changes to both the image policy and public root key. Additionally, the Connaisseur readiness probe checks the availability of the webhook and will be set to Not Ready should the webhook not be present. Monitoring should still be used to keep track of the admission controller's webhook availability status, as setting up a fake connaisseur-bootstrap-sentinel pod in the connaisseur namespace can bypass that readiness probe check. More on that in an upcoming architectural decision record. Denial of service When sending an extraordinary amount of requests to Connaisseur or triggering unexpected behavior, Connaisseur might become unresponsive or crash. As a result, image signatures can't be verified. Failure Policy: The webhook that is connected to Connaisseur denies all request automatically, should the Connaisseur service be unavailable. Thus, malicious images cannot enter the cluster. Additionally, multiple instances of Connaisseur can be run for better load balancing. 
Elevation of privilege Since Connaisseur interacts with the Kubernetes API, an attacker located inside the Connaisseur container can act on its behalf and use its permissions. RBAC: Per default, the Connaisseur service account only has read permissions to a few non-critical objects."},{"location":"threat_model/#3-notary-server","title":"(3) Notary server","text":"Threat Description Counter Measure Spoofing An attacker could mount a Monster-in-the-Middle attack between Notary and the Connaisseur service and act as a fake Notary, sending back false trust data. TLS: A TLS connection between Connaisseur and Notary ensures the Notary server's authenticity. Tampering With full control over the Notary server, the stored trust data can be manipulated to include digests of malicious images. Signatures: Changing the trust data would invalidate the signatures and thus fail the image verification. Additionally, the keys needed to create valid signatures are not stored in Notary, but offline on the client side. Information disclosure As Notary is responsible for creating the snapshot and timestamp signatures, an attacker could steal those private keys and create valid snapshot and timestamp signatures. Key rotation: The snapshot and timestamp keys can easily be rotated and changed frequently. The more critical root and target keys are not stored on the server side. Denial of service An extraordinary amount of requests to the Notary server could bring it down so that the Connaisseur service has no more trust data available to work with. Health Probe: Connaisseur's readiness and liveness probes check the Notary server's health every few seconds. Should Notary be unavailable, Connaisseur will switch into a not-ready state. As a consequence, the failure policy will automatically deny all requests."},{"location":"threat_model/#4-registry","title":"(4) Registry","text":"Threat Description Counter Measure Spoofing An attacker could mount a Monster-in-the-Middle attack between the registry and the Kubernetes cluster and act as a fake registry, sending back malicious images. TLS: A TLS connection between the Kubernetes cluster and the registry ensures that the registry is authentic. Tampering With full control over the registry, an attacker may introduce malicious images or change the layers of existing ones and thus inject malicious content. Image Digests: Introducing new images does not work as Connaisseur selects them by digest. An attacker would have to change the content of the corresponding digest layer, while the changes need to produce the same digest. Such a hash collision is considered practically impossible. If digests differ, the docker daemon underlying the cluster will deny the image. Denial of service An extraordinary amount of requests to the registry could bring it down, so that no images can be pulled from it. Out of scope: This threat is specific to registries, not Connaisseur."},{"location":"adr/","title":"Architecture Decision Records","text":"

We strive to make decisions taken during the development of Connaisseur transparent, whenever they may seem weird or unintuitive to someone new to the project.

Hence, when encountering a problem that either took considerable time to find a solution for or that spawned a lot of discussion, be it internal or from the community, the decision and the factors leading up to the particular choice should be documented. Additionally, we should make clear what other options were under consideration and why they were discarded, both to make the decision comprehensible to people not involved at the time and to avoid repeating discussions at a later point in time.

Since each Architecture Decision may be slightly different, the format is not completely set in stone. However, you should at least give the title, status, some context, the decisions taken, the options discarded and some reasoning as to why one option was deemed better than the others.

"},{"location":"adr/ADR-1_bootstrap-sentinel/","title":"ADR 1: Bootstrap Sentinel","text":""},{"location":"adr/ADR-1_bootstrap-sentinel/#status","title":"Status","text":"

Amended in ADR-3. Deprecated as of ADR-5.

"},{"location":"adr/ADR-1_bootstrap-sentinel/#context","title":"Context","text":"

Connaisseur's main components are a MutatingWebhookConfiguration and the Connaisseur Pods. The MutatingWebhookConfiguration intercepts requests to create or update Kubernetes resources and forwards them to the Connaisseur Pods tasked, on a high level, with verifying trust data. The order of deploying both components matters, since a blocking MutatingWebhookConfiguration without the Connaisseur Pods to answer its requests would also block the deployment of said Pods.

In #3 it was noted that prior to version 1.1.5 of Connaisseur, the Connaisseur Pods could report Ready while being non-functional due to the MutatingWebhookConfiguration missing. However, as stated above, the MutatingWebhookConfiguration can only be deployed after the Connaisseur Pods, which was solved by checking the Ready state of said Pods. If one were to add a dependency to this Ready state, such that it only shows Ready when the MutatingWebhookConfiguration exists, we run into a deadlock, where the MutatingWebhookConfiguration waits for the Pods and the Pods wait for the MutatingWebhookConfiguration.

"},{"location":"adr/ADR-1_bootstrap-sentinel/#considered-options","title":"Considered options","text":""},{"location":"adr/ADR-1_bootstrap-sentinel/#option-1","title":"Option 1","text":"

At the start of the Helm deployment, one can create a Pod named connaisseur-bootstrap-sentinel that will run for 5 minutes (which is also Helm's installation timeout). Connaisseur Pods will report Ready if 1) they can access notary AND either 2) the MutatingWebhookConfiguration exists OR 3) the connaisseur-bootstrap-sentinel Pod is still running. If 1) AND 2) both hold true, the sentinel is killed even if the 5 minutes have not passed yet.
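
As a minimal sketch (not the actual implementation), the readiness condition during the sentinel's lifetime boils down to:

def is_ready(notary_reachable: bool, webhook_exists: bool, sentinel_running: bool) -> bool:\n    # Ready requires notary access and either an existing webhook or a still-running sentinel\n    return notary_reachable and (webhook_exists or sentinel_running)\n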

"},{"location":"adr/ADR-1_bootstrap-sentinel/#option-2","title":"Option 2","text":"

Let Connaisseur's Pod readiness stay non-indicative of Connaisseur functioning and advertise that someone running Connaisseur has to monitor the MutatingWebhookConfiguration in order to ensure proper operation.

"},{"location":"adr/ADR-1_bootstrap-sentinel/#option-3","title":"Option 3","text":"

Deploy the MutatingWebhookConfiguration through Helm when Connaisseur Pods are healthy instead of when they are ready. Require a started Pod and a working notary connection for health and additionally require the existence of the MutatingWebhookConfiguration for readiness.

"},{"location":"adr/ADR-1_bootstrap-sentinel/#decision-outcome","title":"Decision outcome","text":"

We chose option 1 over option 2, because it was important to us that a brief glance at Connaisseur's Namespace allows one to judge whether it is running properly. Option 3 was not chosen as the readiness status of Pods can be easily seen from the Service, whereas the health status would require querying every single Pod individually. We deemed that to be a very ugly, non-kubernetes-y solution and hence decided against it.

"},{"location":"adr/ADR-1_bootstrap-sentinel/#positive-consequences","title":"Positive consequences","text":"

If the Connaisseur Pods report Ready during the connaisseur-bootstrap-sentinel's runtime, the MutatingWebhookConfiguration will be deployed by Helm. Otherwise, the Helm deployment will fail after its timeout period (default: 5min), since there won't be a running connaisseur-bootstrap-sentinel Pod anymore that resolves the installation deadlock. The Connaisseur Pods will never reach the Ready state and the MutatingWebhookConfiguration never gets deployed. This means we get consistent deployment failures after the initial waiting period if something did not work out. Additionally, if the MutatingWebhookConfiguration gets removed for whatever reason during operation, Connaisseur Pods will be failing, indicating their failed dependency. Hence, monitoring the Connaisseur Pods is sufficient to ensure they are working.

"},{"location":"adr/ADR-1_bootstrap-sentinel/#negative-consequences","title":"Negative consequences","text":"

On the other hand, if an adversary can deploy a Pod named connaisseur-bootstrap-sentinel to Connaisseur's Namespace, the Connaisseur Pods will always show Ready regardless of the MutatingWebhookConfiguration. However, if an adversary can deploy to Connaisseur's Namespace, chances are Connaisseur can be compromised anyway. More importantly, if not a single Connaisseur Pod is successfully deployed or if the notary healthcheck fails during the sentinel's lifetime, then the deployment will fail regardless of possible recovery at a later time. Another issue would be the connaisseur-bootstrap-sentinel Pod being left behind; however, since it has a very limited use case, we can also clean it up during the deployment, so apart from the minimal additional complexity of the deployment this is a non-issue.

"},{"location":"adr/ADR-2_release-management/","title":"ADR 2: Release Management","text":""},{"location":"adr/ADR-2_release-management/#status","title":"Status","text":"

Proposed

"},{"location":"adr/ADR-2_release-management/#context","title":"Context","text":"

During its initial development Connaisseur was more or less maintained by a single person and not released frequently. Hence, the easiest option was to just have the maintainer build and push at certain stages of development. With the influx of more team members, the number of contributions and hence the number of needed/reasonable releases went up. Also since publication, it is more important that the uploaded Connaisseur image corresponds to the most recent version referenced in the Helm chart.

A single person having to build, sign and push the images whenever a new pull request is accepted is hence impractical for both development and agility.

"},{"location":"adr/ADR-2_release-management/#considered-options","title":"Considered options","text":""},{"location":"adr/ADR-2_release-management/#choice-1","title":"Choice 1","text":"

What branches to maintain

"},{"location":"adr/ADR-2_release-management/#option-1","title":"Option 1","text":"

Continue with PRs from personal feature branches to master.

"},{"location":"adr/ADR-2_release-management/#option-2","title":"Option 2","text":"

Have a development branch against which to create pull requests (during usual development, hotfixes may be different).

Sub-options: - a develop (or similar) branch that will exist continuously - a v.1.5.0_dev (or similar) branch for each respective version

"},{"location":"adr/ADR-2_release-management/#choice-2","title":"Choice 2","text":"

Where to sign the images

"},{"location":"adr/ADR-2_release-management/#option-1_1","title":"Option 1","text":"

Have the pipeline build, sign and push the images.

"},{"location":"adr/ADR-2_release-management/#option-2_1","title":"Option 2","text":"

Have a maintainer build, sign and push the images.

"},{"location":"adr/ADR-2_release-management/#decision-outcome","title":"Decision outcome","text":"

For choice 1, we decided to go for two branches. On the one hand, master being the branch that contains the code of the latest release and will be tagged with release versions. On the other hand, there will be a develop branch that hosts the current state of development and will be merged to master whenever we want to create a new release.

This way we get rid of the current pain of releasing with every pull request at the cost of some overhead during release.

In the process of automating most of the release process, we will run an integration test with locally built images for pull requests to master. Regarding choice 2, whenever a pull request is merged, whoever merged it builds, signs and pushes the new Connaisseur release and tags the corresponding commit on the master branch with the new release version.

After the image is pushed and the new commit tagged, the pipeline will run the integration test with the image pulled from Docker Hub to ensure that the released version is working.

We decided for this option as it does not expose credentials to GitHub Actions, which we wanted to avoid especially in light of the recent GitHub Actions injection attacks, and as exposing them would also prevent us from opening up the repository to pull requests. To alleviate the work required for doing the steps outside the pipeline, we use a shell script that automates these steps given a suitable environment, i.e. Docker context and DCT keys.
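
A rough sketch of such a script (image name and version are placeholders, the actual script may differ):

export DOCKER_CONTENT_TRUST=1  # sign on push via DCT\ndocker build -t docker.io/securesystemsengineering/connaisseur:v1.2.3 .\ndocker push docker.io/securesystemsengineering/connaisseur:v1.2.3\ngit tag v1.2.3 && git push origin v1.2.3\n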

"},{"location":"adr/ADR-2_release-management/#positive-consequences","title":"Positive consequences","text":""},{"location":"adr/ADR-2_release-management/#negative-consequences","title":"Negative consequences","text":""},{"location":"adr/ADR-3_multi_notary_config/","title":"ADR 3: Multiple Notary Configuration","text":""},{"location":"adr/ADR-3_multi_notary_config/#status","title":"Status","text":"

Accepted

"},{"location":"adr/ADR-3_multi_notary_config/#context","title":"Context","text":"

Previously, Connaisseur only supported the configuration of a single notary, in which all signature data had to reside. Unfortunately this is rather impractical, as one doesn't create all signatures for all images one uses in a cluster. There is a need to access signature data from multiple places, like in a setup where most images come from a private registry + notary and some from Docker Hub and their notary.

There is also the problem that a single notary instance could use multiple root keys for creating the signatures, like in the case of Docker Hub. Connaisseur, however, only supports a single root key to be trust pinned, which is equally impractical.

That's why the decision was made to support more than one notary and multiple keys per notary, which leads to the question of how the new configuration should look. This also has implications on the notary health check, which is important for Connaisseur's own readiness check.

"},{"location":"adr/ADR-3_multi_notary_config/#considered-options","title":"Considered options","text":""},{"location":"adr/ADR-3_multi_notary_config/#choice-1","title":"Choice 1","text":"

The overall notary configuration setup in charts/connaisseur/values.yaml.

"},{"location":"adr/ADR-3_multi_notary_config/#option-1-per-notary","title":"Option 1 (Per Notary)","text":"

The notary field becomes a list and changes to notaries. There will be one entry in this list per notary instance to be used.

The entry will have the following data fields (bold are mandatory):

The image policy will have two additional fields per rule entry (fields in \"quotes\" are already present):

"},{"location":"adr/ADR-3_multi_notary_config/#option-2-per-notary-key","title":"Option 2 (Per Notary + Key)","text":"

The notary field becomes a list and changes to notaries. Per notary + public root key combination, there is one entry. Meaning, for example, there will be one entry for Docker Hub and the public key for all official images and there will be another entry for Docker Hub and the public key for some private images.

The entries will look identical to the ones from option 1, with two exceptions.

  1. The pub_root_keys field of the notary configurations won't be a list and only has a single entry, without needing to specify a key name.

  2. The image policy will only address the notary configuration to be chosen with the notary field, without the need for a key field.

"},{"location":"adr/ADR-3_multi_notary_config/#choice-2","title":"Choice 2","text":"

Default values for notary (and key) inside the image policy.

"},{"location":"adr/ADR-3_multi_notary_config/#option-1-first-item","title":"Option 1 (First item)","text":"

When no notary is specified in an image policy rule, the first entry in the notaries configuration list is taken. The same goes for the public root key list, should option 1 for choice 1 be chosen.

Problem: Might get inconsistent, should list ordering in python get shuffled around

"},{"location":"adr/ADR-3_multi_notary_config/#option-2-explicit-default","title":"Option 2 (Explicit default)","text":"

One of the notary configurations will be given a default field, which marks it as the default value.

Problem: No real problems here, just an extra field that the user has to care about.

"},{"location":"adr/ADR-3_multi_notary_config/#option-3-mandatory-notary","title":"Option 3 (Mandatory Notary)","text":"

The notary (and potentially key) field is mandatory for the image policy.

Problem: Creates configuration overhead if many image policies use the same notary/key combination.

"},{"location":"adr/ADR-3_multi_notary_config/#option-4-default-name","title":"Option 4 (Default name)","text":"

If no notary or key is given in the image policy, it is assumed that one of the elements in the notary list or key list has name: \"default\", which will then be taken. Should the assumption be wrong, an error is raised.
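
Combined with option 1 of choice 1, this could look roughly like the following sketch (values are placeholders):

notaries:\n- name: \"default\"\n  host: \"notary.docker.io\"\n  pub_root_keys:\n  - name: \"default\"\n    key: \"...\"\n- name: \"private\"\n  host: \"notary.example.com\"\n  pub_root_keys:\n  - name: \"ci\"\n    key: \"...\"\n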

"},{"location":"adr/ADR-3_multi_notary_config/#choice-3","title":"Choice 3","text":"

Previously, the readiness probe for Connaisseur also considered the notary's health for its own status. With multiple notary instances configured, this behavior changes.

"},{"location":"adr/ADR-3_multi_notary_config/#option-1-ignore-notary","title":"Option 1 (Ignore Notary)","text":"

The readiness probe of Connaisseur will no longer be dependent on any notary health checks. They are completely decoupled.

Problem: No knowledge that Connaisseur will automatically fail because of an unreachable notary, before one tries to deploy an image.

"},{"location":"adr/ADR-3_multi_notary_config/#option-2-health-check-on-all","title":"Option 2 (Health check on all)","text":"

In order for Connaisseur to be ready, all configured notaries must be healthy and reachable.

Problem: A single unreachable notary will \"disable\" Connaisseur's access to all others.

"},{"location":"adr/ADR-3_multi_notary_config/#option-3-log-notary-status","title":"Option 3 (Log Notary status)","text":"

A mix of options 1 and 2, where the readiness of Connaisseur is independent of the notaries' health checks, but the checks are still being made, so unhealthy notaries can be logged.

Problem: At what interval should this be logged?

"},{"location":"adr/ADR-3_multi_notary_config/#decision-outcome","title":"Decision outcome","text":""},{"location":"adr/ADR-3_multi_notary_config/#choice-1_1","title":"Choice 1","text":"

Option 1 was chosen to keep configuration duplication to a minimum.

"},{"location":"adr/ADR-3_multi_notary_config/#choice-2_1","title":"Choice 2","text":"

Option 4 was chosen. If more than one notary configuration, or more than one key within a configuration, is present, one of them can be called \"default\" (by setting the name field). That way it should be obvious which configuration or key will be used if none is further specified within the image policy, while keeping configuration effort low.

"},{"location":"adr/ADR-3_multi_notary_config/#choice-3_1","title":"Choice 3","text":"

Option 3 was chosen. Notary and Connaisseur will be completely decoupled, with Connaisseur logging all notaries it can't reach. This way Connaisseur can still be operational, even with all notaries being unreachable. Otherwise Connaisseur would have blocked even images that were allowlisted. This is a breaking change, but we agreed that it is better as it allows e.g. deployments for which the respective image policy specifies verify: false.

"},{"location":"adr/ADR-4_modular/","title":"ADR 4: Modular Validation","text":""},{"location":"adr/ADR-4_modular/#status","title":"Status","text":"

Accepted

"},{"location":"adr/ADR-4_modular/#context","title":"Context","text":"

With the advent of notaryv2 and similar projects like Cosign, the opportunity arises for Connaisseur to support multiple signing mechanisms and combine them all into a single validation tool. For that to work, the internal validation mechanism of Connaisseur needs to be more modular, so we can easily swap different methods in and out.

"},{"location":"adr/ADR-4_modular/#considered-options","title":"Considered options","text":""},{"location":"adr/ADR-4_modular/#configuration-changes-choice-1","title":"Configuration changes (Choice 1)","text":"

Obviously some changes have to be made to the configuration of Connaisseur, but this splits into changes for the previous notary configurations and the image policy.

"},{"location":"adr/ADR-4_modular/#notary-configuration-11","title":"\"Notary\" configuration (1.1)","text":"

With notaryv1, all trust data always resided in a notary server for which Connaisseur needed the URL, authentication credentials, etc. This isn't true anymore for notaryv2 or Cosign. Here, Connaisseur may need other data, meaning the configuration depends on the type of validation method used. Other mechanisms, such as digest whitelisting, which doesn't even include cryptographic material, might also be considered in the future.

"},{"location":"adr/ADR-4_modular/#111-structure","title":"1.1.1 Structure","text":""},{"location":"adr/ADR-4_modular/#option-1111","title":"Option 1.1.1.1","text":"

The previous notaries section in the values.yaml changes to validators, in which different validation methods (validators) can be defined. The minimum fields a validator requires are a name for later referencing and a type that determines its kind.

validators:\n- name: \"dockerhub-nv2\"\n  type: \"notaryv2\"\n  ...\n- name: \"harbor-nv1\"\n  type: \"notaryv1\"\n  host: \"notary.harbor.io\"\n  root_keys:\n    - name: \"default\"\n      key: \"...\"\n- name: \"cosign\"\n  type: \"cosign\"\n  ...\n

Depending on the type, additional fields might be required, e.g. the notaryv1 type requires a host and root_keys field.

NB: JSON schema validation works for the above and can easily handle various configurations based on type in there.

"},{"location":"adr/ADR-4_modular/#decision","title":"Decision","text":"

We are going with this structure (option 1.1.1.1) due to the lack of other alternatives. It provides all needed information and the flexibility to use multiple validation methods, as needed.

"},{"location":"adr/ADR-4_modular/#112-sensitive-values","title":"1.1.2 Sensitive values","text":"

If we allow multiple validators that may contain different forms of sensitive values, i.e. notary credentials, symmetric keys, service principals, ..., they need to be properly handled within the Helm chart with respect to ConfigMaps and Secrets. Currently, the distinction is hard-coded.

"},{"location":"adr/ADR-4_modular/#option-1121","title":"Option 1.1.2.1","text":"

Add an optional sensitive([-_]fields) field at the validator config top level. Any sensitive values go in there and will be handled by the Helm chart to go into a secret. Any other values are treated as public and go into the ConfigMap.
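
As an illustration (reusing field names from the validator example above, with placeholder credentials), a validator entry could have looked like:

validators:\n- name: \"harbor-nv1\"\n  type: \"notaryv1\"\n  host: \"notary.harbor.io\"\n  root_keys:\n  - name: \"default\"\n    key: \"...\"\n  sensitive:\n    username: \"myuser\"\n    password: \"mypass\"\n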

Advantages: - Generic configuration - Could be used by potential plugin validators to have their data properly handled (potential future) - Hard to forget the configuration for newly implemented validators

Disadvantage: If implemented in a config = merge(secret, configmap) way, might allow sensitive values in configmap and Connaisseur still working

"},{"location":"adr/ADR-4_modular/#option-1122","title":"Option 1.1.2.2","text":"

Hard-code sensitive values based on validator type

Advantages: Can do very strict validation on fields without extra work

Disadvantages: - Helm chart change might be forgotten for new validator - Helm chart release required for new validator - Does not \"natively\" allow plugins

"},{"location":"adr/ADR-4_modular/#decision_1","title":"Decision","text":"

We are going with option 1.1.2.2 and hard-code the sensitive fields, to prevent users from misconfiguring Connaisseur and accidentally putting sensitive parts into ConfigMaps.

"},{"location":"adr/ADR-4_modular/#image-policy-12","title":"Image policy (1.2)","text":"

For the image policy similar changes to the notary configuration have to be made.

"},{"location":"adr/ADR-4_modular/#proposition","title":"Proposition","text":"

The previous notary field in the image policy will be changed to validator, referencing a name field of one item in the validators list. Any additional fields, e.g. required delegation roles for a notaryv1 validator, will be given in a with field. This will look similar to the following:

policy:\n- pattern: \"docker.harbor.io/*:*\"\n  validator: \"harbor-nv1\"\n  with:\n    key: \"default\"\n    delegations:\n    - lou\n    - max\n- pattern: \"docker.io/*:*\"\n  validator: \"dockerhub-nv2\"\n
"},{"location":"adr/ADR-4_modular/#option-1211","title":"Option 1.2.1.1","text":"

Besides the self-configured validators, two additional validators will be available: allow and deny. The allow validator will allow any image and the deny validator will deny everything.

Advantages: More powerful than verify flag, i.e. has explicit deny option.

Disadvantages: More config changes for users

"},{"location":"adr/ADR-4_modular/#option-1212","title":"Option 1.2.1.2","text":"

Stick with current verify flag.

Advantages: Config known for current users

Disadvantages: No explicit deny option

"},{"location":"adr/ADR-4_modular/#decision_2","title":"Decision","text":"

We are going with option 1.2.1.1, as we don't have to use additional fields and it offers more powerful configuration options.

"},{"location":"adr/ADR-4_modular/#option-1221","title":"Option 1.2.2.1","text":"

When no validator is given, default to the deny validator.

Advantages: Easy

Disadvantages: Not explicit

"},{"location":"adr/ADR-4_modular/#option-1222","title":"Option 1.2.2.2","text":"

Require validator in policy config.

Advantages: Explicit configuration, no accidental denying images

Disadvantages: ?

"},{"location":"adr/ADR-4_modular/#decision_3","title":"Decision","text":"

We are going with option 1.2.2.1 as it reduces configurational effort and is consistent with the key selection behavior.

"},{"location":"adr/ADR-4_modular/#option-1231","title":"Option 1.2.3.1","text":"

The validators from option 1.2.1.1 (allow and deny) will be purely internal, and additional validators cannot be named \"allow\" or \"deny\".

Advantages: Less configurational effort

Disadvantage: A bit obscure for users

"},{"location":"adr/ADR-4_modular/#option-1232","title":"Option 1.2.3.2","text":"

The allow and deny validators will be added to the default configuration as type: static with an extra argument (name up for discussion) that specifies whether everything should be denied or allowed. E.g.:

validators:\n- name: allow\n  type: static\n  approve: true\n- name: deny\n  type: static\n  approve: false\n- ...\n

Advantages: No obscurity; if users don't need these, they can delete them.

Disadvantage: Bigger config file ...?

"},{"location":"adr/ADR-4_modular/#decision_4","title":"Decision","text":"

We are going with option 1.2.3.2 as we favor less obscurity over the \"bigger\" configurational \"effort\".

"},{"location":"adr/ADR-4_modular/#validator-interface-choice-2","title":"Validator interface (Choice 2)","text":"

See validator interface

Should validation return JSON patch or digest?

"},{"location":"adr/ADR-4_modular/#option-211","title":"Option 2.1.1","text":"

Validator.validate creates a JSON patch for the k8s request. Hence, different validators might make changes in addition to transforming tag to digest.

Advantages: More flexibility in the future

Disadvantages: We open the door to changes that are not core to Connaisseur functionality

"},{"location":"adr/ADR-4_modular/#option-212","title":"Option 2.1.2","text":"

Validator.validate returns a digest and Connaisseur uses the digest in a \"standardized\" way to create a JSON patch for the k8s request.
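
For illustration, the resulting JSON patch (per RFC 6902, paths and digest are placeholders) essentially swaps the tag for the trusted digest:

[\n  {\n    \"op\": \"replace\",\n    \"path\": \"/spec/containers/0/image\",\n    \"value\": \"docker.io/securesystemsengineering/testimage@sha256:<digest>\"\n  }\n]\n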

Advantage: No code duplication and we stay with core feature of translating input data to trusted digest

Disadvantages: Allowing additional changes would require additional work if we wanted to allow them in the future

"},{"location":"adr/ADR-4_modular/#decision_5","title":"Decision","text":"

We are going with option 2.1.2 as all current and upcoming validation methods return a digest.

"},{"location":"adr/ADR-5_no-more-bootstrap/","title":"ADR 5: No More Bootstrap Pods","text":""},{"location":"adr/ADR-5_no-more-bootstrap/#status","title":"Status","text":"

Accepted

"},{"location":"adr/ADR-5_no-more-bootstrap/#context","title":"Context","text":"

Installing Connaisseur isn't as simple as one might think. There is more to it than just applying some yaml files, all due to the nature of being an admission controller, which might block itself in various ways. This ADR depicts some issues during installation of Connaisseur and shows solutions that try to make the process simpler and easier to understand.

"},{"location":"adr/ADR-5_no-more-bootstrap/#problem-1-installation-order","title":"Problem 1 - Installation order","text":"

Connaisseur's installation order is fairly critical. The webhook responsible for intercepting all requests is dependent on the Connaisseur pods and can only work if those pods are available and ready. If not and FailurePolicy is set to Fail, the webhook will block anything and everything, including the Connaisseur pods themselves. This means the webhook must be installed after the Connaisseur pods are ready. This was previously solved using the post-install Helm hook, which installs the webhook configuration after all other resources have been applied and are considered ready. Just for installation purposes, this solution suffices. A downside of this is that every resource installed via a Helm hook isn't natively considered to be part of the chart, meaning a helm uninstall would completely ignore those resources and leave the webhook configuration in place. Then the situation of everything and anything being blocked arises again. Additionally, upgrading won't be possible, since you can't tell Helm to temporarily delete resources and then reapply them. That's why the helm-hook image and bootstrap-sentinel were introduced. They were used to temporarily delete the webhook and reapply it before and after installations, in order to beat the race conditions. Unfortunately, this solution always felt a bit clunky and added a lot of complexity for a seemingly simple problem.

"},{"location":"adr/ADR-5_no-more-bootstrap/#solution-11-empty-webhook-as-part-of-helm-release","title":"Solution 1.1 - Empty webhook as part of Helm release","text":"

The bootstrap sentinel and helm-hook image won't be used anymore. Instead, an empty webhook configuration (a configuration without any rules) will be applied along with all other resources during the normal Helm installation phase. This way the webhook can be normally deleted with the helm uninstall command. Additionally, during the post-install (and post-upgrade/post-rollback) Helm hook, the webhook will be updated so it can actually intercept incoming requests. So in a sense an unloaded webhook gets installed, which then gets \"armed\" during post-install. This also works during an upgrade, since the now \"armed\" webhook will be overwritten by the empty one when trying to apply the chart again! This will obviously be reverted after upgrading, with a post-upgrade Helm hook.

Pros: Less clunky and more k8s native. Cons: Connaisseur will be deactivated for a short time during upgrading.

"},{"location":"adr/ADR-5_no-more-bootstrap/#solution-12-bootstrap-sentinel-and-helm-hook","title":"Solution 1.2 - Bootstrap Sentinel and Helm hook","text":"

Everything stays as is! The Helm hook image is still used to (un)install the webhook, while the bootstrap sentinel is there to mark the Connaisseur pods as ready for initial installation.

Pros: Never change a running system. Cons: Clunky, at times confusing for anyone not familiar with the Connaisseur installation order problem, inactive webhook during upgrade.

"},{"location":"adr/ADR-5_no-more-bootstrap/#solution-13-uninstallation-of-webhook-during-helm-hooks","title":"Solution 1.3 - (Un)installation of webhook during Helm hooks","text":"

The webhook can be easily installed during the post-install step of the Helm installation, but then it isn't part of the Helm release and can't be uninstalled, as mentioned above. With a neat little trick this is still possible: in the post-delete step the webhook can be reapplied in an empty (\"unarmed\") form, while setting the hook-delete-policy to delete the resource either way (no matter if the Helm hook step fails or not). So in a way the webhook gets reapplied and then immediately deleted. This still works with upgrading Connaisseur if a rolling update strategy is pursued, meaning the old pods will still be available for admitting the new ones, while with more and more new pods being ready, the old ones get deleted.
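
In Helm terms, the \"unarmed\" webhook applied in the post-delete step could be annotated roughly like this (a sketch, not the exact chart content):

apiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nmetadata:\n  name: connaisseur-webhook  # placeholder name\n  annotations:\n    helm.sh/hook: post-delete\n    helm.sh/hook-delete-policy: hook-succeeded,hook-failed  # remove the hook resource whether the step fails or not\nwebhooks: []  # no rules, nothing is intercepted\n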

Pros: Less clunky and more k8s native, no inactivity of the webhook during upgrade. Cons: Slower upgrade of Connaisseur compared to solution 1.

"},{"location":"adr/ADR-5_no-more-bootstrap/#decision-outcome-1","title":"Decision outcome (1)","text":"

Solution 1.3 was chosen, as it is the more Kubernetes native way of doing things and Connaisseur will be always available, even during its own upgrade.

"},{"location":"adr/ADR-5_no-more-bootstrap/#problem-2","title":"Problem 2","text":"

All admission webhooks must use TLS for communication purposes or they won't be accepted by Kubernetes. That is why Connaisseur creates its own self-signed certificate, which it uses for communication between the webhook and its pods. This certificate is created within the Helm chart, using the native genSelfSignedCert function, which makes Connaisseur pipeline-friendly as there is no need for additional package installation such as OpenSSL. Unfortunately, this certificate gets created every time Helm is used, be it a helm install or a helm upgrade. Especially during an upgrade, the webhook will get a new certificate, while the pods will get their new one written into a secret. The problem is that the pods will only pick up the new certificate inside the secret once they are restarted. If no restart happens, the pods and webhook will have different certificates and any validation will fail.

"},{"location":"adr/ADR-5_no-more-bootstrap/#solution-21-lookup","title":"Solution 2.1 - Lookup","text":"

Instead of always generating a new certificate, the lookup function for Helm templates could be used to see whether there already is a secret defined that contains a certificate and then use this one. This way the same certificate is reused the whole time, so no pod restarts are necessary. Should there be no secret with a certificate to begin with, a new one can be generated within the Helm chart.
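
A sketch of this approach in the Helm template (secret name and fields are illustrative) could be:

{{- $secret := lookup \"v1\" \"Secret\" .Release.Namespace \"connaisseur-tls\" }}\n{{- if $secret }}\ntls.crt: {{ index $secret.data \"tls.crt\" }}  # reuse the existing certificate\n{{- else }}\n{{- $cert := genSelfSignedCert \"connaisseur-svc.connaisseur.svc\" nil nil 365 }}\ntls.crt: {{ $cert.Cert | b64enc }}  # generate a fresh self-signed certificate on first install\n{{- end }}\n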

Pros: No need for restarts and changing of TLS certificates. Cons: The lookup function takes some time to gather the current certs.

"},{"location":"adr/ADR-5_no-more-bootstrap/#solution-22-restart","title":"Solution 2.2 - Restart","text":"

On each upgrade of the Helm release, all pods will be restarted so they incorporate the new TLS secrets.

Pros: - Cons: Restarting takes time and may break if too many Connaisseur pods are unavailable at the same time.

"},{"location":"adr/ADR-5_no-more-bootstrap/#solution-23-external-tls","title":"Solution 2.3 - External TLS","text":"

Go back to using an external TLS certificate which is not being generated within the Helm chart, but by pre-configuring it or using OpenSSL.

Pros: Fastest solution. Cons: More configurational effort and/or not pipeline friendly (may need OpenSSL).

"},{"location":"adr/ADR-5_no-more-bootstrap/#decision-outcome-2","title":"Decision outcome (2)","text":"

Solution 2.1 is being implemented, as it is important that Connaisseur works with as little configuration effort as possible from the get-go. Nonetheless an external configuration of TLS certificates is still considered for later development.

--

"},{"location":"adr/ADR-6_dynamic-config/","title":"ADR 6: Dynamic Configuration","text":""},{"location":"adr/ADR-6_dynamic-config/#status","title":"Status","text":"

Accepted

"},{"location":"adr/ADR-6_dynamic-config/#context","title":"Context","text":"

The configuration of validators is mounted into Connaisseur as a ConfigMap, as is common practice in the Kubernetes ecosystem. When this ConfigMap is upgraded, say with a helm upgrade, the resource itself in Kubernetes is updated accordingly, but that doesn't mean it's automatically updated inside the pods that mounted it. That only occurs once the pods are restarted, and until they are, the pods still have an old version of the configuration lingering around. This is a fairly unintuitive behavior and the reason why Connaisseur doesn't mount the image policy into its pods. Instead, the pods have access to the kube API and get the image policy dynamically from there. The same could be done for the validator configuration, but there is also another solution.

"},{"location":"adr/ADR-6_dynamic-config/#problem-1-access-to-configuration","title":"Problem 1 - Access to configuration","text":"

How should Connaisseur get access to its configuration files?

"},{"location":"adr/ADR-6_dynamic-config/#solution-11-dynamic-access","title":"Solution 1.1 - Dynamic access","text":"

This is the same solution as currently employed for the image policy configuration. The validators will get their own CustomResourceDefinition and Connaisseur gets access to this resource via RBAC so it can use the kube API to read the configuration.

Pros: Pods don't need to be restarted and the configuration can be changed \"on the fly\", without using Helm. Cons: Not a very Kubernetes native approach and Connaisseur must always do some network requests to access its config.

"},{"location":"adr/ADR-6_dynamic-config/#solution-12-restart-pods","title":"Solution 1.2 - Restart pods","text":"

The other solution would be to use ConfigMaps for validators and image policy and then restart the pods once there are changes in the configurations. This can be achieved by setting the hash of the config files as annotations on the deployment. If there are changes in the configuration, the hash will change and thus a new deployment will be rolled out as it has a new annotation. This corresponds to the suggestion made by Helm.
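
This is the pattern suggested in the Helm documentation, roughly (template path is illustrative):

spec:\n  template:\n    metadata:\n      annotations:\n        checksum/config: {{ include (print $.Template.BasePath \"/configmap.yaml\") . | sha256sum }}\n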

Pros: Kubernetes native and no more CustomResourceDefinitions! Cons: No more \"on the fly\" changes.

"},{"location":"adr/ADR-6_dynamic-config/#decision-outcome-1","title":"Decision Outcome (1)","text":"

Solution 1.2 was chosen, going with the more Kubernetes native way.

"},{"location":"adr/ADR-6_dynamic-config/#problem-2-how-many-configmaps-are-too-many","title":"Problem 2 - How many configmaps are too many?","text":"

When both the image policy and validator configurations are either CustomResourceDefinitions or ConfigMaps, is there still a need to separate them or can they be merged into one file?

"},{"location":"adr/ADR-6_dynamic-config/#solution-21-2-concerns-2-resources","title":"Solution 2.1 - 2 concerns, 2 resources","text":"

There will be 2 resources, one for the image policy and one for the validators.

"},{"location":"adr/ADR-6_dynamic-config/#solution-22-one-to-rule-them-all","title":"Solution 2.2 - One to rule them all","text":"

One Ring to rule them all, One Ring to find them, One Ring to bring them all and in the darkness bind them.

"},{"location":"adr/ADR-6_dynamic-config/#decision-outcome-2","title":"Decision Outcome (2)","text":"

Solution 2.2 was chosen as it is the simpler of the two.

"},{"location":"adr/ADR-7_wsgi-server/","title":"ADR 7: WSGI Server","text":""},{"location":"adr/ADR-7_wsgi-server/#status","title":"Status","text":"

Accepted

"},{"location":"adr/ADR-7_wsgi-server/#context","title":"Context","text":"

We were running the Flask WSGI application with the built-in Flask server, which is not meant for production. Problems are mainly due to a potential debug shell on the server and a single thread in the default configuration. Both were mitigated in our setup, but we decided to test a proper WSGI server at some point. Especially the log entry

 * Serving Flask app 'connaisseur.flask_server' (lazy loading)\n * Environment: production\n   WARNING: This is a development server. Do not use it in a production deployment.\n   Use a production WSGI server instead.\n
did cause anguish among users, see e.g. issue 11.

"},{"location":"adr/ADR-7_wsgi-server/#considered-options","title":"Considered options","text":""},{"location":"adr/ADR-7_wsgi-server/#choice-1-wsgi-server","title":"Choice 1: WSGI server","text":"

There are plenty of WSGI servers around and the question poses itself which one to pick. Flask itself has a list of servers, and there are comparisons around, for example here and here. The choice of which WSGI servers to test was somewhat arbitrary among the better performing ones in the posts.

Contenders were Bjoern, Cheroot, Flask, Gunicorn and uWSGI. Bjoern was immediately dropped, since it worked only with Python 2. Later, during testing, Bjoern did support Python 3, but not TLS, so we stuck to dropping it. Gunicorn was tested for a bit, but since it delivered worse results than the others and requires a writable worker-tmp-dir directory, it was also dropped from contention.

The remaining three were tested over a rather long time of development, i.e. from before the first bit of validation parallelization to after the 2.0 release. All tests were run on local minikube/kind clusters with rather constrained resources in the expectation that this will still provide reasonable insight into the servers' behavior on regular production clusters.

"},{"location":"adr/ADR-7_wsgi-server/#test-results","title":"Test results","text":"

Since the results span a longer timeframe and the early tests were mainly aimed at finding some way to distinguish the servers rather than following a clear plan, some tests feature a different configuration. Unless specified otherwise, Cheroot was run with its default configuration (minimum of 10 threads, no maximum limit), Flask in its default configuration and uWSGI with 2 processes and 1 thread (kept low because it already has a bigger footprint when idle). Connaisseur itself was configured with its default of 3 pods.

"},{"location":"adr/ADR-7_wsgi-server/#integration-test","title":"Integration test","text":""},{"location":"adr/ADR-7_wsgi-server/#before-parallelization","title":"Before parallelization","text":"

Before parallelization was implemented, there were tests running the integration test on the cluster and counting how often the test failed.

The error rate across 50 executions was 8% (4/50) for Cheroot, 22% (11/50) for Flask and 12% (6/50) for uWSGI. The error rates were likely this high because the non-parallelized fetching of Notary trust data regularly took around 25 seconds against a maximum timeout of 30 seconds.

"},{"location":"adr/ADR-7_wsgi-server/#with-simple-parallelization","title":"With simple parallelization","text":"

After parallelization (of fetching base trust data) was added, the tests were rerun. This time all 50 checks for all servers were run together with randomized order of servers for each of the 50 test runs.

Error rates were 4% (2/50) for Cheroot and 6% (3/50) for uWSGI. Flask was not tested.

"},{"location":"adr/ADR-7_wsgi-server/#stress-tests","title":"Stress tests","text":""},{"location":"adr/ADR-7_wsgi-server/#complex-requests","title":"Complex requests","text":"

There was a test setup with complex individual requests containing multiple different initContainers and containers or many instantiations of a particular image.

The test was performed using kubectl apply -f loadtest.yaml on the below file.

loadtest.yaml
\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: redis-with-many-instances\n  labels:\n    app: redis\n    loadtest: loadtest\nspec:\n  selector:\n    matchLabels:\n      app: redis\n  replicas: 1000\n  template:\n    metadata:\n      labels:\n        app: redis\n    spec:\n      containers:\n      - name: redis\n        image: redis\n\n---\n\napiVersion: v1\nkind: Pod\nmetadata:\n  name: pod-with-many-containers\n  labels:\n    loadtest: loadtest\nspec:\n  containers:\n  - name: container1\n    image: busybox\n    command: ['sh', '-c', 'sleep 3600']\n  - name: container2\n    image: redis\n  - name: container3\n    image: node\n  - name: container4\n    image: nginx\n  - name: container5\n    image: rabbitmq\n  - name: container6\n    image: elasticsearch\n  - name: container7\n    image: sonarqube\n\n---\n\napiVersion: v1\nkind: Pod\nmetadata:\n  name: pod-with-many-containers-and-init-containers\n  labels:\n    loadtest: loadtest\nspec:\n  containers:\n  - name: container1\n    image: busybox\n    command: ['sh', '-c', 'sleep 3600']\n  - name: container2\n    image: redis\n  - name: container3\n    image: node\n  - name: container4\n    image: nginx\n  - name: container5\n    image: rabbitmq\n  - name: container6\n    image: elasticsearch\n  - name: container7\n    image: sonarqube\n  initContainers:\n  - name: init2\n    image: maven\n  - name: init3\n    image: vault\n  - name: init4\n    image: postgres\n\n---\n\napiVersion: v1\nkind: Pod\nmetadata:\n  name: pod-with-some-containers-and-init-containers\n  labels:\n    loadtest: loadtest\nspec:\n  containers:\n  - name: container1\n    image: busybox\n    command: ['sh', '-c', 'sleep 3600']\n  - name: container2\n    image: redis\n  - name: container3\n    image: node\n  - name: container4\n    image: nginx\n  initContainers:\n  - name: container5\n    image: rabbitmq\n  - name: container6\n    image: elasticsearch\n  - name: container7\n    image: sonarqube\n\n---\n\napiVersion: v1\nkind: Pod\nmetadata:\n  name: pod-with-coinciding-containers-and-init-containers\n  labels:\n    loadtest: loadtest\nspec:\n  containers:\n  - name: container1\n    image: busybox\n    command: ['sh', '-c', 'sleep 3600']\n  - name: container2\n    image: redis\n  - name: container3\n    image: node\n  initContainers:\n  - name: init1\n    image: busybox\n    command: ['sh', '-c', 'sleep 3600']\n  - name: init2\n    image: redis\n  - name: init3\n    image: node\n

None of the servers regularly managed to pass this particular loadtest. However, the pods powered by the Flask server regularly died and had to be restarted, whereas both Cheroot and uWSGI had nearly no restarts and never on all instances at once. uWSGI even managed to pass the test on rare occasions.

"},{"location":"adr/ADR-7_wsgi-server/#less-complex-requests-with-some-load","title":"Less complex requests with some load","text":"

Since in the above the most complex request was the bottleneck, we tried an instance of the test with less complexity in the individual requests but more requests instead. However, that led to no real distinguishing behaviour across the servers.

"},{"location":"adr/ADR-7_wsgi-server/#load-test","title":"Load test","text":"

To check the servers' behaviour when hit with lots of (easy) requests at the same time, we also implemented an actual load test. We ran parallel --jobs 20 ./testn.sh {1} :::: <(seq 200) and parallel --jobs 50 ./testn.sh {1} :::: <(seq 200) with the files below.

File contents testn.sh
\nnr=$1\n\ntmpf=$(mktemp)\n# substitute ${nr} into the manifest template and apply the result\nnr=${nr} envsubst < loadtest3.yaml > ${tmpf}\n\nkubectl apply -f ${tmpf}\n\nloadtest3.yaml\n
\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: redis-${nr}\n  labels:\n    app: redis\n    loadtest: loadtest\nspec:\n  selector:\n    matchLabels:\n      app: redis\n  replicas: 1\n  template:\n    metadata:\n      labels:\n        app: redis\n    spec:\n      containers:\n      - name: redis\n        image: redis\n

Afterwards, we checked how many of the pods were actually created.

Server | Created pods (parallel 20 jobs) | Created pods (parallel 50 jobs)\nCheroot | 173 | 78\nCheroot (numthreads=40) | - | 81\nFlask | 173 | 81\nuWSGI | 49 | -\nuWSGI (1 process, 10 threads) | 164 | 35\nuWSGI (4 processes, 10 threads) | 146 | 135\nuWSGI (1 process, 40 threads) | 164 | 112\n

Interestingly, Flask (narrowly) performs best in this test under strong load, though not under massive load, and for both Cheroot and uWSGI adding further parallelization doesn't necessarily improve stability, even though intuitively it should. For 50 parallel jobs, the low creation rate is due to the pods dying at some point during the barrage.


Resource consumption measured via kubectl top pods -n connaisseur during the loadtest:


Shown is a representative sample from across multiple invocations, only at 20 jobs, since for 50 jobs the pods most often died and the metrics API is slow to give accurate information after a restart.


Cheroot\n

NAME                                      CPU(cores)   MEMORY(bytes)\nconnaisseur-deployment-644458d686-2tfjp   331m         46Mi\nconnaisseur-deployment-644458d686-kfzdq   209m         44Mi\nconnaisseur-deployment-644458d686-t57lp   321m         53Mi\n


Flask\n

NAME                                      CPU(cores)   MEMORY(bytes)\nconnaisseur-deployment-644458d686-t6c24   381m         42Mi\nconnaisseur-deployment-644458d686-thgzd   328m         42Mi\nconnaisseur-deployment-644458d686-wcprp   235m         38Mi\n


uWSGI (1 process, 10 threads)\n

NAME                                     CPU(cores)   MEMORY(bytes)\nconnaisseur-deployment-d86fbfcd8-9c5m7   129m         63Mi\nconnaisseur-deployment-d86fbfcd8-hv6sp   309m         67Mi\nconnaisseur-deployment-d86fbfcd8-w46dz   298m         67Mi\n

"},{"location":"adr/ADR-7_wsgi-server/#option-11-flask","title":"Option 1.1: Flask","text":"

Staying with the Flask server is obviously an option. It doesn't resolve the problem, but it has served us well and there are no known problems with its usage in practice.


However, the authors discourage using it:


When running publicly rather than in development, you should not use the built-in development server (flask run). The development server is provided by Werkzeug for convenience, but is not designed to be particularly efficient, stable, or secure. source


and it performs worst by far for complex requests.

"},{"location":"adr/ADR-7_wsgi-server/#option-12-cheroot","title":"Option 1.2: Cheroot","text":"

Cheroot performs better than Flask for complex requests and better than uWSGI when under strong load. However, when under massive load, even increasing its minimum number of threads doesn't really add a lot to its stability.


In addition, it seems to be less known and is not among the servers that the Flask project lists. On the other hand, its memory footprint is better than uWSGI's and almost on par with Flask's, whereas its CPU footprint is on par with uWSGI's and slightly better than Flask's.

"},{"location":"adr/ADR-7_wsgi-server/#option-12-uwsgi","title":"Option 1.2: uWSGI","text":"

uWSGI (narrowly) has the best showing for complex requests, but performs worst under strong load. However, when dealing with massive load, scaling its resources allows uWSGI to significantly outperform the other options.


Its memory footprint is higher than for Cheroot and Flask, but its CPU footprint is on par with Cheroot and slightly better than Flask's.

"},{"location":"adr/ADR-7_wsgi-server/#decision","title":"Decision","text":"

We chose option 1.2 and will for now go forward with Cheroot as the WSGI server. The decision was based on the server performing best in the relevant parts of the stress and load tests.

"},{"location":"adr/ADR-8_go-transition/","title":"ADR 8: Transition to Golang","text":""},{"location":"adr/ADR-8_go-transition/#status","title":"Status","text":"

Accepted

"},{"location":"adr/ADR-8_go-transition/#context","title":"Context","text":"

Connaisseur was originally written in Python, mostly because of the team's language preference. This was completely fine and worked for many years, but over time it became apparent that other programming languages might be better suited for the task, namely Golang. The main reasons for this are:

This ADR discusses whether a transition to Golang is worth the effort and how it would play out.

"},{"location":"adr/ADR-8_go-transition/#considered-options","title":"Considered Options","text":""},{"location":"adr/ADR-8_go-transition/#option-1-stay-with-python","title":"Option 1: Stay with Python","text":"

No transition will be made. The Python code base is kept and continuously developed. Resources can be spent on improving the existing code base and adding new features. Adding new signature schemes will be more difficult, as they either have to be implemented in Python or other workarounds have to be found.

"},{"location":"adr/ADR-8_go-transition/#option-2-transition-to-golang","title":"Option 2: Transition to Golang","text":"

The Python code base is abandoned and a new code base is written in Golang. This will allow for easier integration of new signature schemes and a more secure container image. It will also open up the project to the Kubernetes/Golang community, while shutting down the Python one. The transition will require a lot of work and will take some time.

We transition to Golang, which will require an entirely new code base \ud83d\ude25 This comes with all the benefits mentioned above, but also with a lot of work. Additionally, the team's knowledge of the language is rather limited at this time.

There were some efforts by @phbelitz to transition to Golang, of which the following parts are still missing (compared to the Python version):

Also, none of the Golang code has yet been reviewed by a second pair of eyes.

"},{"location":"adr/ADR-8_go-transition/#decision-outcome","title":"Decision Outcome","text":"

We develop a Golang version in parallel to continued support of the Python version. The Golang version should not be a breaking change, so that we can use existing tests to keep confidence in the new version. Once the Golang version is ready, we swap it in for the Python version in a feature release.

"},{"location":"adr/ADR-9_multi-pod/","title":"ADR 9: Multi Pod Architecture","text":""},{"location":"adr/ADR-9_multi-pod/#status","title":"Status","text":"

Undecided

"},{"location":"adr/ADR-9_multi-pod/#context","title":"Context","text":"

The core functionality of Connaisseur has always been centered around a standalone pod in which a web server runs and where all validation takes place. There can be multiple Connaisseur pods for the purpose of redundancy, so that Connaisseur is always available and load can be better balanced. Only recently, with the addition of a caching mechanism using an external Redis store, an additional pod was introduced to the core Connaisseur deployment.

The idea of this ADR is to discuss further distribution of functionalities into separate modules, away from the centralized standalone pod approach.

"},{"location":"adr/ADR-9_multi-pod/#considered-options","title":"Considered Options","text":""},{"location":"adr/ADR-9_multi-pod/#option-1-validator-pods","title":"Option 1: Validator Pods","text":""},{"location":"adr/ADR-9_multi-pod/#architecture-idea","title":"Architecture Idea","text":"

The different types of supported validators are split into their own pods, with a centralized management service that coordinates incoming requests to the right validator pods. The validator pods of the same type have their own service, so that multiple pods of the same validator can be run, in case of high load.

The management service will take over the following functionalities:

The validator pods/service will take over the following functionalities:

"},{"location":"adr/ADR-9_multi-pod/#advantages","title":"Advantages","text":""},{"location":"adr/ADR-9_multi-pod/#disadvantages","title":"Disadvantages","text":""},{"location":"adr/ADR-9_multi-pod/#option-2-alerting-pods","title":"Option 2: Alerting Pods","text":"

The alerting functionality is split from the main Connaisseur service, into its own. The management service will contact the alerting service, should alerts need to be sent out. The alerting service will take over the following functionalities:

Similar advantages and disadvantages apply as for option 1.

"},{"location":"adr/ADR-9_multi-pod/#option-3-single-pod","title":"Option 3: Single Pod","text":"

Everything stays as is. One pod for web server+validation and one pod for caching.

"},{"location":"adr/ADR-9_multi-pod/#decision-outcome","title":"Decision Outcome","text":""},{"location":"features/","title":"Overview","text":"

Besides Connaisseur's central functionality, several additional features are available, such as:

In combination, these features help to improve usability and might better support the DevOps workflow. Switching Connaisseur to detection mode and alerting on non-compliant images can for example avoid service interruptions while still benefitting from improved supply-chain security.

Feel free to propose new features that would make Connaisseur an even better experience

"},{"location":"features/alerting/","title":"Alerting","text":"

Connaisseur can send notifications on admission decisions to basically every REST endpoint that accepts JSON payloads.

"},{"location":"features/alerting/#supported-interfaces","title":"Supported interfaces","text":"

Slack, Opsgenie, Keybase and Microsoft Teams have pre-configured payloads that are ready to use. Additionally, there is a template matching the Elastic Common Schema in version 1.12. You can also use the existing payload templates as an example of how to model your own custom one. It is also possible to configure multiple interfaces for receiving alerts at the same time.

"},{"location":"features/alerting/#configuration-options","title":"Configuration options","text":"

Currently, Connaisseur supports alerting on either admittance of images, denial of images or both. These event categories can be configured independently of each other under the relevant category (i.e. admitRequest or rejectRequest):

Key Accepted values Default Required Description alerting.clusterIdentifier string \"not specified\" - Cluster identifier used in alert payload to distinguish between alerts from different clusters. alerting.<category>.receivers.[].template opsgenie, slack, keybase, msteams, ecs-1-12-0 or custom* - File in helm/alert_payload_templates/ to be used as alert payload template. alerting.<category>.receivers.[].receiverUrl string - URL of alert-receiving endpoint. alerting.<category>.receivers.[].priority int 3 - Priority of alert (to enable fitting Connaisseur alerts into alerts from other sources). alerting.<category>.receivers.[].customHeaders list[string] - - Additional headers required by alert-receiving endpoint. alerting.<category>.receivers.[].payloadFields subyaml - - Additional (yaml) key-value pairs to be appended to alert payload (as json). alerting.<category>.receivers.[].failIfAlertSendingFails bool false - Whether to make Connaisseur deny images if the corresponding alert cannot be successfully sent.

*basename of the custom template file in helm/alerting_payload_templates without file extension

Notes:

"},{"location":"features/alerting/#example","title":"Example","text":"

For example, if you would like to receive notifications in Keybase whenever Connaisseur admits a request to your cluster, your alerting configuration would look similar to the following snippet:

charts/connaisseur/values.yaml
alerting:\n  admitRequest:\n    receivers:\n      - template: keybase\n        receiverUrl: https://bots.keybase.io/webhookbot/<Your-Keybase-Hook-Token>\n
"},{"location":"features/alerting/#additional-notes","title":"Additional notes","text":""},{"location":"features/alerting/#creating-a-custom-template","title":"Creating a custom template","text":"

Along the lines of the templates that already exist you can easily define custom templates for other endpoints. The following variables can be rendered during runtime into the payload:

Referring to any of these variables in the templates works by Jinja2 notation (e.g. {{ timestamp }}). You can update your payload dynamically by adding payload fields in yaml representation in the payloadFields key which will be translated to JSON by Helm as is. If your REST endpoint requires particular headers, you can specify them as described above in customHeaders.
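
As a rough illustration, a minimal custom template could look like the following (only timestamp is taken from the variables mentioned above; the remaining field names are hypothetical placeholders for whatever your endpoint expects):

{\n  \"message\": \"Connaisseur admission event at {{ timestamp }}\",\n  \"source\": \"connaisseur\"\n}\n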

Payload fields in action

With payload fields, you can extend the same template depending on the receiver. For example, below Connaisseur's default Opsgenie template is used to send alerts that will be assigned to different users depending on whether the alert is for a successful admission or not. The payloadFields entries will be transformed to their JSON equivalents and overwrite the respective entries of the template.

charts/connaisseur/values.yaml
alerting:\n  admitRequest:\n    receivers:\n      - template: opsgenie\n        receiverUrl: https://api.eu.opsgenie.com/v2/alerts\n        priority: 4\n        customHeaders: [\"Authorization: GenieKey ABC-DEF\"]\n        payloadFields:\n          responders:\n            - username: \"someone@testcompany.de\"\n              type: user\n  rejectRequest:\n    receivers:\n      - template: opsgenie\n        receiverUrl: https://api.eu.opsgenie.com/v2/alerts\n        priority: 4\n        customHeaders: [\"Authorization: GenieKey ABC-DEF\"]\n        payloadFields:\n          responders:\n            - username: \"cert@testcompany.de\"\n              type: user\n

The resulting payload sent to the webhook endpoint will then contain the field content:

{\n    ...\n    \"responders\": [{\"type\": \"user\", \"username\": \"cert@testcompany.de\"}],\n    ...\n}\n

Feel free to make a PR to share with the community if you add new neat templates for other third parties

"},{"location":"features/automatic_child_approval/","title":"Automatic Child Approval","text":"

Per default, Connaisseur uses automatic child approval by which the child of a Kubernetes resource is automatically admitted without re-verification of the signature in order to avoid duplicate validation and handle inconsistencies with the image policy. This behavior can be configured or even disabled.

When automatic child approval is enabled, images that are deployed as part of already deployed objects (e.g. a Pod deployed as a child of a Deployment) are already validated and potentially mutated during admission of the parent. In consequence, the images of child resources are directly admitted without re-verification of the signature. This is done as the parent (and thus the child) has already been validated and might have been mutated, which would lead to duplicate validation and could cause image policy pattern mismatches. For example, consider a Deployment which contains Pods with image:tag that gets mutated to contain Pods with image@sha256:digest. Then a) the Pod would not need another validation, as the image was validated during the admittance of the Deployment, and b) if there exists a specific rule with pattern image:tag and another less specific rule with image*, then after mutating the Deployment the Pod would be falsely validated against image* instead of image:tag. To ensure the child resource is legitimate in this case, the parent resource is requested via the Kubernetes API and only the images it lists are accepted.
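
To make the pattern mismatch concrete, consider a hypothetical image policy like the sketch below (repository, tag and validator names are placeholders): the Deployment's original image:tag matches the first, more specific rule, while the mutated image@sha256:digest in the Pod would only match the second, broader rule.

application:\n  policy:\n  - pattern: \"docker.io/myorg/myapp:v1\"  # matches the Deployment's original image:tag\n    validator: strict_validator\n  - pattern: \"docker.io/myorg/*\"  # would (falsely) govern the mutated image@sha256:digest\n    validator: other_validator\n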

When automatic child approval is disabled, Connaisseur only validates and potentially mutates Pod resources.

There are trade-offs between the two behaviors: With automatic child approval, Connaisseur only verifies that the image reference in a child resource is the same as in the parent. This means that resources deployed prior to Connaisseur will never be validated until they are re-deployed, even if a corresponding Pod is restarted. Consequently, a restarting Pod with an expired signature would still be admitted. However, this avoids unexpected failures when restarting Pods, avoids inconsistencies with the image policy and reduces the number of validations and thus the load. Disabling automatic child approval, on the other hand, means that deployments with invalid images will be successful even though the Pods are denied.

The extension of the feature (disabling, caching) is currently under development to improve security without compromising on usability.

"},{"location":"features/automatic_child_approval/#configuration-options","title":"Configuration options","text":"

automaticChildApproval in charts/connaisseur/values.yaml under application.features supports the following values:

Key Default Required Description automaticChildApproval true - true or false; when false, Connaisseur will disable automatic child approval"},{"location":"features/automatic_child_approval/#example","title":"Example","text":"

In charts/connaisseur/values.yaml:

application:\n  features:\n    automaticChildApproval: true\n
"},{"location":"features/automatic_child_approval/#additional-notes","title":"Additional notes","text":""},{"location":"features/automatic_child_approval/#caching","title":"Caching","text":"

Connaisseur implements a caching mechanism, which allows bypassing verification for images that were already admitted recently. One might think that this obviates the need for automatic child approval. However, since an image may be mutated during verification, i.e. a tag being replaced with a digest, the child resource image to be validated could differ from the original one and could be governed by a different policy pattern that explicitly denies the specific digest. In that case, caching would change the outcome if we cached the validation result for both the original and the mutated image. As such, caching cannot replace automatic child approval with regard to skipping validations, even though both admit workload objects with images that were \"already admitted\".

"},{"location":"features/automatic_child_approval/#pod-only-validation","title":"Pod-only validation","text":"

If the resource validation mode is set to only validate Pods while automatic child approval is enabled, the combination becomes an allow-all validator for all workloads except individual Pods. As this is unlikely to be desired, Connaisseur treats automatic child approval as disabled if it is enabled in conjunction with a pod-only resource validation mode.

"},{"location":"features/automatic_unchanged_approval/","title":"Automatic Unchanged Approval","text":"

With the automatic unchanged approval feature enabled, Connaisseur automatically approves any resource that is updated without changing its image references. This is especially useful when handling long-lived resources with potentially out-of-sync signature data that still need to be scaled up and down.

An example: When dealing with a deployment that has an image reference image:tag, this reference is updated by Connaisseur during signature validation to image@sha256:123... to ensure the correct image is used by the deployment. When scaling the deployment up or down, the image reference image@sha256:123... is presented to Connaisseur, due to the updated definition. Over time the signature of the original image:tag may change and a new \"correct\" image becomes available at image@sha256:456.... If the deployment is then scaled up or down, Connaisseur will try to validate the image reference image@sha256:123... by looking for it inside the signature data it receives. Unfortunately, this reference may no longer be present due to signature updates, and thus the whole scaling operation will be denied.

With automatic unchanged approval enabled, this is no longer the case. The validation of image@sha256:123... will be skipped, as no different image is used.

This obviously has security implications, since it is no longer guaranteed that updated resources have fresh and up-to-date signatures, so use it with caution. For that reason the feature is also disabled by default. The creation of resources, on the other hand, remains unchanged and will enforce validation.

"},{"location":"features/automatic_unchanged_approval/#configuration-options","title":"Configuration options","text":"

automaticUnchangedApproval in charts/connaisseur/values.yaml under application.features supports the following values:

Key Default Required Description automaticUnchangedApproval false - true or false ; when true, Connaisseur will enable automatic unchanged approval"},{"location":"features/automatic_unchanged_approval/#example","title":"Example","text":"

In charts/connaisseur/values.yaml:

application:\n  features:\n    automaticUnchangedApproval: true\n
"},{"location":"features/caching/","title":"Caching","text":"

Connaisseur utilizes Redis as a cache. For each image reference the resolved digest or validation error is cached. This drastically boosts the performance of Connaisseur compared to older non-caching variants. The expiration for keys in the cache defaults to 30 seconds, but can be tweaked. If set to 0, no caching will be performed and the cache will not be deployed as part of Connaisseur.

"},{"location":"features/caching/#configuration-options","title":"Configuration options","text":"

cache in charts/connaisseur/values.yaml under application.features supports the following configuration:

Key Default Required Description expirySeconds 30 - Number of seconds for which validation results are cached. If set to 0, the Connaisseur deployment will omit the caching infrastructure in its entirety. cacheErrors true - Whether validation failures are cached. If set to false, Connaisseur will only cache successfully validated image digests instead of also caching errors."},{"location":"features/caching/#example","title":"Example","text":"

In charts/connaisseur/values.yaml:

application:\n  features:\n    cache:\n      expirySeconds: 15\n      cacheErrors: false\n
"},{"location":"features/detection_mode/","title":"Detection Mode","text":"

A detection mode is available in order to avoid interruptions of a running cluster, to support initial rollout or for testing purposes. In detection mode, Connaisseur admits all images to the cluster, but issues a warning1 and logs an error message for images that do not comply with the policy or in case of other unexpected failures:

kubectl run unsigned --image=docker.io/securesystemsengineering/testimage:unsigned\n> Warning: Unable to find signed digest for image docker.io/securesystemsengineering/testimage:unsigned. (not denied due to DETECTION_MODE)\n> pod/unsigned created\n

To activate the detection mode, set the detectionMode flag to true in charts/connaisseur/values.yaml.

"},{"location":"features/detection_mode/#configuration-options","title":"Configuration options","text":"

detectionMode in charts/connaisseur/values.yaml under application.features supports the following values:

Key Default Required Description detectionMode false - true or false; when detection mode is enabled, Connaisseur will warn but not deny requests with untrusted images."},{"location":"features/detection_mode/#example","title":"Example","text":"charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  features:\n    detectionMode: true\n
"},{"location":"features/detection_mode/#additional-notes","title":"Additional notes","text":""},{"location":"features/detection_mode/#failure-policy-vs-detection-mode","title":"Failure policy vs. detection mode","text":"

The detection mode is not to be confused with the failure policy (kubernetes.webhook.failurePolicy in charts/connaisseur/values.yaml) for the mutating admission controller: In detection mode, the Connaisseur service admits all requests to the cluster independent of the validation result, while the failure policy only takes effect when the service itself becomes unavailable. As such, both options are disjoint. While in the default configuration requests will be denied if either no valid image signature exists or the Connaisseur service is unavailable, setting failurePolicy to Ignore and detectionMode to true ensures that Connaisseur never blocks a request.
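
For illustration, a values.yaml sketch combining both settings so that Connaisseur never blocks a request:

kubernetes:\n  webhook:\n    failurePolicy: Ignore\napplication:\n  features:\n    detectionMode: true\n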

  1. The feature to send warnings to API clients as shown above was only introduced in Kubernetes v1.19. However, warnings are only surfaced by kubectl in stderr to improve usability. Except for testing purposes, the respective error messages should either be handled via the cluster's log monitoring solution or by making use of Connaisseur's alerting feature.\u00a0\u21a9

"},{"location":"features/metrics/","title":"Metrics","text":"

Connaisseur exposes metrics about usage of the /mutate endpoint and general information about the python process using Prometheus Flask Exporter through the /metrics endpoint.

This for example allows visualizing the number of allowed or denied resource requests.

"},{"location":"features/metrics/#example","title":"Example","text":"
# HELP python_gc_objects_collected_total Objects collected during gc\n# TYPE python_gc_objects_collected_total counter\npython_gc_objects_collected_total{generation=\"0\"} 4422.0\npython_gc_objects_collected_total{generation=\"1\"} 1866.0\npython_gc_objects_collected_total{generation=\"2\"} 0.0\n# HELP python_gc_objects_uncollectable_total Uncollectable object found during GC\n# TYPE python_gc_objects_uncollectable_total counter\npython_gc_objects_uncollectable_total{generation=\"0\"} 0.0\npython_gc_objects_uncollectable_total{generation=\"1\"} 0.0\npython_gc_objects_uncollectable_total{generation=\"2\"} 0.0\n# HELP python_gc_collections_total Number of times this generation was collected\n# TYPE python_gc_collections_total counter\npython_gc_collections_total{generation=\"0\"} 163.0\npython_gc_collections_total{generation=\"1\"} 14.0\npython_gc_collections_total{generation=\"2\"} 1.0\n# HELP python_info Python platform information\n# TYPE python_info gauge\npython_info{implementation=\"CPython\",major=\"3\",minor=\"10\",patchlevel=\"2\",version=\"3.10.2\"} 1.0\n# HELP process_virtual_memory_bytes Virtual memory size in bytes.\n# TYPE process_virtual_memory_bytes gauge\nprocess_virtual_memory_bytes 6.1161472e+07\n# HELP process_resident_memory_bytes Resident memory size in bytes.\n# TYPE process_resident_memory_bytes gauge\nprocess_resident_memory_bytes 4.595712e+07\n# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.\n# TYPE process_start_time_seconds gauge\nprocess_start_time_seconds 1.6436681112e+09\n# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.\n# TYPE process_cpu_seconds_total counter\nprocess_cpu_seconds_total 3.3\n# HELP process_open_fds Number of open file descriptors.\n# TYPE process_open_fds gauge\nprocess_open_fds 12.0\n# HELP process_max_fds Maximum number of open file descriptors.\n# TYPE process_max_fds gauge\nprocess_max_fds 1.048576e+06\n# HELP exporter_info Information about the Prometheus Flask exporter\n# TYPE exporter_info gauge\nexporter_info{version=\"0.18.7\"} 1.0\n# HELP http_request_duration_seconds Flask HTTP request duration in seconds\n# TYPE http_request_duration_seconds histogram\nhttp_request_duration_seconds_bucket{le=\"0.1\",method=\"POST\",path=\"/mutate\",status=\"200\"} 5.0\nhttp_request_duration_seconds_bucket{le=\"0.25\",method=\"POST\",path=\"/mutate\",status=\"200\"} 5.0\nhttp_request_duration_seconds_bucket{le=\"0.5\",method=\"POST\",path=\"/mutate\",status=\"200\"} 5.0\nhttp_request_duration_seconds_bucket{le=\"0.75\",method=\"POST\",path=\"/mutate\",status=\"200\"} 8.0\nhttp_request_duration_seconds_bucket{le=\"1.0\",method=\"POST\",path=\"/mutate\",status=\"200\"} 8.0\nhttp_request_duration_seconds_bucket{le=\"2.5\",method=\"POST\",path=\"/mutate\",status=\"200\"} 9.0\nhttp_request_duration_seconds_bucket{le=\"+Inf\",method=\"POST\",path=\"/mutate\",status=\"200\"} 9.0\nhttp_request_duration_seconds_count{method=\"POST\",path=\"/mutate\",status=\"200\"} 9.0\nhttp_request_duration_seconds_sum{method=\"POST\",path=\"/mutate\",status=\"200\"} 3.6445974350208417\n# HELP http_request_duration_seconds_created Flask HTTP request duration in seconds\n# TYPE http_request_duration_seconds_created gauge\nhttp_request_duration_seconds_created{method=\"POST\",path=\"/mutate\",status=\"200\"} 1.643668194758098e+09\n# HELP http_request_total Total number of HTTP requests\n# TYPE http_request_total counter\nhttp_request_total{method=\"POST\",status=\"200\"} 9.0\n# HELP http_request_created 
Total number of HTTP requests\n# TYPE http_request_created gauge\nhttp_request_created{method=\"POST\",status=\"200\"} 1.6436681947581613e+09\n# HELP http_request_exceptions_total Total number of HTTP requests which resulted in an exception\n# TYPE http_request_exceptions_total counter\n# HELP mutate_requests_total Total number of mutate requests\n# TYPE mutate_requests_total counter\nmutate_requests_total{allowed=\"False\",status_code=\"403\",warnings=\"False\"} 4.0\nmutate_requests_total{allowed=\"True\",status_code=\"202\",warnings=\"False\"} 5.0\n# HELP mutate_requests_created Total number of mutate requests\n# TYPE mutate_requests_created gauge\nmutate_requests_created{allowed=\"False\",status_code=\"403\"} 1.643760946491879e+09\nmutate_requests_created{allowed=\"True\",status_code=\"202\"} 1.6437609592007663e+09\n
"},{"location":"features/namespaced_validation/","title":"Namespaced Validation","text":"

Warning

Enabling namespaced validation allows roles with edit permissions on namespaces to disable validation for those namespaces.

Namespaced validation allows restricting validation to specific namespaces. Connaisseur will only verify trust of images deployed to the configured namespaces. This can greatly support initial rollout by stepwise extending the validated namespaces or excluding specific namespaces for which signatures are unfeasible.

Namespaced validation offers two modes:

The desired namespaces must be labelled accordingly, e.g. via:

# either\nkubectl label namespaces <namespace> securesystemsengineering.connaisseur/webhook=ignore\n# or\nkubectl label namespaces <namespace> securesystemsengineering.connaisseur/webhook=validate\n

Configure namespaced validation via the namespacedValidation in charts/connaisseur/values.yaml under application.features.

"},{"location":"features/namespaced_validation/#configuration-options","title":"Configuration options","text":"

namespacedValidation in charts/connaisseur/values.yaml supports the following keys:

Key Default Required Description mode - - ignore or validate; configure mode of exclusion to either ignore all namespaces with label securesystemsengineering.connaisseur/webhook set to ignore or only validate namespaces with the label set to validate.

If the namespacedValidation key is not set, all namespaces are validated.

"},{"location":"features/namespaced_validation/#example","title":"Example","text":"

In charts/connaisseur/values.yaml:

application:\n  features:\n    namespacedValidation:\n      mode: validate\n

Labelling target namespace to be validated:

kubectl label namespaces validateme securesystemsengineering.connaisseur/webhook=validate\n
"},{"location":"features/resource_validation_mode/","title":"Resource Validation Mode","text":"

Resource Validation Mode controls the admission behavior of Connaisseur, blocking only resources that match the configured type.

"},{"location":"features/resource_validation_mode/#configurations-options","title":"Configurations Options","text":"

Resource Validation Mode can take two different values:

Configure resource validation mode via the resourceValidationMode in charts/connaisseur/values.yaml under application.features.

The resourceValidationMode value defaults to all.
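
For example, setting the default explicitly in charts/connaisseur/values.yaml:

application:\n  features:\n    resourceValidationMode: all\n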

"},{"location":"validators/","title":"Overview","text":"

Connaisseur is built to be extendable and currently aims to support the following signing solutions:

Feel free to use any or a combination of all solutions. The integration with Connaisseur is detailed on the following pages. For advantages and disadvantages of each solution, please refer to the respective docs.

"},{"location":"validators/notaryv1/","title":"Notary (V1) / DCT","text":"

Notary (V11) works as an external service holding signatures and trust data of artifacts based on The Update Framework (TUF). Docker Content Trust (DCT) is a client implementation by Docker to manage such trust data for container images like signing images or verifying the corresponding signatures. It is part of the standard Docker CLI (docker) and for example provides the docker trust commands.

Using DCT, the trust data is per default pushed to the Notary server associated with the container registry. However, not every public container registry provides an associated Notary server, so support for DCT must be checked for the provider in question. Docker Hub, for example, runs an associated Notary server (notary.docker.io) and even uses it to serve trust data for the Docker Official Images. In fact, since Connaisseur's pre-built images are shared via the Connaisseur Docker Hub repository, its own trust data is maintained on Docker Hub's Notary server. Besides the public Notary instances, Notary can also be run as a private or even standalone instance. Harbor, for example, comes with an associated Notary instance.

Validating a container image via DCT requires a repository's public root key as well as fetching the repository's trust data from the associated Notary server. While DCT relies on trust on first use (TOFU) for repositories' public root keys, Connaisseur enforces manual pinning to a public root key that must be configured in advance.

"},{"location":"validators/notaryv1/#basic-usage","title":"Basic usage","text":"

In order to validate signatures using Notary, you will either need to create signing keys and signed images yourself or extract the public root key of other images, and configure Connaisseur via application.validators[*].trustRoots[*].key in charts/connaisseur/values.yaml to pin trust to those keys. Both are described below. There are also step-by-step instructions for using Notary in the getting started guide.

"},{"location":"validators/notaryv1/#creating-signing-key-pairs","title":"Creating signing key pairs","text":"

You can either create the root key manually or push an image with DCT enabled, upon which Docker will guide you through setting up the keys as described in the next section. In order to generate a public-private root key pair manually, you can use:

docker trust key generate root\n

You will be prompted for a password, the private key is automatically imported and a root.pub file is created in your current folder that contains your public key which should look similar to:

-----BEGIN PUBLIC KEY-----\nrole: root\n\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAELDzXwqie/P66R3gVpFDWMhxOyol5\nYWD/KWnAaEIcJVTYUR+21NJSZz0yL7KLGrv50H9kHai5WWVsVykOZNoZYQ==\n-----END PUBLIC KEY-----\n

You will only need the actual base64 encoded part for configuring the application.validators[*].trustRoots[*].key in charts/connaisseur/values.yaml of Connaisseur to validate your images. How to extract the public root key for any image is described below.

"},{"location":"validators/notaryv1/#creating-signatures","title":"Creating signatures","text":"

Before you can start validating images using the Notary (V1) validator, you'll first need an image which has been signed using DCT. The easiest way to do this is to push an image of your choice (e.g. busybox:stable) to your Docker Hub repository with DCT activated (either set the environment variable DOCKER_CONTENT_TRUST=1 or use the --disable-content-trust=false flag). If you haven't created any signatures for images in the current repository yet, you'll be asked to enter a passphrase for a root key and a targets key, which get generated on your machine. Have a look at the TUF documentation to read more about TUF roles and their meanings. If you already have these keys, just enter the required passphrase.

DOCKER_CONTENT_TRUST=1 docker push <your-repo>/busybox:stable\n
Output
The push refers to repository [<your-repo>/busybox]\n5b8c72934dfc: Pushed\nstable: digest: sha256:dca71257cd2e72840a21f0323234bb2e33fea6d949fa0f21c5102146f583486b size: 527\nSigning and pushing trust metadata\nYou are about to create a new root signing key passphrase. This passphrase\nwill be used to protect the most sensitive key in your signing system. Please\nchoose a long, complex passphrase and be careful to keep the password and the\nkey file itself secure and backed up. It is highly recommended that you use a\npassword manager to generate the passphrase and keep it safe. There will be no\nway to recover this key. You can find the key in your config directory.\nEnter passphrase for new root key with ID 5fb3e1e:\nRepeat passphrase for new root key with ID 5fb3e1e:\nEnter passphrase for new repository key with ID 6c2a04c:\nRepeat passphrase for new repository key with ID 6c2a04c:\nFinished initializing \"<your-repo>/busybox\"\n

The freshly generated keys are directly imported to the Docker client. Private keys reside in ~/.docker/trust/private and public trust data is added to ~/.docker/trust/tuf/. The created signature for your image is pushed to the public Docker Hub Notary (notary.docker.io). The private keys and password are required whenever a new version of the image is pushed with DCT activated.
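
To double-check that the trust data was created and pushed, you can inspect it (a quick sanity check using the image pushed above):

docker trust inspect --pretty <your-repo>/busybox:stable\n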

"},{"location":"validators/notaryv1/#getting-the-public-root-key","title":"Getting the public root key","text":"

Signature validation via Connaisseur requires the public root key to verify against as a trust anchor. But from where do you get this, especially for public images whose signatures you didn't create? We have created the get_root_key utility to extract the public root key of images. To use it, either use our pre-built image or build the docker image yourself via docker build -t get-public-root-key -f docker/Dockerfile.getRoot . and run it on the image to be verified:

# pre-built\ndocker run --rm docker.io/securesystemsengineering/get-public-root-key -i securesystemsengineering/testimage\n# or self-built\ndocker run --rm get-public-root-key -i securesystemsengineering/testimage\n
Output
KeyID: 76d211ff8d2317d78ee597dbc43888599d691dbfd073b8226512f0e9848f2508\nKey: -----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEsx28WV7BsQfnHF1kZmpdCTTLJaWe\nd0CA+JOi8H4REuBaWSZ5zPDe468WuOJ6f71E7WFg3CVEVYHuoZt2UYbN/Q==\n-----END PUBLIC KEY-----\n

The -i (--image) option is required and takes the image, for which you want the public key. There is also the -s (--server) option, which defines the Notary server that should be used and which defaults to notary.docker.io.

The public repository root key resides with the signature data in the Notary instance, so what the get_root_key utility does in the background is just fetching, locating and parsing the public repository root key for the given image.

"},{"location":"validators/notaryv1/#configuring-and-running-connaisseur","title":"Configuring and running Connaisseur","text":"

Now that you either created your own keys and signed images or extracted the public key of other images, you will need to configure Connaisseur to use those keys for validation. This is done via application.validators in charts/connaisseur/values.yaml. The corresponding entry should look similar to the following (using the extracted public key as trust root):

charts/connaisseur/values.yaml
- name: customvalidator\n  type: notaryv1\n  host: notary.docker.io\n  trustRoots:\n  - name: default\n    key: |  # THE DESIRED PUBLIC KEY BELOW\n      -----BEGIN PUBLIC KEY-----\n      MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEOXYta5TgdCwXTCnLU09W5T4M4r9f\n      QQrqJuADP6U7g5r9ICgPSmZuRHP/1AYUfOQW3baveKsT969EfELKj1lfCA==\n      -----END PUBLIC KEY-----\n

You also need to create a corresponding entry in the image policy via application.policy, for example:

charts/connaisseur/values.yaml
- pattern: \"docker.io/<REPOSITORY>/<IMAGE>:*\"  # THE DESIRED REPOSITORY\n  validator: customvalidator\n

After installation, you are ready to verify your images against your public key:

helm install connaisseur helm --atomic --create-namespace --namespace connaisseur\n

Connaisseur now rejects all images from the given repository that have not been signed based on the provided public key. A quick guide for installation and testing is available in getting started. It also provides a full step-by-step guide.

"},{"location":"validators/notaryv1/#understanding-validation","title":"Understanding validation","text":"

Using the simple pre-configuration shipped with Connaisseur, it is possible to test validation by deploying some pods:

kubectl run test-signed --image=docker.io/securesystemsengineering/testimage:signed\n> pod/test-signed created\n\nkubectl run test-unsigned --image=docker.io/securesystemsengineering/testimage:unsigned\n> Error from server: admission webhook \"connaisseur-svc.connaisseur.svc\" denied the request: Unable to find signed digest for image docker.io/securesystemsengineering/testimage:unsigned.\n# or in case of a signature with a different key\n> Error from server: admission webhook \"connaisseur-svc.connaisseur.svc\" denied the request: Failed to verify signature of trust data root.\n

How does Connaisseur validate these requests and convert the images with tags to digests? What happens in the background is that Connaisseur looks up trust data of the image in the root, snapshot, timestamp and targets files (in json format) by querying the API of the Notary server. Trust data syntax is validated against their known schemas and the files' signatures are validated against their respective public keys. The pinned root key is used for the root.json file that in turn contains the other keys which can then be trusted for validation of the remaining trust data (snapshot.json, timestamp.json, targets.json). Furthermore, Connaisseur gathers trust data of potential delegations linked in the targets file which can then be used to enforce delegations.
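
If you want to look at these TUF files yourself, DCT keeps a local copy of the trust data it has fetched; as a rough sketch (the exact path layout may differ between Docker versions):

ls ~/.docker/trust/tuf/docker.io/<your-repo>/busybox/metadata/\n# e.g. root.json  snapshot.json  targets.json\n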

At this point, Connaisseur is left with a validated set of trust data. Connaisseur filters the trust data for consistent signed digests that actually relate to the image under validation. In case exactly one trusted digest remains, Connaisseur modifies the admission request and admits it. Otherwise, admission is rejected.

While it is obvious to reject an image that does not exhibit a trusted digest, there is the special case of multiple trusted digests. This only occurs in some edge cases, but at this point Connaisseur cannot identify the right digest anymore and consequently has to reject.

For more information on TUF roles, please refer to TUF's documentation or checkout this introductory presentation on how the trust data formats work and are validated by Connaisseur.

"},{"location":"validators/notaryv1/#configuration-options","title":"Configuration options","text":"

.application.validators[*] in charts/connaisseur/values.yaml supports the following keys for Notary (V1) (refer to basics for more information on default keys):

Key Default Required Description name - See basics. type - notaryv1; the validator type must be set to notaryv1. host - URL of the Notary instance, in which the signatures reside, e.g. notary.docker.io. trustRoots[*].name - See basics. Setting the name of trust root to \"*\" implements a logical and and enables multiple signature verification under any trust root in the validator. trustRoots[*].key - See basics. TUF public root key. auth - - Authentication credentials for the Notary server in case the trust data is not public. auth.secretName - - (Preferred over username + password combination.) Name of a Kubernetes secret that must exist in Connaisseur namespace beforehand. Create a file secret.yaml containing: username: <user> password: <password> Run kubectl create secret generic <kube-secret-name> --from-file secret.yaml -n connaisseur to create the secret. auth.username - - Username to authenticate with2. auth.password - - Password or access token to authenticate with2. cert - - Self-signed certificate of the Notary instance, if used. Certificate must be supplied in .pem format.

.application.policy[*] in charts/connaisseur/values.yaml supports the following additional keys for Notary (V1) (refer to basics for more information on default keys):

Key Default Required Description with.delegations - - List of delegation names to enforce specific signers to be present. Refer to section on enforcing delegations for more information."},{"location":"validators/notaryv1/#example","title":"Example","text":"charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  validators:\n  - name: docker_essentials\n    type: notaryv1\n    host: notary.docker.io\n    trustRoots:\n    - name: sse\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEvtc/qpHtx7iUUj+rRHR99a8mnGni\n        qiGkmUb9YpWWTS4YwlvwdmMDiGzcsHiDOYz6f88u2hCRF5GUCvyiZAKrsA==\n        -----END PUBLIC KEY-----\n\n  policy:\n  - pattern: \"docker.io/securesystemsengineering/connaisseur:*\"\n    validator: docker_essentials\n    with:\n      key: sse\n      delegations:\n      - belitzphilipp\n      - starkteetje\n
"},{"location":"validators/notaryv1/#additional-notes","title":"Additional notes","text":""},{"location":"validators/notaryv1/#enforcing-delegations","title":"Enforcing delegations","text":"

Notary (V1) offers the functionality to delegate trust. To better understand this feature, it's best to have a basic understanding of the TUF key hierarchy, or more specifically the purpose of the root, targets and delegation keys. If you are more interested in this topic, please read the TUF documentation.

When creating the signatures of your docker images earlier, two keys were generated -- the root key and the targets key. The root key is the root of all trust and will be used whenever a new image repository is created and needs to be signed. It's also used to rotate all other kinds of keys, thus there is usually only one root key present. The targets key is needed for new signatures on one specific image repository, hence every image repository has its own targets key. Hierarchically speaking, the targets keys are below the root key, as the root key can be used to rotate the targets keys should they get compromised.

Delegations will now go one level deeper, meaning they can be used to sign individual image repositories and only need the targets key for rotation purposes, instead of the root key. Also delegation keys are not bound to individual image repositories, so they can be re-used multiple times over different image repositories. So in a sense they can be understood as keys for individual signers.

To create a delegation key run:

docker trust key generate <key-name>\n> Generating key for <key-name>...\n> Enter passphrase for new <key-name> key with ID 9deed25:\n> Repeat passphrase for new <key-name> key with ID 9deed25:\n> Successfully generated and loaded private key. Corresponding public key available: <current-directory>/<key-name>.pub\n

This delegation key now needs to be added as a signer to a respective image repository, like the busybox example above. In doing so, you'll be asked for the targets key.

docker trust signer add --key <key-name>.pub <key-name> <your-repo>/busybox\n> Adding signer \"<key-name>\" to <your-repo>/busybox...\n> Enter passphrase for repository key with ID b0014f8:\n> Successfully added signer: <key-name> to <your-repo>/busybox\n

If you create a new signature for the image, you'll be asked for your delegation key instead of the targets key, therefore creating a signature using the delegation.

DOCKER_CONTENT_TRUST=1 docker push <your-repo>/busybox:stable\n

Without further configuration, Connaisseur will accept all delegation signatures for an image that can ultimately be validated against the public root key. Connaisseur can enforce a certain signer/delegation (or multiple) for an image's signature via the with.delegations list inside an image policy rule. Simply add the signer's name to the list. You can also add multiple signer names to the list in which case Connaisseur will enforce that all delegations must have signed a matching image.

charts/connaisseur/values.yaml
application:\n  policy:\n  - pattern: \"<your-repo>/busybox:*\"\n    with:\n      delegations:\n      - <key-name>\n      - <other-key-name>\n

The delegation feature can be useful in complex organisations where certain people may be required to sign specific critical images. Another use case is to sign an image with delegation keys in various stages of your CI and enforce that certain checks were passed, i.e. enforcing the signatures of your linter, your security scanner and your software license compliance check.

"},{"location":"validators/notaryv1/#using-azure-container-registry","title":"Using Azure Container Registry","text":"

You need to provide credentials of an Azure identity having at least read access to the ACR (and thus to the associated Notary instance). Assuming you have the az CLI installed, you can create a Service Principal for this by running:

# Retrieve the ID of your registry\nREGISTRY_ID=$(az acr show --name <ACR-NAME>  --query 'id' -otsv)\n\n# Create a service principal with the Reader role on your registry\naz ad sp create-for-rbac --name \"<SERVICE-PRINCIPLE-NAME>\" --role Reader --scopes ${REGISTRY_ID}\n

Use the resulting applicationID as auth.username, the resulting password as auth.password and set <ACR>.azurecr.io as host in the charts/connaisseur/values.yaml and you're ready to go!
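
Putting it together, the corresponding validator entry in charts/connaisseur/values.yaml could look roughly like the following (validator name, placeholders and key are illustrative):

- name: acr_validator\n  type: notaryv1\n  host: <ACR>.azurecr.io\n  auth:\n    username: <applicationID>\n    password: <password>\n  trustRoots:\n  - name: default\n    key: |\n      -----BEGIN PUBLIC KEY-----\n      <your public root key>\n      -----END PUBLIC KEY-----\n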

  1. Notary does traditionally not carry the version number. However, in differentiation to the new Notary V2 project we decided to add a careful \"(V1)\" whenever we refer to the original project.\u00a0\u21a9

  2. There is no behavioral difference between configuring a Kubernetes secret or setting the credentials via username or password. In the latter case, a corresponding Kubernetes secret containing these credentials will be created automatically during deployment.\u00a0\u21a9\u21a9

"},{"location":"validators/notaryv2/","title":"Notary V2","text":"

TBD - Notary V2 has not yet been integrated with Connaisseur.

"},{"location":"validators/sigstore_cosign/","title":"sigstore / Cosign","text":"

sigstore is a Linux Foundation project that aims to provide public software signing and transparency to improve open source supply chain security. As part of the sigstore project, Cosign allows seamless container signing, verification and storage. You can read more about it here.

Connaisseur currently supports the elementary function of verifying Cosign-generated signatures based on the following types of keys:

We plan to expose further features of Cosign and sigstore in upcoming releases, so stay tuned!

"},{"location":"validators/sigstore_cosign/#basic-usage","title":"Basic usage","text":"

Getting started with Cosign is very well described in the docs. You can download Cosign from its GitHub repository. In short: After installation, a keypair is generated via:

cosign generate-key-pair\n

You will be prompted to set a password, after which a private (cosign.key) and public (cosign.pub) key are created. You can then use Cosign to sign a container image using:

# Here, ${IMAGE} is REPOSITORY/IMAGE_NAME:TAG\ncosign sign --key cosign.key ${IMAGE}\n

The created signature can be verified via:

cosign verify --key cosign.pub ${IMAGE}\n

To use Connaisseur with Cosign, configure a validator in charts/connaisseur/values.yaml with the generated public key (cosign.pub) as a trust root. The entry in .application.validators should look something like this (make sure to add your own public key to trust root default):

charts/connaisseur/values.yaml
- name: customvalidator\n  type: cosign\n  trustRoots:\n  - name: default\n    key: |  # YOUR PUBLIC KEY BELOW\n      -----BEGIN PUBLIC KEY-----\n      MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEvtc/qpHtx7iUUj+rRHR99a8mnGni\n      qiGkmUb9YpWWTS4YwlvwdmMDiGzcsHiDOYz6f88u2hCRF5GUCvyiZAKrsA==\n      -----END PUBLIC KEY-----\n

In .application.policy, add a pattern that matches your own repository and references the validator:

charts/connaisseur/values.yaml
- pattern: \"docker.io/securesystemsengineering/testimage:co*\"  # YOUR REPOSITORY\n  validator: customvalidator\n

After installation, you are ready to verify your images against your public key:

helm install connaisseur helm --atomic --create-namespace --namespace connaisseur\n

A quick guide for installation and testing is available in getting started. In case you just use the default values for the validator and image policy given above, you are able to successfully validate our signed testimage:

kubectl run signed --image=docker.io/securesystemsengineering/testimage:co-signed\n

And compare this to the unsigned image:

kubectl run unsigned --image=docker.io/securesystemsengineering/testimage:co-unsigned\n

Or signed with a different key:

kubectl run altsigned --image=docker.io/securesystemsengineering/testimage:co-signed-alt\n
"},{"location":"validators/sigstore_cosign/#configuration-options","title":"Configuration options","text":"

.application.validators[*] in charts/connaisseur/values.yaml supports the following keys for Cosign (refer to basics for more information on default keys):

Key Default Required Description name - See basics. type - cosign; the validator type must be set to cosign. trustRoots[*].name - See basics. trustRoots[*].key - if not using keyless See basics. Public key from cosign.pub file or KMS URI. See additional notes below. trustRoots[*].keyless.issuer - if not using a key or issuerRegex The OIDC provider URL which attests the identity. trustRoots[*].keyless.subject - if not using a key or subjectRegex The identity that created the keyless signature. Usually an email address. trustRoots[*].keyless.issuerRegex - if not using a key or issuer Regex for the OIDC provider URL which attests the identity. trustRoots[*].keyless.subjectRegex - if not using a key or subject Regex of the identity that created the keyless signature. Usually an email address. When setting this, make sure you control all subjects that can be matched. The pattern your.name@gmail.* also matches yourXname@gmail.com or your.name@gmail.attacker.com. host.rekor rekor.sigstore.dev - Rekor URL to use for validation against the transparency log (default sigstore instance is rekor.sigstore.dev). Setting host.rekor enforces a successful transparency log check to pass verification. See additional notes below. host.rekorPubkey Public key of rekor.sigstore.dev - Public key used to verify the signature of log entries from Rekor. host.fulcioCert Root and intermediate certificates belonging to fulcio.sigstore.dev - The root certificate belonging to the Fulcio CA which is used to create keyless signatures. host.ctLogPubkey Public key for the certificate transparency log provided by Sigstore - The public key needed for verifying Signed Certificate Timestamps (SCT). This will accept a single key. auth. - - Authentication credentials for registries with restricted access (e.g. private registries or rate limiting). See additional notes below. auth.secretName - - Name of a Kubernetes secret in the Connaisseur namespace that contains dockerconfigjson for registry authentication. See additional notes below. auth.k8sKeychain false - When true, pass the --k8s-keychain argument to cosign verify in order to use workload identities for authentication. See additional notes below. cert - - A TLS certificate in PEM format for private registries with self-signed certificates.

.application.policy[*] in charts/connaisseur/values.yaml supports the following additional keys and modifications for sigstore/Cosign (refer to basics for more information on default keys):

Key Default Required Description with.trustRoot - Setting the name of trust root to \"*\" enables verification of multiple trust roots. Refer to section on multi-signature verification for more information. with.threshold - - Minimum number of signatures required in case with.trustRoot is set to \"*\". Refer to section on multi-signature verification for more information. with.required [] - Array of required trust roots referenced by name in case with.trustRoot is set to \"*\". Refer to section on multi-signature verification for more information. with.verifyInTransparencyLog true - Whether to include the verification using the Rekor transparency log in the verification process. Refer to Transparency log verification for more information. with.verifySCT true - Whether to verify the signed certificate timestamps inside the transparency log."},{"location":"validators/sigstore_cosign/#example","title":"Example","text":"charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  validators:\n  - name: myvalidator\n    type: cosign\n    trustRoots:\n    - name: mykey\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEvtc/qpHtx7iUUj+rRHR99a8mnGni\n        qiGkmUb9YpWWTS4YwlvwdmMDiGzcsHiDOYz6f88u2hCRF5GUCvyiZAKrsA==\n        -----END PUBLIC KEY-----\n\n  policy:\n  - pattern: \"docker.io/securesystemsengineering/testimage:co-*\"\n    validator: myvalidator\n    with:\n      key: mykey\n
"},{"location":"validators/sigstore_cosign/#additional-notes","title":"Additional notes","text":""},{"location":"validators/sigstore_cosign/#authentication","title":"Authentication","text":"

When using a private registry for images and signature data, the credentials need to be provided to Connaisseur. There are two ways to do this.

"},{"location":"validators/sigstore_cosign/#dockerconfigjson","title":"dockerconfigjson","text":"

Create a dockerconfigjson Kubernetes secret in the Connaisseur namespace and pass the secret name to Connaisseur as auth.secretName. The secret can for example be created directly from your local config.json (for docker this resides in ~/.docker/config.json):

kubectl create secret generic my-secret \\\n  --from-file=.dockerconfigjson=path/to/config.json \\\n  --type=kubernetes.io/dockerconfigjson \\\n  -n connaisseur\n

The secret can also be generated directly from supplied credentials (which may differ from your local config.json), using:

kubectl create secret docker-registry my-secret \\\n  --docker-server=https://index.docker.io/v1/ \\\n  --docker-username='<your username>' \\\n  --docker-password='<your password>' \\\n  -n connaisseur\n

Info

At present, it seems to be necessary to suffix your registry server URL with /v1/. This may become unnecessary in the future.

In the above cases, the secret name in the Connaisseur configuration would be secretName: my-secret. It is possible to provide one Kubernetes secret with a config.json for authentication to multiple private registries and reference it in multiple validators.
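
For illustration, referencing such a secret from a cosign validator could look like the following sketch (validator name and key are placeholders):

charts/connaisseur/values.yaml
- name: privatevalidator  # hypothetical name\n  type: cosign\n  auth:\n    secretName: my-secret\n  trustRoots:\n  - name: default\n    key: |  # YOUR PUBLIC KEY BELOW\n      -----BEGIN PUBLIC KEY-----\n      ...\n      -----END PUBLIC KEY-----\n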

"},{"location":"validators/sigstore_cosign/#k8s_keychain","title":"k8s_keychain","text":"

Setting auth.k8sKeychain: true in the validator configuration passes the --k8s-keychain flag to cosign when performing image validation. Cosign then uses k8schain to pick up ambient registry credentials from the environment, which for example allows using workload identities with common cloud providers.

For example, when validating against an ECR private repository, the credentials of an IAM user allowed to perform actions ecr:GetAuthorizationToken, ecr:BatchGetImage, and ecr:GetDownloadUrlForLayer could be added to the secret connaisseur-env-secrets:

apiVersion: v1\nkind: Secret\ntype: Opaque\nmetadata:\n  name: connaisseur-env-secrets\n  ...\ndata:\n  AWS_ACCESS_KEY_ID: ***\n  AWS_SECRET_ACCESS_KEY: ***\n  ...\n

If k8sKeychain is set to true in the validator configuration, cosign will log into ECR at time of validation. See this cosign pull request for more details.
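
For illustration, a sketch of a validator relying on workload identities could look as follows (validator name and key are placeholders):

charts/connaisseur/values.yaml
- name: ecrvalidator  # hypothetical name\n  type: cosign\n  auth:\n    k8sKeychain: true\n  trustRoots:\n  - name: default\n    key: |  # YOUR PUBLIC KEY BELOW\n      -----BEGIN PUBLIC KEY-----\n      ...\n      -----END PUBLIC KEY-----\n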

"},{"location":"validators/sigstore_cosign/#kms-support","title":"KMS Support","text":"

Connaisseur supports Cosign's URI-based KMS integration to manage the signing and verification keys. Simply configure the trust root key value as the respective URI. In case of a Kubernetes secret, this would take the following form:

charts/connaisseur/values.yaml
- name: myvalidator\n  type: cosign\n  trustRoots:\n  - name: mykey\n    key: k8s://connaisseur/cosignkeys\n

For that specific case of a Kubernetes secret, make sure to place it in a suitable namespace and grant Connaisseur access to it1.
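
One way to create such a secret (assuming Cosign is installed and you have cluster access) is to let Cosign generate the key pair directly as a Kubernetes secret; the namespace and secret name must match the URI referenced above:

cosign generate-key-pair k8s://connaisseur/cosignkeys\n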

Most other KMS solutions require credentials for authentication, which must be provided via environment variables. Such environment variables can be injected into Connaisseur via deployment.envs in charts/connaisseur/values.yaml, e.g.:

charts/connaisseur/values.yaml
  envs:\n    VAULT_ADDR: myvault.com\n    VAULT_TOKEN: secrettoken\n
"},{"location":"validators/sigstore_cosign/#multi-signature-verification","title":"Multi-signature verification","text":"

Connaisseur can verify multiple signatures for a single image. It is possible to configure a threshold number and a specific set of required valid signatures. This allows implementing several advanced use cases (and policies):

Multi-signature verification is scoped to the trust roots specified within a referenced validator. Consider the following validator configuration:

charts/connaisseur/values.yaml
application:\n  validators:\n  - name: multicosigner\n    type: cosign\n    trustRoots:\n    - name: alice\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEusIAt6EJ3YrTHdg2qkWVS0KuotWQ\n        wHDtyaXlq7Nhj8279+1u/l5pZhXJPW8PnGRRLdO5NbsuM6aT7pOcP100uw==\n        -----END PUBLIC KEY-----\n    - name: bob\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE01DasuXJ4rfzAEXsURSnbq4QzJ6o\n        EJ2amYV/CBKqEhhl8fDESxsmbdqtBiZkDV2C3znIwV16SsJlRRYO+UrrAQ==\n        -----END PUBLIC KEY-----\n    - name: charlie\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEEHBUYJVrH+aFYJPuryEkRyE6m0m4\n        ANj+o/oW5fLRiEiXp0kbhkpLJR1LSwKYiX5Toxe3ePcuYpcWZn8Vqe3+oA==\n        -----END PUBLIC KEY-----\n

The trust roots alice, bob, and charlie are all included for verification in case .application.policy[*].with.trustRoot is set to \"*\" (note that this is a special flag, not a real wildcard):

charts/connaisseur/values.yaml
- pattern: \"*:*\"\n  validator: multicosigner\n  with:\n    trustRoot: \"*\"\n

As neither threshold nor required are specified, Connaisseur will require signatures of all trust roots (alice, bob, and charlie) and deny an image otherwise. If either threshold or required is specified, it takes precedence. For example, it is possible to configure a threshold number of required signatures via the threshold key:

charts/connaisseur/values.yaml
- pattern: \"*:*\"\n  validator: multicosigner\n  with:\n    trustRoot: \"*\"\n    threshold: 2\n

In this case, valid signatures of two or more out of the three trust roots are required for admittance. Using the required key, it is possible to enforce specific trusted roots:

charts/connaisseur/values.yaml
- pattern: \"*:*\"\n  validator: multicosigner\n  with:\n    trustRoot: \"*\"\n    required: [\"alice\", \"bob\"]\n

Now, only images with valid signatures of trust roots alice and bob are admitted. It is possible to combine threshold and required keys:

charts/connaisseur/values.yaml
- pattern: \"*:*\"\n  validator: multicosigner\n  with:\n    trustRoot: \"*\"\n    threshold: 3\n    required: [\"alice\", \"bob\"]\n

Thus, at least 3 valid signatures are required and alice and bob must be among those.

"},{"location":"validators/sigstore_cosign/#transparency-log-verification","title":"Transparency log verification","text":"

The sigstore project contains a transparency log called Rekor that provides an immutable, tamper-resistant ledger for recording signed metadata. While it is possible to run your own instance, a public instance of Rekor is available at rekor.sigstore.dev. With Connaisseur it is possible to verify that a signature was added to the transparency log via the validator's host.rekor key (see Cosign docs). When the host.rekor key is set, e.g. to rekor.sigstore.dev for the public instance, Connaisseur requires that a valid signature was added to the transparency log and denies an image otherwise. Furthermore, the host.rekor key allows switching to private Rekor instances, e.g. for usage with keyless signatures. To disable this feature, the with.verifyInTransparencyLog key can be set to false. This is useful for example if the signature was created without an upload to the transparency log in the first place.
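
As an illustration, a policy rule that skips the transparency log check for images validated by a given validator could look like this sketch (pattern and validator name are placeholders):

charts/connaisseur/values.yaml
- pattern: \"<your-repo>/*:*\"\n  validator: myvalidator\n  with:\n    verifyInTransparencyLog: false\n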

"},{"location":"validators/sigstore_cosign/#keyless-signatures","title":"Keyless signatures","text":"

Keyless signatures are a feature of Sigstore that allows signing container images without the need to manage a private key. Instead, the signatures are bound to identities attested by OIDC providers and use ephemeral keys, short-lived certificates and a transparency log under the hood to provide similar security guarantees. Further information on this topic can be found here.

When using keyless signatures, the trustRoots[*].keyless field can be used to specify the issuer and subject of the keyless signature. The issuer is the OIDC provider that attests the identity and the subject is the identity that created the keyless signature, usually an email address. Both fields are also available as regular expressions. The following example shows how to configure a validator for keyless signatures:

charts/connaisseur/values.yaml
- name: keylessvalidator\n  type: cosign\n  trustRoots:\n  - name: keyless\n    keyless:\n      issuerRegex: \"github\"\n      subject: \"philipp.belitz@securesystems.de\"\n

In case the signature was created using the Sigstore infrastructure, nothing else needs to be configured since Connaisseur will automatically retrieve all needed public keys and certificates from the Sigstore infrastructure. If the signature was created using a private infrastructure, the host.fulcioCert field can be used to specify the root certificate belonging to the Fulcio CA which is used to create the keyless signatures. The host.fulcioCert field should contain the root certificate in PEM format. The same applies to the host.ctLogPubkey field which can be used to specify the public key needed for verifying Signed Certificate Timestamps (SCT) and the host.rekorPubkey field which can be used to specify the public key used to verify the signature of log entries from Rekor.

charts/connaisseur/values.yaml
name: default\ntype: cosign\nhost:\n  rekorPubkey: |\n    -----BEGIN PUBLIC KEY-----\n    ...\n    -----END PUBLIC KEY-----\n  ctLogPubkey: | \n    -----BEGIN PUBLIC KEY-----\n    ...\n    -----END PUBLIC KEY-----\n  fulcioCert: |\n    -----BEGIN CERTIFICATE-----\n    ...\n    -----END CERTIFICATE-----\n  ...\n
  1. The corresponding role and rolebinding should look similar to the following:

    apiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n  name: connaisseur-kms-role\n  namespace: connaisseur  # namespace of respective k8s secret, might have to change that\n  labels:\n    app.kubernetes.io/name: connaisseur\nrules:\n- apiGroups: [\"*\"]\n  resources: [\"secrets\"]\n  verbs: [\"get\"]\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n  name: connaisseur-kms-rolebinding\n  namespace: connaisseur  # namespace of respective k8s secret, might have to change that\n  labels:\n    app.kubernetes.io/name: connaisseur\nsubjects:\n- kind: ServiceAccount\n  name: connaisseur-serviceaccount\n  namespace: connaisseur  # Connaisseur's namespace, might have to change that\nroleRef:\n  kind: Role\n  name: connaisseur-kms-role\n  apiGroup: rbac.authorization.k8s.io\n
    Make sure to adjust it as needed.\u00a0\u21a9

"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome to Connaisseur","text":"

A Kubernetes admission controller to integrate container image signature verification and trust pinning into a cluster.

"},{"location":"#what-is-connaisseur","title":"What is Connaisseur?","text":"

Connaisseur ensures integrity and provenance of container images in a Kubernetes cluster. To do so, it intercepts resource creation or update requests sent to the Kubernetes cluster, identifies all container images and verifies their signatures against pre-configured public keys. Based on the result, it either accepts or denies those requests.

Connaisseur is developed under three core values: Security, Usability, Compatibility. It is built to be extendable and currently aims to support the following signing solutions:

It provides several additional features such as:

Feel free to reach out via GitHub Discussions!

"},{"location":"#quick-start","title":"Quick start","text":"

Getting started to verify image signatures is only a matter of minutes:

Warning

Only try this out on a test cluster as deployments with unsigned images will be blocked.

Connaisseur comes pre-configured with public keys for its own repository and Docker's official images (official images can be found here). It can be fully configured via charts/connaisseur/values.yaml. For a quick start, clone the Connaisseur repository:

git clone https://github.com/sse-secure-systems/connaisseur.git\n

Next, install Connaisseur via Helm:

helm install connaisseur helm --atomic --create-namespace --namespace connaisseur\n

Once installation has finished, you are good to go. Successful verification can be tested via official Docker images like hello-world:

kubectl run hello-world --image=docker.io/hello-world\n

Or our signed testimage:

kubectl run demo --image=docker.io/securesystemsengineering/testimage:signed\n

Both will return pod/<name> created. However, when trying to deploy an unsigned image:

kubectl run demo --image=docker.io/securesystemsengineering/testimage:unsigned\n

Connaisseur denies the request and returns an error (...) Unable to find signed digest (...). Since the images above are signed using Docker Content Trust, you can inspect the trust data using docker trust inspect --pretty <image-name>.

To uninstall Connaisseur use:

helm uninstall connaisseur --namespace connaisseur\n

Congrats you just validated the first images in your cluster! To get started configuring and verifying your own images and signatures, please follow our setup guide.

"},{"location":"#how-does-it-work","title":"How does it work?","text":"

Integrity and provenance of container images deployed to a Kubernetes cluster can be ensured via digital signatures. On a very basic level, this requires two steps:

  1. Signing container images after building
  2. Verifying the image signatures before deployment

Connaisseur aims to solve step two. This is achieved by implementing several validators, i.e. configurable signature verification modules for different signing solutions (e.g. Notary V1). While the detailed security considerations mainly depend on the applied solution, Connaisseur in general verifies the signature over the container image content against a trust anchor or trust root (e.g. public key) and thus let's you ensure that images have not been tampered with (integrity) and come from a valid source (provenance).

"},{"location":"#trusted-digests","title":"Trusted digests","text":"

But what is actually verified? Container images can be referenced in two different ways: via their registry, repository and image name (<registry>/<repository>/<image name>) followed by either a tag or a digest:

While the tag is a mutable, human-readable description, the digest is an immutable, inherent property of the image, namely the SHA256 hash of its content. This also means that a tag can correspond to varying digests, whereas digests are unique for each image. The container runtime (e.g. containerd) compares the image content with the received digest before spinning up the container. As a result, Connaisseur just needs to make sure that only trusted digests (signed by a trusted entity) are passed to the container runtime. Depending on how an image for deployment is referenced, it will either attempt to translate the tag to a trusted digest or validate whether the digest is trusted. How the digest is signed in detail, where the signature is stored, what it is verified against and how different image distribution and updating attacks are mitigated depends on the signature solution.
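
For example, the same image can be referenced in both ways (the digest value below is just a placeholder):

# referenced by (mutable) tag\ndocker.io/securesystemsengineering/testimage:signed\n\n# referenced by (immutable) digest\ndocker.io/securesystemsengineering/testimage@sha256:<digest>\n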

"},{"location":"#mutating-admission-controller","title":"Mutating admission controller","text":"

How to validate images before deployment to a cluster? The Kubernetes API is the fundamental fabric behind the control plane. It allows operators and cluster components to communicate with each other and, for example, query, create, modify or delete Kubernetes resources. Each request passes through several phases such as authentication and authorization before it is persisted to etcd. Among those phases are two steps of admission control: mutating and validating admission. In those phases the API sends admission requests to configured webhooks (admission controllers) and receives admission responses (admit, deny, or modify). Connaisseur uses a mutating admission webhook, as requests are not only admitted or denied based on the validation result but might also require modification of contained images referenced by tags to trusted digests. The webhook is configured to only forward resource creation or update requests to the Connaisseur service running inside the cluster, since only deployments of images to the cluster are relevant for signature verification. This allows Connaisseur to intercept requests before deployment and based on the validation:
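
For illustration only, a heavily simplified sketch of such a webhook configuration is shown below; the actual resource produced by the Helm chart differs, and the names, path and rules here are assumptions:

apiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nmetadata:\n  name: connaisseur-webhook  # illustrative name\nwebhooks:\n- name: connaisseur-svc.connaisseur.svc\n  clientConfig:\n    service:\n      name: connaisseur-svc\n      namespace: connaisseur\n      path: /mutate  # assumed path\n  rules:\n  - operations: [\"CREATE\", \"UPDATE\"]\n    apiGroups: [\"*\"]\n    apiVersions: [\"*\"]\n    resources: [\"pods\", \"deployments\", \"replicasets\", \"daemonsets\", \"statefulsets\", \"jobs\", \"cronjobs\"]\n  admissionReviewVersions: [\"v1\"]\n  sideEffects: None\n  failurePolicy: Fail\n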

"},{"location":"#image-policy-and-validators","title":"Image policy and validators","text":"

Now, how does Connaisseur process admission requests? A newly received request is first inspected for container image references that need to be validated (1). The resulting list of images referenced by tag or digest is passed to the image policy (2). The image policy matches the identified images to the configured validators and corresponding trust roots (e.g. public keys) to be used for verification. Image policy and validator configuration form the central logic behind Connaisseur and are described in detail under basics. Validation is the step where the actual signature verification takes place (3). For each image, the required trust data is retrieved from external sources such as a Notary server, registry or sigstore transparency log and validated against the pre-configured trust root (e.g. public key). This forms the basis for deciding on the request (4). In case no trusted digest is found for any of the images (i.e. either no signed digest available or no signature matching the public key), the whole request is denied. Otherwise, Connaisseur translates all image references in the original request to trusted digests and admits it (5).

"},{"location":"#compatibility","title":"Compatibility","text":"

Supported signature solutions and configuration options are documented under validators.

Connaisseur supports Kubernetes v1.16 and higher. It is expected to be compatible with most Kubernetes services and has been successfully tested with:

All registry interactions use the OCI Distribution Specification that is based on the Docker Registry HTTP API V2 which is the standard for all common image registries. For using Notary (V1) as a signature solution, only some registries provide the required Notary server attached to the registry with e.g. shared authentication. Connaisseur has been tested with the following Notary (V1) supporting image registries:

In case you identify any incompatibilities, please create an issue

"},{"location":"#versions","title":"Versions","text":"

The latest stable version of Connaisseur is available on the master branch. Releases follow semantic versioning standards to facilitate compatibility. For each release, a signed container image tagged with the version is published in the Connaisseur Docker Hub repository. Latest developments are available on the develop branch, but should be considered unstable and no pre-built container image is provided.

"},{"location":"#development","title":"Development","text":"

Connaisseur is open source and open development. We try to make major changes transparent via Architecture Decision Records (ADRs) and announce developments via GitHub Discussions. Information on responsible disclosure of vulnerabilities and tracking of past findings is available in the Security Policy. Bug reports should be filed as GitHub issues to share status and potential fixes with other users.

We hope to get as many direct contributions and insights from the community as possible to steer further development. Please refer to our contributing guide, create an issue or reach out to us via GitHub Discussions

"},{"location":"#wall-of-fame","title":"Wall of fame","text":"

Thanks to all the fine people directly contributing commits/PRs to Connaisseur:

Big shout-out also to all who support the project via issues, discussions and feature requests

"},{"location":"#resources","title":"Resources","text":"

Several resources are available to learn more about Connaisseur and related topics:

"},{"location":"CODE_OF_CONDUCT/","title":"Contributor Covenant Code of Conduct","text":""},{"location":"CODE_OF_CONDUCT/#our-pledge","title":"Our pledge","text":"

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

"},{"location":"CODE_OF_CONDUCT/#our-standards","title":"Our standards","text":"

Examples of behavior that contributes to creating a positive environment include:

Examples of unacceptable behavior by participants include:

"},{"location":"CODE_OF_CONDUCT/#our-responsibilities","title":"Our responsibilities","text":"

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

"},{"location":"CODE_OF_CONDUCT/#scope","title":"Scope","text":"

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

"},{"location":"CODE_OF_CONDUCT/#enforcement","title":"Enforcement","text":"

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at connaisseur@securesystems.dev. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

"},{"location":"CODE_OF_CONDUCT/#attribution","title":"Attribution","text":"

This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

For answers to common questions about this code of conduct, see https://www.contributor-covenant.org/faq

"},{"location":"CONTRIBUTING/","title":"Contributing","text":"

We hope to steer the development of Connaisseur based on demand from the community and are excited about direct contributions to improve the tool!

The following guide is meant to help you get started with contributing to Connaisseur. In case of questions or feedback, feel free to reach out to us.

We are committed to positive interactions between all contributors of the project. To ensure this, please follow the Code of Conduct in all communications.

"},{"location":"CONTRIBUTING/#discuss-problems-raise-bugs-and-propose-feature-ideas","title":"Discuss problems, raise bugs and propose feature ideas","text":"

We are happy you made it here! In case you want to share your feedback, need support, want to discuss issues from using Connaisseur in your own projects, have ideas for new features or just want to connect with us, please reach out via GitHub Discussions. If you want to raise any bugs you found or make a feature request, feel free to create an issue with an informative title and description.

While issues are a great way to discuss problems, bugs and new features, a direct proposal via a pull request can sometimes say more than a thousand words. So be bold and contribute to the code as described in the next section!

In case you require a more private communication, you can reach us via connaisseur@securesystems.dev.

"},{"location":"CONTRIBUTING/#contribute-to-source-code","title":"Contribute to source code","text":"

The following steps will help you make code contributions to Connaisseur and ensure good code quality and workflow. This includes the following steps:

  1. Set up your environment: Set up your local environment to best interact with the code. Further information is given below.
  2. Make atomic changes: Changes should be atomic. As such, pull requests should contain only few commits, and each commit should only fix one issue or implement one feature, with a concise commit message.
  3. Test your changes: Test any changes locally for code quality and functionality and add new tests for any additional code. How to test is described below.
  4. Create semantic, conventional and signed commits: Any commits should follow a simple semantic convention to help structure the work on Connaisseur. The convention is described below. For security reasons and since integrity is at the core of this project, code merged into master must be signed. How we achieve this is described below.
  5. Create pull requests: We consider code review central to quality and security of code. Therefore, a pull request (PR) to the develop branch should be created for each contribution. It will be reviewed, and potential improvements may be discussed within the PR. After approval, changes will be merged and moved to the master branch with the next release.
"},{"location":"CONTRIBUTING/#set-up-the-environment","title":"Set up the environment","text":"

To start contributing, you will need to set up your local environment. First step is to get the source code by cloning this repository:

git clone git@github.com:sse-secure-systems/connaisseur.git\n
In order to review the effects of your changes, you should create your own Kubernetes cluster and install Connaisseur. This is described in the getting started. A simple starting point may be a minikube cluster with e.g. a Docker Hub repository for maintaining your test images and trust data.

In case you make changes to the Connaisseur container image itself or code for that matter, you need to re-build the image and install it locally for testing. This requires a few steps:

  1. Get the Connaisseur image ready:
    • Using minikube, the local environment needs to be configured to use the minikube Docker daemon before building the image:
      1. Run eval $(minikube docker-env).
      2. Run make docker.
    • Using kind, the image needs to be built first and then loaded onto the kind node:
      1. Run make docker.
      2. Run IMAGE_REPO=$(yq e '.kubernetes.deployment.image.repository' charts/connaisseur/values.yaml) && VERSION=$(yq e '.appVersion' charts/connaisseur/Chart.yaml).
      3. Run kind load docker-image ${IMAGE_REPO}:v${VERSION}.
  2. Install Connaisseur via make install-dev.
"},{"location":"CONTRIBUTING/#test-changes","title":"Test changes","text":"

Tests and linting are important to ensure code quality, functionality and security. We therefore aim to keep the code coverage high. We are running several automated tests in the CI pipeline. Application code is tested via Go's testing package and linted via golangci-lint and gosec. When making changes to the application code, please directly provide tests for your changes.

Changes can and should be tested locally via running make test.

Linters can be run locally via

docker run --rm -v $(pwd):/app -w /app/cmd/connaisseur securego/gosec gosec ./...\ndocker run --rm -v $(pwd):/app -w /app golangci/golangci-lint golangci-lint run -v --timeout=10m --skip-dirs=\"test\"\n

from the root folder.

This helps identify bugs in changes before pushing.

INFO We believe that testing should not only ensure functionality, but also cover expected security issues like injections, and we appreciate it if security tests are added along with new functionality.

Besides the unit testing and before any PR can be merged, an integration test is carried out whereby:

You can also run this integration test on a local cluster. There is a more detailed guide on how to do that.

If you are changing documentation, you can simply inspect your changes locally via:

docker run --rm -it -p 8000:8000 -v ${PWD}:/docs squidfunk/mkdocs-material\n
"},{"location":"CONTRIBUTING/#signed-commits-and-pull-requests","title":"Signed commits and pull requests","text":"

All changes to the develop and master branch must be signed which is enforced via branch protection. This can be achieved by only fast-forwarding signed commits or signing of merge commits by a contributor. Consequently, we appreciate but do not require that commits in PRs are signed.

A general introduction into signing commits can for example be found in the With Blue Ink blog. For details on setting everything up for GitHub, please follow the steps in the Documentation.

Once you have generated your local GPG key, added it to your GitHub account and informed Git about it, you are set up to create signed commits. We recommend configuring Git to sign commits by default via:

git config commit.gpgsign true\n
This avoids forgetting to use the -S flag when committing changes. In case it happens anyway, you can always rebase to sign earlier commits:
git rebase -i master\n
You can then mark all commits that need to be signed as edit and sign them without any other changes via:
git commit -S --amend --no-edit\n
Finally, you force push to overwrite the unsigned commits via git push -f.

"},{"location":"CONTRIBUTING/#semantic-and-conventional-commits","title":"Semantic and conventional commits","text":"

For Connaisseur, we want to use semantic and conventional commits to ensure good readability of code changes. A good introduction to the topic can be found in this blog post.

Commit messages should consist of header, body and footer. Such a commit message takes the following form:

git commit -m \"<header>\" -m \"<body>\" -m \"<footer>\"\n
The three parts should consist of the following:

We want to use the following common types in the header:

A complete commit message could therefore look as follows:

git commit -m \"fix: extend registry validation regex to custom ports\" -m \"The current regex used for validation of the image name does not allow using non-default ports for the image repository name. The regex is extended to optionally provide a port number.\" -m \"Fix #3\"\n

"},{"location":"CONTRIBUTING/#enjoy","title":"Enjoy!","text":"

Please be bold and contribute!

"},{"location":"RELEASING/","title":"Releasing","text":"

Releasing a new version of Connaisseur includes the following steps:

"},{"location":"RELEASING/#check-readiness","title":"Check readiness","text":"

Before starting the release, make sure everything is ready and in order:

"},{"location":"RELEASING/#add-new-tag","title":"Add new tag","text":"

Before adding the new tag, make sure the Connaisseur version is updated in the charts/connaisseur/Chart.yaml and applies the semantic versioning guidelines: fixes increment PATCH version, non-breaking features increment MINOR version, breaking features increment MAJOR version. Then add the tag (on develop branch) with git tag v<new-connaisseur-version> (e.g. git tag v1.4.6).

"},{"location":"RELEASING/#create-changelog","title":"Create changelog","text":"

A changelog text, including all new commits from one version to another, can be automatically generated using the scripts/changelogger.py script. Run python scripts/changelogger.py > CHANGELOG.md to get the changelog between the two latest tags. If you want to have a diff between certain commits, you have to set the two ref1 and ref2 variables. If you e.g. want to get the changelog from v1.4.5 to 09fd2379cf2374ba9fdc8a84e56d959a176f1569, then you have to run python scripts/changelogger.py --ref1=\"v1.4.5\" --ref2=\"09fd2379cf2374ba9fdc8a84e56d959a176f1569\" > CHANGELOG.md, storing the changelog in a new file CHANGELOG.md (we won't keep this file, it's just for convenience). This file will include all new commits, categorized by their type (e.g. fix, feat, docs, etc.), but may include some mistakes, so take a manual look to check that everything is in order.

Things to look out for:

"},{"location":"RELEASING/#create-pr","title":"Create PR","text":"

Create a PR from develop to master, putting the changelog text as description and wait for someone to approve it.

"},{"location":"RELEASING/#push-new-connaisseur-image","title":"Push new Connaisseur image","text":"

When the PR is approved and ready to be merged, first push the new Connaisseur image to Docker Hub, as it will be used in the release pipeline. Run make docker to build the new version of the Docker image and then DOCKER_CONTENT_TRUST=1 docker image push securesystemsengineering/connaisseur:<new-version> to push and sign it. You'll obviously need the right private key and passphrase for doing so. You also need to be in the list of valid signers for Connaisseur. If you are not yet listed (you can check with docker trust inspect securesystemsengineering/connaisseur --pretty), you'll need to contact Philipp Belitz.

"},{"location":"RELEASING/#merge-pr","title":"Merge PR","text":"

Run git checkout master to switch to the master branch and then run git merge develop to merge develop in. Then run git push origin master --tags to publish all changes and the new tag.

"},{"location":"RELEASING/#create-release-page","title":"Create release page","text":"

Finally a release on GitHub should be created. Go to the Connaisseur releases page, then click Draft a new release. There you have to enter the new tag version, a title (usually Version <new-version>) and the changelog text as description. Then click Publish release and you're done! (You can delete the CHANGELOG.md file now. Go and do it.)

"},{"location":"RELEASING/#check-released-artifacts","title":"Check released artifacts","text":"

To ensure the release worked as intended, check the following artifacts are present:

"},{"location":"RELEASING/#shoot-trouble","title":"Shoot trouble","text":"

Be aware that this isn't a completely fleshed out, highly available, hyper scalable and fully automated workflow, backed up by state-of-the-art blockchain technology and 24/7 incident response team coverage with global dominance! Not yet at least. For now things will probably break, so make sure that in the end everything looks to be in order and the new release can be seen on the GitHub page, tagged with Latest release and pointing to the correct version of Connaisseur. Good Luck!

For breaking changes, the upgrade integration test will fail (as intended), blocking the automatic release. In that case, you can manually trigger the publish job with the expected Connaisseur version.

"},{"location":"SECURITY/","title":"Security Policy","text":""},{"location":"SECURITY/#supported-versions","title":"Supported versions","text":"

While all known vulnerabilities in the Connaisseur application are listed below and we intend to fix vulnerabilities as soon as we become aware of them, both application and OS packages of the Connaisseur image may become vulnerable over time. We therefore suggest frequently updating to the latest version of Connaisseur or rebuilding the image from source yourself. At present, we only support the latest version. We stick to semantic versioning, so unless the major version changes, updating Connaisseur should never break your installation.

"},{"location":"SECURITY/#known-vulnerabilities","title":"Known vulnerabilities","text":"Title Affected versions Fixed version Description initContainers not validated \u2264 1.3.0 1.3.1 Prior to version 1.3.1 Connaisseur did not validate initContainers which allowed deploying unverified images to the cluster. Ephemeral containers not validated \u2264 3.1.1 3.2.0 Prior to version 3.2.0 Connaisseur did not validate ephemeral containers (introduced in k8s 1.25) which allowed deploying unverified images to the cluster. Regex Denial of Service for Notary delegations \u2264 3.3.0 3.3.1 Prior to version 3.3.1 Connaisseur did input validation on the names of delegations in an unsafe manner: An adversary with the ability to alter Notary responses, in particular an evil Notary server, could have provided Connaisseur with an invalid delegation name that would lead to catastrophic backtracking during a regex matching. Only users of type notaryv1 validators are affected as Connaisseur will only perform this kind of input validation in the context of a Notary validation. If you mistrust the Docker Notary server, the default configuration is vulnerable as it contains a notaryv1 validator with the root keys of both Connaisseur and the library of official Docker images."},{"location":"SECURITY/#reporting-a-vulnerability","title":"Reporting a vulnerability","text":"

We are very grateful for reports on vulnerabilities discovered in the project, specifically as it is intended to increase security for the community. We aim to investigate and fix these as soon as possible. Please submit vulnerabilities to connaisseur@securesystems.dev.

"},{"location":"basics/","title":"Basics","text":"

In the following, we aim to lay the foundation for Connaisseur's core concepts and explain how to configure and administer it.

"},{"location":"basics/#admission-control-validators-and-image-policy","title":"Admission control, validators and image policy","text":"

Connaisseur works as a mutating admission controller. It intercepts all CREATE and UPDATE resource requests for Pods, Deployments, ReplicationControllers, ReplicaSets, DaemonSets, StatefulSets, Jobs, and CronJobs and extracts all image references for validation.

Per default, Connaisseur uses automatic child approval by which the child of a Kubernetes resource is automatically admitted without re-verification of the signature in order to avoid duplicate validation and handle inconsistencies with the image policy. Essentially, this is done since an image that is deployed as part of an already deployed object (e.g. a Pod deployed as a child of a Deployment) has already been validated and potentially mutated during admission of the parent. More information and configuration options can be found in the feature documentation for automatic child approval.

Validation itself relies on two core concepts: image policy and validators. A validator is a set of configuration options required for validation like the type of signature, public key to use for verification, path to signature data, or authentication. The image policy defines a set of rules which maps different images to those validators. This is done via glob matching of the image name, which for example allows using different validators for different registries, repositories, images or even tags. This is especially useful when using public or external images from other entities like Docker's official images, or when using different keys in a more complex development team.

Note

Typically, the public key of a known entity is used to validate the signature over an image's content in order to ensure integrity and provenance. However, other ways to implement such trust pinning exist and as a consequence we refer to all types of trust anchors in a generalized form as trust roots.

"},{"location":"basics/#using-connaisseur","title":"Using Connaisseur","text":"

Some general administration tasks like deployment or uninstallation when using Connaisseur are described in this section.

"},{"location":"basics/#requirements","title":"Requirements","text":"

Using Connaisseur requires a Kubernetes cluster, Helm and, if installing from source, Git to be installed and set up.

"},{"location":"basics/#get-the-codechart","title":"Get the code/chart","text":"

Download the Connaisseur resources required for installation either by cloning the source code via Git or by adding the chart repository directly via Helm.

Git repoHelm chart

The Connaisseur source code can be cloned directly from GitHub and includes the application and Helm charts in a single repository:

git clone https://github.com/sse-secure-systems/connaisseur.git\n

The Helm chart can be added by:

helm repo add connaisseur https://sse-secure-systems.github.io/connaisseur/charts\n
"},{"location":"basics/#configure","title":"Configure","text":"

The configuration of Connaisseur is done entirely in charts/connaisseur/values.yaml. The upper kubernetes section offers some typical Kubernetes configuration options like image version or resources. Noteworthy configurations are:

The actual configuration consists of the application.validators and application.policy (image policy) sections. These are described in detail below, and for initial steps it is instructive to follow the getting started guide. Other features are described on the respective pages.

Connaisseur ships with a pre-configuration that does not need any adjustments for testing. However, validating your own images requires additional configuration.

"},{"location":"basics/#deploy","title":"Deploy","text":"

Install Connaisseur via Helm or Kubernetes manifests:

Git repoHelm chartKubernetes manifests

Install Connaisseur by using the Helm template definition files in the helm directory:

helm install connaisseur helm --atomic --create-namespace --namespace connaisseur\n

Install Connaisseur using the default configuration from the chart repository:

helm install connaisseur connaisseur/connaisseur --atomic --create-namespace --namespace connaisseur\n

To customize Connaisseur, craft a values.yaml according to your needs and apply:

helm install connaisseur connaisseur/connaisseur --atomic --create-namespace --namespace connaisseur -f values.yaml\n

Installing Connaisseur via Kubernetes manifests requires first rendering the respective resources. If the repo was cloned, simply render the templates via:

helm template helm -n connaisseur > deploy.yaml\n
Next, the admission controller is deployed step-wise:

  1. Create target namespace:
    kubectl create namespace connaisseur\n
  2. Setup the preliminary hook:
    kubectl apply -f deploy.yaml -l 'app.kubernetes.io/component=connaisseur-init' -n connaisseur\n
  3. Deploy core resources
    kubectl apply -f deploy.yaml -l 'app.kubernetes.io/component=connaisseur-core' -n connaisseur\n
  4. Arm the webhook
    kubectl apply -f deploy.yaml -l 'app.kubernetes.io/component=connaisseur-webhook' -n connaisseur\n

This deploys Connaisseur to its own namespace called connaisseur. The installation itself may take a moment, as the installation order of the Connaisseur components is critical: The admission webhook for intercepting requests can only be applied when the Connaisseur pods are up and ready to receive admission requests.

"},{"location":"basics/#check","title":"Check","text":"

Once everything is installed, you can check whether all the pods are up by running kubectl get all -n connaisseur:

kubectl get all -n connaisseur\n
Output
NAME                                          READY   STATUS    RESTARTS   AGE\npod/connaisseur-deployment-78d8975596-42tkw   1/1     Running   0          22s\npod/connaisseur-deployment-78d8975596-5c4c6   1/1     Running   0          22s\npod/connaisseur-deployment-78d8975596-kvrj6   1/1     Running   0          22s\n\nNAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE\nservice/connaisseur-svc   ClusterIP   10.108.220.34   <none>        443/TCP   22s\n\nNAME                                     READY   UP-TO-DATE   AVAILABLE   AGE\ndeployment.apps/connaisseur-deployment   3/3     3            3           22s\n\nNAME                                                DESIRED   CURRENT   READY   AGE\nreplicaset.apps/connaisseur-deployment-78d8975596   3         3         3       22s\n
"},{"location":"basics/#use","title":"Use","text":"

To use Connaisseur, simply try running some images or apply a deployment. In case you use the pre-configuration, you could for example run the following commands:

kubectl run demo --image=docker.io/securesystemsengineering/testimage:unsigned\n> Error from server: admission webhook \"connaisseur-svc.connaisseur.svc\" denied the request (...).\n\nkubectl run demo --image=docker.io/securesystemsengineering/testimage:signed\n> pod/demo created\n
"},{"location":"basics/#upgrade","title":"Upgrade","text":"

A running Connaisseur instance can be updated by a Helm upgrade of the current release:

Git repoHelm chartKubernetes manifests

Adjust configuration in charts/connaisseur/values.yaml as required and upgrade via:

helm upgrade connaisseur helm -n connaisseur --wait\n

Adjust your local configuration file (e.g. values.yaml) as required and upgrade via:

helm upgrade connaisseur connaisseur/connaisseur -n connaisseur --wait -f values.yaml\n

Adjust your local Kubernetes manifests (e.g. deploy.yaml) as required and upgrade via delete and reinstall:

kubectl delete -f deploy.yaml -n connaisseur\nkubectl apply -f deploy.yaml -l 'app.kubernetes.io/component=connaisseur-init' -n connaisseur\nkubectl apply -f deploy.yaml -l 'app.kubernetes.io/component=connaisseur-core' -n connaisseur\nkubectl apply -f deploy.yaml -l 'app.kubernetes.io/component=connaisseur-webhook' -n connaisseur\n

Note

Rolling upgrades as with Helm might also be possible, but likely require further configuration. Insights are welcome

"},{"location":"basics/#delete","title":"Delete","text":"

Just like for installation, Helm can also be used to delete Connaisseur from your cluster:

Git repoHelm chartKubernetes manifests

Uninstall via Helm:

helm uninstall connaisseur -n connaisseur\n

Uninstall via Helm:

helm uninstall connaisseur -n connaisseur\n

Delete via manifests:

kubectl delete -f deploy.yaml -n connaisseur\n

In case uninstallation fails or problems occur during subsequent installation, you can manually remove all resources:

kubectl delete all,mutatingwebhookconfigurations,clusterroles,clusterrolebindings,configmaps,imagepolicies,secrets,serviceaccounts,customresourcedefinitions -lapp.kubernetes.io/instance=connaisseur\nkubectl delete namespaces connaisseur\n

Connaisseur for example also installs a CustomResourceDefinition imagepolicies.connaisseur.policy that validates its configuration. In case of major releases, the configuration structure might change, which can cause installation to fail, and you might have to delete it manually.

"},{"location":"basics/#makefile","title":"Makefile","text":"

As an alternative to using Helm, you can also use the Makefile for installing, deleting and more. Here are the available commands:

"},{"location":"basics/#detailed-configuration","title":"Detailed configuration","text":"

All configuration is done in the charts/connaisseur/values.yaml. The configuration of features is only described in the corresponding section. Any configuration of the actual application is done below the application key, so when below we write validators, this actually corresponds to the application.validators key in the charts/connaisseur/values.yaml.

"},{"location":"basics/#validators","title":"Validators","text":"

The validators are configured in the validators field, which defines a list of validator objects.

A validator defines what kind of signatures are to be expected, how signatures are to be validated, against which trust root and how to access the signature data. For example, images might be signed with Docker Content Trust and reside in a private registry. Thus the validator would need to specify notaryv1 as type, the notary host and the required credentials.

The specific validator type should be chosen based on the use case. A list of supported validator types can be found here. All validators share a similar structure for configuration. For specifics and additional options, please review the dedicated page of the validator type.

There is a special behavior when a validator or one of the trust roots is named default: should an image policy rule not specify a validator or trust root to use, the one named default will be used instead. This also means there can only be one validator named default, and within a single validator there can only be one trust root called default.
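
To illustrate, with a validator named default that has a trust root named default, a policy rule can omit both fields (the pattern below is only an example):

charts/connaisseur/values.yaml
- pattern: \"docker.io/myorg/*:*\"  # no validator or trust root specified\n  # -> the validator named \"default\" and its trust root named \"default\" are used\n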

Connaisseur comes with a few validators pre-configured, including one for Docker's official images. The pre-configured validators can be removed. However, to avoid Connaisseur failing its own validation in case you remove the securesystemsengineering_official key, make sure to also exclude Connaisseur from validation, either via the static allow validator or namespaced validation. The special case of static validators used to simply allow or deny images without verification is described below.

"},{"location":"basics/#configuration-options","title":"Configuration options","text":"

.validators[*] in charts/connaisseur/values.yaml supports the following keys:

Key Default Required Description name - Name of the validator, which is referenced in the image policy. It must consist of lower case alphanumeric characters or '-'. If the name is default, it will be used if no validator is specified. type - Type of the validator, e.g. notaryv1 or cosign, which is dependent on the signing solution in use. trustRoots - List of trust anchors to validate the signatures against. In practice, this is typically a list of public keys. trustRoots[*].name - Name of the trust anchor, which is referenced in the image policy. If the name is default, it will be used if no key is specified. trustRoots[*].key - Value of the trust anchor, most commonly a PEM encoded public key. auth - - Credentials that should be used in case authentication is required for validation. Details are provided on validator-specific pages.

Further configuration fields specific to the validator type are described in the respective section.

"},{"location":"basics/#example","title":"Example","text":"charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  validators:\n  - name: default\n    type: notaryv1\n    host: notary.docker.io\n    trustRoots:\n    - name: default\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEsx28WV7BsQfnHF1kZmpdCTTLJaWe\n        d0CA+JOi8H4REuBaWSZ5zPDe468WuOJ6f71E7WFg3CVEVYHuoZt2UYbN/Q==\n        -----END PUBLIC KEY-----\n    auth:\n      username: superuser\n      password: lookatmeimjumping\n  - name: myvalidator\n    type: cosign\n    trustRoots:\n    - name: mykey\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEIFXO1w6oj0oI2Fk9SiaNJRKTiO9d\n        ksm6hFczQAq+FDdw0istEdCwcHO61O/0bV+LC8jqFoomA28cT+py6FcSYw==\n        -----END PUBLIC KEY-----\n
"},{"location":"basics/#static-validators","title":"Static validators","text":"

Static validators are a special type of validator that does not validate any signatures. Depending on whether the approve value is true or false, they either allow or deny all images for which they are specified as validator. This, for example, allows implementing an allowlist or denylist.

"},{"location":"basics/#configuration-options_1","title":"Configuration options","text":"Key Default Required Description name - Name of the validator, which will be used to reference it in the image policy. type - static; value has to be static for a static validator. approve - true or false to admit or deny all images."},{"location":"basics/#example_1","title":"Example","text":"charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  validators:\n  - name: allow\n    type: static\n    approve: true\n  - name: deny\n    type: static\n    approve: false\n
"},{"location":"basics/#image-policy","title":"Image policy","text":"

The image policy is defined in the policy field and acts as a list of rule objects to determine which image should be validated by which validator (and potentially some further configurations).

For each image in the admission request, only a single rule in the image policy will apply: the one with the most specific matching pattern field. This is determined by the following algorithm:

  1. A given image is matched against all rule patterns.
  2. All matching patterns are compared to one another to determine the most specific one (see below). Only two patterns are compared at a time; the more specific one then is compared to the next one and so forth. Specificity is determined as follows:
    1. Patterns are split into components (delimited by \"/\"). The pattern that has a higher number of components wins (is considered more specific).
    2. Should the two patterns that are being compared have an equal number of components, the longest common prefix between each pattern component and the corresponding image component is calculated (for this purpose, image identifiers are also split into components). The pattern with the longest common prefix in one component, starting from the leftmost, wins.
    3. Should all longest common prefixes of all components between the two compared patterns be equal, the pattern with a longer component, starting from the leftmost, wins.
    4. The rule whose pattern has won all comparisons is considered the most specific rule.
  3. Return the most specific rule.

Should an image match none of the rules, Connaisseur will deny the request and raise an error. This deny-per-default behavior can be changed via a catch-all rule *:*, for example combined with the static allow validator in order to admit otherwise unmatched images.
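
A sketch of such a catch-all rule, assuming a static allow validator named allow is configured as in the static validators example above, could be:

application:\n  policy:\n  - pattern: \"*:*\"  # catch-all rule admitting otherwise unmatched images\n    validator: allow\n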

In order to perform the actual validation, Connaisseur will call the validator specified in the selected rule and pass the image name and potential further configuration to it. The reference to validator and exact trust root is resolved in the following way:

  1. The validator with name (validators[*].name) equal to the validator value in the selected rule is chosen. If no validator is specified, the validator with name default is used if it exists.
  2. Of that validator, the trust root (e.g. public key) is chosen whose name (.validators.trustRoots[*].name) matches the policy's trust root string (with.trustRoot). If no trust root is specified, the trust root with name default is used if it exists. Specifying \"*\" enables signature verification under any trust root in the validator.

Let's review the pattern and validator matching with a minimal example. We consider the following validator and policy configuration (most fields have been omitted for clarity):

charts/connaisseur/values.yaml
application:\n  validators:\n  - name: default     # validator 1\n    trustRoots:\n    - name: default   # key 1\n      key: |\n        ...\n  - name: myvalidator # validator 2\n    trustRoots:\n    - name: default   # key 2\n      key: |\n        ...\n    - name: mykey     # key 3\n      key: |\n        ...\n\n  policy:\n  - pattern: \"*:*\"                      # rule 1\n  - pattern: \"docker.io/myrepo/*:*\"     # rule 2\n    validator: myvalidator\n  - pattern: \"docker.io/myrepo/myimg:*\" # rule 3\n    validator: myvalidator\n    with:\n      trustRoot: mykey\n

Now deploying the following images we would get the matchings:
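
For example (tags chosen arbitrarily, based on the configuration above): an image docker.io/myrepo/myimg:v1 matches all three patterns, rule 3 is the most specific and thus validator 2 with key 3 (mykey) is used; an image docker.io/myrepo/someimg:v1 matches rules 1 and 2, rule 2 wins and validator 2 with its default trust root (key 2) is used; any other image, e.g. docker.io/library/nginx:latest, only matches rule 1 and is validated by validator 1 with key 1.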

Connaisseur ships with a few rules pre-configured. There are two rules that should remain intact in some form in order not to brick the Kubernetes cluster:

"},{"location":"basics/#configuration-options_2","title":"Configuration options","text":"

.policy[*] in charts/connaisseur/values.yaml supports the following keys:

Key Default Required Description pattern - Globbing pattern to match an image name against. validator default - Name of a validator in the validators list. If not provided, the validator with name default is used if it exists. with - - Additional parameters to use for a validator. See more specifics in the validator section. with.trustRoot default - Name of a trust root, which is specified within the referenced validator. If not provided, the trust root with name default is used if it exists. Setting this to \"*\" implements a logical or and enables signature verification under any trust root in the validator. with.mode mutate - Mode of operation, which specifies whether or not image references should be mutated after successful image validation. If set to mutate, Connaisseur mutates image references to include digests. If set to insecureValidateOnly, Connaisseur will not mutate the image references. This leaves the risk of a malicious registry serving a different image under the signed tag."},{"location":"basics/#example_2","title":"Example","text":"charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  policy:\n  - pattern: \"*:*\"\n  - pattern: \"docker.io/myrepo/*:*\"\n    validator: myvalidator\n    with:\n      trustRoot: mykey\n  - pattern: \"docker.io/myrepo/deniedimage:*\"\n    validator: deny\n  - pattern: \"docker.io/myrepo/allowedimage:v*\"\n    validator: allow\n    with:\n      mode: insecureValidateOnly\n
"},{"location":"basics/#common-examples","title":"Common examples","text":"

Let's look at some useful examples for the validators and policy configuration. These can serve as a first template beyond the pre-configuration or might just be instructive to understand validators and policies.

We assume your repository is docker.io/myrepo and a public key has been created. In case this repository is private, authentication would have to be added to the respective validator, for example via:

charts/connaisseur/values.yaml
    auth:\n      secretName: k8ssecret\n

The Kubernetes secret would have to be created separately according to the validator documentation.
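
Purely as an illustration (the exact secret format depends on the validator type, so please follow the respective validator documentation), such a secret might be created along these lines; server, username and password are placeholders:

kubectl create secret docker-registry k8ssecret --docker-server=registry.example.com --docker-username=myuser --docker-password=mypass --namespace connaisseur\n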

"},{"location":"basics/#case-only-validate-own-images-and-deny-all-others","title":"Case: Only validate own images and deny all others","text":"

This is likely the most common case in simple settings, in which only self-built images are used and validated against your own public key:

charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  validators:\n  - name: allow\n    type: static\n    approve: true\n  - name: default\n    type: notaryv1  # or e.g. 'cosign'\n    host: notary.docker.io  # only required in case of notaryv1\n    trustRoots:\n    - name: default\n      key: |  # your public key below\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEvtc/qpHtx7iUUj+rRHR99a8mnGni\n        qiGkmUb9YpWWTS4YwlvwdmMDiGzcsHiDOYz6f88u2hCRF5GUCvyiZAKrsA==\n        -----END PUBLIC KEY-----\n  - name: dockerhub_basics\n    type: notaryv1\n    host: notary.docker.io\n    trustRoots:\n    - name: securesystemsengineering_official\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEsx28WV7BsQfnHF1kZmpdCTTLJaWe\n        d0CA+JOi8H4REuBaWSZ5zPDe468WuOJ6f71E7WFg3CVEVYHuoZt2UYbN/Q==\n        -----END PUBLIC KEY-----\n\n  policy:\n  - pattern: \"*:*\"\n  - pattern: \"registry.k8s.io/*:*\"\n    validator: allow\n  - pattern: \"docker.io/securesystemsengineering/*:*\"\n    validator: dockerhub_basics\n    with:\n      trustRoot: securesystemsengineering_official\n
"},{"location":"basics/#case-only-validate-own-images-and-deny-all-others-faster","title":"Case: Only validate own images and deny all others (faster)","text":"

This configuration achieves the same as the one above, but is faster as trust data only needs to be requested for images in your repository:

charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  validators:\n  - name: allow\n    type: static\n    approve: true\n  - name: deny\n    type: static\n    approve: false\n  - name: default\n    type: notaryv1  # or e.g. 'cosign'\n    host: notary.docker.io  # only required in case of notaryv1\n    trustRoots:\n    - name: default\n      key: |  # your public key below\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEvtc/qpHtx7iUUj+rRHR99a8mnGni\n        qiGkmUb9YpWWTS4YwlvwdmMDiGzcsHiDOYz6f88u2hCRF5GUCvyiZAKrsA==\n        -----END PUBLIC KEY-----\n  - name: dockerhub_basics\n    type: notaryv1\n    host: notary.docker.io\n    trustRoots:\n    - name: securesystemsengineering_official\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEsx28WV7BsQfnHF1kZmpdCTTLJaWe\n        d0CA+JOi8H4REuBaWSZ5zPDe468WuOJ6f71E7WFg3CVEVYHuoZt2UYbN/Q==\n        -----END PUBLIC KEY-----\n\n  policy:\n  - pattern: \"*:*\"\n    validator: deny\n  - pattern: \"docker.io/myrepo/*:*\"\n  - pattern: \"registry.k8s.io/*:*\"\n    validator: allow\n  - pattern: \"docker.io/securesystemsengineering/*:*\"\n    validator: dockerhub_basics\n    with:\n      trustRoot: securesystemsengineering_official\n

The *:* rule could also have been omitted as Connaisseur denies unmatched images. However, explicit is better than implicit.

"},{"location":"basics/#case-only-validate-docker-hub-official-images-and-deny-all-others","title":"Case: Only validate Docker Hub official images and deny all others","text":"

In case only validated Docker Hub official images should be admitted to the cluster:

charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  validators:\n  - name: allow\n    type: static\n    approve: true\n  - name: deny\n    type: static\n    approve: false\n  - name: dockerhub_basics\n    type: notaryv1\n    host: notary.docker.io\n    trustRoots:\n    - name: docker_official\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEOXYta5TgdCwXTCnLU09W5T4M4r9f\n        QQrqJuADP6U7g5r9ICgPSmZuRHP/1AYUfOQW3baveKsT969EfELKj1lfCA==\n        -----END PUBLIC KEY-----\n    - name: securesystemsengineering_official\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEsx28WV7BsQfnHF1kZmpdCTTLJaWe\n        d0CA+JOi8H4REuBaWSZ5zPDe468WuOJ6f71E7WFg3CVEVYHuoZt2UYbN/Q==\n        -----END PUBLIC KEY-----\n\n  policy:\n  - pattern: \"*:*\"\n    validator: deny\n  - pattern: \"docker.io/library/*:*\"\n    validator: dockerhub_basics\n    with:\n      trustRoot: docker_official\n  - pattern: \"registry.k8s.io/*:*\"\n    validator: allow\n  - pattern: \"docker.io/securesystemsengineering/*:*\"\n    validator: dockerhub_basics\n    with:\n      trustRoot: securesystemsengineering_official\n
"},{"location":"basics/#case-only-validate-docker-hub-official-images-and-allow-all-others","title":"Case: Only validate Docker Hub official images and allow all others","text":"

In case only Docker Hub official images should be validated while all others are simply admitted:

charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  validators:\n  - name: allow\n    type: static\n    approve: true\n  - name: deny\n    type: static\n    approve: false\n  - name: dockerhub_basics\n    type: notaryv1\n    host: notary.docker.io\n    trustRoots:\n    - name: docker_official\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEOXYta5TgdCwXTCnLU09W5T4M4r9f\n        QQrqJuADP6U7g5r9ICgPSmZuRHP/1AYUfOQW3baveKsT969EfELKj1lfCA==\n        -----END PUBLIC KEY-----\n    - name: securesystemsengineering_official\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEsx28WV7BsQfnHF1kZmpdCTTLJaWe\n        d0CA+JOi8H4REuBaWSZ5zPDe468WuOJ6f71E7WFg3CVEVYHuoZt2UYbN/Q==\n        -----END PUBLIC KEY-----\n\n  policy:\n  - pattern: \"*:*\"\n    validator: allow\n  - pattern: \"docker.io/library/*:*\"\n    validator: dockerhub_basics\n    with:\n      trustRoot: docker_official\n  - pattern: \"registry.k8s.io/*:*\"\n    validator: allow\n  - pattern: \"docker.io/securesystemsengineering/*:*\"\n    validator: dockerhub_basics\n    with:\n      trustRoot: securesystemsengineering_official\n
"},{"location":"basics/#case-directly-admit-own-images-and-deny-all-others","title":"Case: Directly admit own images and deny all others","text":"

As a matter of fact, Connaisseur can also be used to restrict the allowed registries and repositories without signature validation:

charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  validators:\n  - name: allow\n    type: static\n    approve: true\n  - name: deny\n    type: static\n    approve: false\n\n  policy:\n  - pattern: \"*:*\"\n    validator: deny\n  - pattern: \"docker.io/myrepo/*:*\"\n    validator: allow\n  - pattern: \"registry.k8s.io/*:*\"\n    validator: allow\n  - pattern: \"docker.io/securesystemsengineering/*:*\"\n    validator: allow\n
  1. This is not to be confused with the detection mode feature: In detection mode, the Connaisseur service admits all requests to the cluster independently of the validation result, while the failure policy only takes effect when the service itself becomes unavailable.\u00a0\u21a9

  2. During the first mutation, Connaisseur converts the image tag to its digest. Read more in the overview of Connaisseur \u21a9

  3. In those cases, consider using security annotations via kubernetes.deployment.annotations or pod security policies kubernetes.deployment.podSecurityPolicy if available.\u00a0\u21a9

"},{"location":"getting_started/","title":"Getting Started","text":"

This guide offers a simple default configuration for setting up Connaisseur using public infrastructure and verifying your first self-signed images. You will learn how to:

  1. Create signing key pairs
  2. Configure Connaisseur
  3. Deploy Connaisseur
  4. Test Connaisseur (and sign images)
  5. Cleanup

In the tutorial, you can choose to use either Notary (V1) via Docker Content Trust (DCT) or Cosign from the sigstore project as a signing solution, referred to as DCT and Cosign from here on. Furthermore, we will work with public images on Docker Hub as a container registry and a Kubernetes test cluster, which might for example be MicroK8s or minikube for local setups. However, feel free to bring your own solutions for registry or cluster and check out our notes on compatibility.

In general, Connaisseur can be fully configured via charts/connaisseur/values.yaml, so feel free to take a look and try for yourself. For more advanced usage in more complex cases (e.g. authentication, multiple registries, signers, validators, additional features), we strongly advise reviewing the following pages:

In case you need help, feel free to reach out via GitHub Discussions

Info

As more than only public keys can be used to validate integrity and provenance of an image, we refer to these trust anchors in a generalized form as trust roots.

"},{"location":"getting_started/#requirements","title":"Requirements","text":"

You should have a Kubernetes test cluster running. Furthermore, docker, git, helm and kubectl should be installed and usable, i.e. having run docker login and switched to the appropriate kubectl context.
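
A quick sanity check of these prerequisites could look like this (the context name is just a placeholder):

docker login\nkubectl config use-context my-test-cluster\nkubectl get nodes\nhelm version\n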

If you want to contribute to Connaisseur, then you will also need a Golang v1.22 installation.

"},{"location":"getting_started/#create-signing-key-pairs","title":"Create signing key pairs","text":"

Before getting started with Connaisseur, we need to create our signing key pair. This obviously depends on the signing solution. Here, we will walk you through it for DCT and Cosign. In case you have worked with Docker Content Trust or Cosign before and already possess key pairs, you can skip this step (how to retrieve a previously created DCT key is described here). Otherwise, pick your preferred signing solution below.

In case you are uncertain which solution to go with, you might be better off starting with DCT, as it comes packaged with docker. Cosign, on the other hand, is somewhat more straightforward to use.

Docker Content TrustCosign

General usage of DCT is described in the docker documentation. Detailed information on all configuration options for Connaisseur is provided in the Notary (V1) validator section. For now, we just need to generate a public-private root key pair via:

docker trust key generate root\n

You will be prompted for a password; the private key is automatically imported and a root.pub file containing your public key is created in your current folder. It should look similar to:

-----BEGIN PUBLIC KEY-----\nrole: root\n\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAELDzXwqie/P66R3gVpFDWMhxOyol5\nYWD/KWnAaEIcJVTYUR+21NJSZz0yL7KLGrv50H9kHai5WWVsVykOZNoZYQ==\n-----END PUBLIC KEY-----\n

We will only need the actual base64 encoded part of the key later.
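
If you want to strip the extra role: header right away, a small convenience sketch like the following should work (it simply drops the role line and the blank line from root.pub):

sed '/^role:/d;/^$/d' root.pub\n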

Usage of Cosign is very well described in the docs. You can download Cosign from its GitHub repository. Detailed information on all configuration options for Connaisseur is provided in the Cosign validator section. For now, we just need to generate a key pair via:

cosign generate-key-pair\n

You will be prompted to set a password, after which a private (cosign.key) and public (cosign.pub) key are created. In the next step, we will need the public key that should look similar to:

-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEvtc/qpHtx7iUUj+rRHR99a8mnGni\nqiGkmUb9YpWWTS4YwlvwdmMDiGzcsHiDOYz6f88u2hCRF5GUCvyiZAKrsA==\n-----END PUBLIC KEY-----\n
"},{"location":"getting_started/#configure-connaisseur","title":"Configure Connaisseur","text":"

Now, we will need to configure Connaisseur. Let's first clone the repository:

git clone https://github.com/sse-secure-systems/connaisseur.git\ncd connaisseur\n

Connaisseur is configured via charts/connaisseur/values.yaml, so we will start there. We need to set Connaisseur to use our previously created public key for validation. To do so, go to .application.validators and find the default validator. We need to uncomment the trust root with name default and add our previously created public key. The result should look similar to this:

Docker Content TrustCosign charts/connaisseur/values.yaml
# the `default` validator is used if no validator is specified in image policy\n- name: default\n  type: notaryv1  # or other supported validator (e.g. \"cosign\")\n  host: notary.docker.io # configure the notary server to be used\n  trustRoots:\n  # the `default` key is used if no key is specified in image policy\n  - name: default\n    key: |  # enter your key below\n      -----BEGIN PUBLIC KEY-----\n      MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAELDzXwqie/P66R3gVpFDWMhxOyol5\n      YWD/KWnAaEIcJVTYUR+21NJSZz0yL7KLGrv50H9kHai5WWVsVykOZNoZYQ==\n      -----END PUBLIC KEY-----\n  #cert: |  # in case the trust data host is using a self-signed certificate\n  #  -----BEGIN CERTIFICATE-----\n  #  ...\n  #  -----END CERTIFICATE-----\n  #auth:  # credentials in case the trust data requires authentication\n  #  # either (preferred solution)\n  #  secretName: mysecret  # reference a k8s secret in the form required by the validator type (check the docs)\n  #  # or (only for notaryv1 validator)\n  #  username: myuser\n  #  password: mypass\n

For Cosign, in contrast, the type needs to be set to cosign and the host is not required.

charts/connaisseur/values.yaml
# the `default` validator is used if no validator is specified in image policy\n- name: default\n  type: cosign  # or other supported validator (e.g. \"cosign\")\n  trustRoots:\n  # the `default` key is used if no key is specified in image policy\n  - name: default\n    key: |  # enter your key below\n      -----BEGIN PUBLIC KEY-----\n      MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEvtc/qpHtx7iUUj+rRHR99a8mnGni\n      qiGkmUb9YpWWTS4YwlvwdmMDiGzcsHiDOYz6f88u2hCRF5GUCvyiZAKrsA==\n      -----END PUBLIC KEY-----\n  #cert: |  # in case the trust data host is using a self-signed certificate\n  #  -----BEGIN CERTIFICATE-----\n  #  ...\n  #  -----END CERTIFICATE-----\n  #auth:  # credentials in case the trust data requires authentication\n  #  # either (preferred solution)\n  #  secretName: mysecret  # reference a k8s secret in the form required by the validator type (check the docs)\n  #  # or (only for notaryv1 validator)\n  #  username: myuser\n  #  password: mypass\n

We have now configured the validator default with trust root default. This will automatically be used if no validator or trust root is specified in the image policy (.application.policy). Per default, Connaisseur's image policy under .application.policy in charts/connaisseur/values.yaml comes with a pattern \"*:*\" that does not specify a validator or trust root, and thus all images that do not match any of the more specific pre-configured patterns will be verified using this validator. Consequently, we leave the rest untouched in this tutorial, but strongly recommend reading the basics to leverage the full potential of Connaisseur.

"},{"location":"getting_started/#deploy-connaisseur","title":"Deploy Connaisseur","text":"

So let's deploy Connaisseur to the cluster:

helm install connaisseur helm --atomic --create-namespace --namespace connaisseur\n

This can take a few minutes. You should see output like:

NAME: connaisseur\nLAST DEPLOYED: Fri Jul  9 20:43:10 2021\nNAMESPACE: connaisseur\nSTATUS: deployed\nREVISION: 1\nTEST SUITE: None\n

Afterwards, we can check that Connaisseur is running via kubectl get all -n connaisseur which should look similar to:

NAME                                          READY   STATUS    RESTARTS   AGE\npod/connaisseur-deployment-6876c87c8c-txrkj   1/1     Running   0          2m9s\npod/connaisseur-deployment-6876c87c8c-wvr7q   1/1     Running   0          2m9s\npod/connaisseur-deployment-6876c87c8c-rnc7k   1/1     Running   0          2m9s\n\nNAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE\nservice/connaisseur-svc   ClusterIP   10.152.183.166   <none>        443/TCP   2m10s\n\nNAME                                     READY   UP-TO-DATE   AVAILABLE   AGE\ndeployment.apps/connaisseur-deployment   3/3     3            3           2m9s\n\nNAME                                                DESIRED   CURRENT   READY   AGE\nreplicaset.apps/connaisseur-deployment-6876c87c8c   3         3         3       2m9s\n
"},{"location":"getting_started/#test-connaisseur","title":"Test Connaisseur","text":"

Now that we created our key pairs, configured and deployed Connaisseur, the next step is to test our setup. So let's create and push a test image. Feel free to use our simple test Dockerfile under tests/Dockerfile (make sure to set your own IMAGE name):

# Typically, IMAGE=<REGISTRY>/<REPOSITORY-NAME>/<IMAGE-NAME>:<TAG>, like\nIMAGE=docker.io/securesystemsengineering/demo:test\ndocker build -f tests/Dockerfile -t ${IMAGE} .\ndocker push ${IMAGE}\n

In case you have DCT turned on per default via the environment variable DOCKER_CONTENT_TRUST=1, you should disable it for now during the docker push by adding the --disable-content-trust=true flag.

If we try to deploy this unsigned image:

kubectl run test --image=${IMAGE}\n

Connaisseur denies the request due to lack of trust data or signed digest, e.g.:

Error from server: admission webhook \"connaisseur-svc.connaisseur.svc\" denied the request: Unable to get root trust data from default.\n# or\nError from server: admission webhook \"connaisseur-svc.connaisseur.svc\" denied the request: No trust data for image \"docker.io/securesystemsengineering/demo:test\".\n# or\nError from server: admission webhook \"connaisseur-svc.connaisseur.svc\" denied the request: could not find signed digest for image \"docker.io/securesystemsengineering/demo:test\" in trust data.\n

So let's sign the image and try again.

Docker Content TrustCosign

With DCT, signing works via docker push using the --disable-content-trust flag:

docker push ${IMAGE} --disable-content-trust=false\n

You will be prompted to provide your password and might be asked to set a new repository key. The trust data will then be pushed to the Docker Hub Notary server.

For Cosign, we use the private key file from the first step:

cosign sign --key cosign.key ${IMAGE}\n

You will be asked to enter your password, after which the signature data will be pushed to your repository.

After successful signing, we try again:

kubectl run test --image=${IMAGE}\n

Now, the request is admitted to the cluster and Kubernetes returns:

pod/test created\n

You did it! You just verified your first signed images in your Kubernetes cluster.

Read on to learn how to fully configure Connaisseur

"},{"location":"getting_started/#cleanup","title":"Cleanup","text":"

To uninstall Connaisseur, use:

helm uninstall connaisseur --namespace connaisseur\n

Uninstallation can take a moment as Connaisseur needs to validate the deletion webhook.

"},{"location":"migrating_to_version_3/","title":"Migrate to Connaisseur version 3.0","text":"

It's been a while since our last major update, but it is time again: Connaisseur version 3.0 is out and brings along many new features, but also breaking changes. For those breaking changes, we've set up a script that migrates your existing Connaisseur configuration. Read on for the list of the most interesting changes.

"},{"location":"migrating_to_version_3/#how-to-migrate","title":"How to migrate","text":"
  1. Read the Major changes and API changes sections to get an overview of what's new
  2. Run python3 scripts/upgrade_to_version_3.py
  3. Check the diff of your helm/values.yaml and make sure everything is as expected
  4. Run make upgrade or alternatively helm upgrade connaisseur helm -n <your-namespace> --wait
  5. Enjoy the new version :crossed_fingers:
"},{"location":"migrating_to_version_3/#major-changes","title":"Major changes","text":""},{"location":"migrating_to_version_3/#minor-changes","title":"Minor changes","text":""},{"location":"migrating_to_version_3/#api-changes","title":"API changes","text":"

Here's the list of changes we made to the Helm values.yaml:

"},{"location":"threat_model/","title":"Threat Model","text":"

DEPRECATION WARNING

The threat model is OUTDATED and does not reflect the currently applicable architecture. It still serves as general guidance on relevant threats.

The STRIDE threat model has been used as a reference for threat modeling. Each of the STRIDE threats was matched to all entities relevant to Connaisseur, including Connaisseur itself. A description of how a threat on an entity manifests itself is given, as well as a possible counter measure.

images created by monkik from Noun Project

"},{"location":"threat_model/#1-developeruser","title":"(1) Developer/User","text":"Threat Description Counter Measure Spoofing A developer could be tricked into signing a malicious image, which subsequently will be accepted by Connaisseur. Security Awareness: Developers need to be aware of these attacks, so they can spot any attempts. Elevation of privilege An attacker could acquire the credentials of a developer or trick her into performing malicious actions, hence elevating their privileges to those of the victim. Depending on the victim's privileges, other attacks may be mounted. RBAC & Security Awareness: With proper Role-Based Access Control (RBAC), the effects of compromising an individual's account would help limit its impact and may mitigate the privilege escalation, of course depending on the victim's access level. Other than that, a security awareness training for developers can help minimize the chances of losing critical credentials."},{"location":"threat_model/#2-connaisseur-service","title":"(2) Connaisseur service","text":"Threat Description Counter Measure Spoofing An attacker could stop the original Connaisseur service and start their own version, to take over the admission controller's responsibilities. That way, the functionality of Connaisseur could be completely disabled or altered at will. RBAC: By only permitting a carefully selected group of people to start and stop services in the Connaisseur namespace, such attacks can be prevented. Tampering Given an attacker has access to the Connaisseur container, she could tamper with its source code, leading to forged responses or full compromise. The attacker could also stop the the original Connaisseur process and handle incoming requests some other way, which would be similar to the spoofing threat, but from inside the Connaisseur container. RBAC + Monitoring: Access to the inside of the container can be restricted with RBAC, so an attacker never gets there in the first place. In case the attacker already is inside the container, there are specific monitoring tools (e.g. falco), which are able to register changes inside containers and notify you, should Connaisseur be compromised. Tampering An attacker could modify Connaisseur's image policy to bypass signature verification and inject malicious images. Alternatively, the public root key could be replaced, allowing fake trust data to pass as legit. Lastly, the admission controller could be simply deactivated by deleting the webhook. RBAC + Monitoring: An RBAC system can prevent unauthorized changes to both the image policy and public root key. Additionally, the Connaisseur readiness probe checks the availability of the webhook and will be set to Not Ready should the webhook not be present. Monitoring should still be used to keep track of the admission controller's webhook availability status, as setting up a fake connaisseur-bootstrap-sentinel pod in the connaisseur namespace can bypass that readiness probe check. More on that in an upcoming architectural decision record. Denial of service When sending an extraordinary amount of requests to Connaisseur or triggering unexpected behavior, Connaisseur might become unresponsive or crash. As a result, image signatures can't be verified. Failure Policy: The webhook that is connected to Connaisseur denies all request automatically, should the Connaisseur service be unavailable. Thus, malicious images cannot enter the cluster. Additionally, multiple instances of Connaisseur can be run for better load balancing. 
Elevation of privilege Since Connaisseur interacts with the Kubernetes API, an attacker located inside the Connaisseur container can act on its behalf and use its permissions. RBAC: Per default, the Connaisseur service account only has read permissions to a few non-critical objects."},{"location":"threat_model/#3-notary-server","title":"(3) Notary server","text":"Threat Description Counter Measure Spoofing An attacker could mount a Monster-in-the-Middle attack between Notary and the Connaisseur service and act as a fake Notary, sending back false trust data. TLS: A TLS connection between Connaisseur and Notary ensures the Notary server's authenticity. Tampering With full control over the Notary server, the stored trust data can be manipulated to include digests of malicious images. Signatures: Changing the trust data would invalidate the signatures and thus fail the image verification. Additionally, the keys needed to create valid signatures are not stored in Notary, but offline on the client side. Information disclosure As Notary is responsible for creating the snapshot and timestamp signatures, an attacker could steal those private keys and create valid snapshot and timestamp signatures. Key rotation: The snapshot and timestamp keys can easily be rotated and changed frequently. The more critical root and target keys are not stored on the server side. Denial of service An extraordinary amount of requests to the Notary server could bring it down so that the Connaisseur service has no more trust data available to work with. Health Probe: Connaisseur's readiness and liveness probes check the Notary server's health every few seconds. Should Notary be unavailable, Connaisseur will switch into a not-ready state. As a consequence, the failure policy will automatically deny all requests."},{"location":"threat_model/#4-registry","title":"(4) Registry","text":"Threat Description Counter Measure Spoofing An attacker could mount a Monster-in-the-Middle attack between the registry and the Kubernetes cluster and act as a fake registry, sending back malicious images. TLS: A TLS connection between the Kubernetes cluster and the registry ensures that the registry is authentic. Tampering With full control over the registry, an attacker may introduce malicious images or change the layers of existing ones and thus inject malicious content. Image Digests: Introducing new images does not work as Connaisseur selects them by digest. An attacker would have to change the content of the corresponding digest layer, while the changes need to produce the same digest. Such a hash collision is considered practically impossible. If digests differ, the docker daemon underlying the cluster will deny the image. Denial of service An extraordinary amount of requests to the registry could bring it down, so that no images can be pulled from it. Out of scope: This threat is specific to registries, not Connaisseur."},{"location":"adr/","title":"Architecture Decision Records","text":"

We strive to make decisions taken during the development of Connaisseur transparent, especially whenever they may seem weird or unintuitive to someone new to the project.

Hence, when encountering a problem that took either considerable time to find a solution for or that spawned a lot of discussion, be it internal or from the community, the decision, along with the factors leading up to the particular choice, should be documented. Additionally, we should make clear what other options were under consideration and why they were discarded, both to make the decision comprehensible to people not involved at the time and to avoid repeating discussions at a later point in time.

Since each Architecture Decision may be slightly different, the format is not completely set in stone. However, you should give at least a title, status, some context, the decisions taken and options discarded, and some reasoning as to why one option was deemed better than the others.

"},{"location":"adr/ADR-1_bootstrap-sentinel/","title":"ADR 1: Bootstrap Sentinel","text":""},{"location":"adr/ADR-1_bootstrap-sentinel/#status","title":"Status","text":"

Amended in ADR-3. Deprecated as of ADR-5.

"},{"location":"adr/ADR-1_bootstrap-sentinel/#context","title":"Context","text":"

Connaisseur's main components are a MutatingWebhookConfiguration and the Connaisseur Pods. The MutatingWebhookConfiguration intercepts requests to create or update Kubernetes resources and forwards them to the Connaisseur Pods tasked, on a high level, with verifying trust data. The order of deploying both components matters, since a blocking MutatingWebhookConfiguration without the Connaisseur Pods to answer its requests would also block the deployment of said Pods.

In #3 it was noted that prior to version 1.1.5 of Connaisseur, the Connaisseur Pods could report Ready while being non-functional due to the MutatingWebhookConfiguration missing. However, as stated above, the MutatingWebhookConfiguration can only be deployed after the Connaisseur Pods, which was solved by checking the Ready state of said Pods. If one were to add a dependency to this Ready state, such that it only shows Ready when the MutatingWebhookConfiguration exists, we would run into a deadlock, where the MutatingWebhookConfiguration waits for the Pods and the Pods wait for the MutatingWebhookConfiguration.

"},{"location":"adr/ADR-1_bootstrap-sentinel/#considered-options","title":"Considered options","text":""},{"location":"adr/ADR-1_bootstrap-sentinel/#option-1","title":"Option 1","text":"

At the start of the Helm deployment, one can create a Pod named connaisseur-bootstrap-sentinel that will run for 5 minutes (which is also the installation timeout by Helm). Connaisseur Pods will report Ready if they can 1) access notary AND 2) the MutatingWebhookConfiguration exists OR 3) the connaisseur-bootstrap-sentinel Pod is still running. If 1) AND 2) both hold true, the sentinel is killed even if the 5 minutes have not passed yet.

"},{"location":"adr/ADR-1_bootstrap-sentinel/#option-2","title":"Option 2","text":"

Let Connaisseur's Pod readiness stay non-indicative of Connaisseur functioning and advertise that someone running Connaisseur has to monitor the MutatingWebhookConfiguration in order to ensure proper working.

"},{"location":"adr/ADR-1_bootstrap-sentinel/#option-3","title":"Option 3","text":"

Deploy MutatingWebhookConfiguration through Helm when Connaisseur Pods are healthy instead of when ready. Require Pod started and working notary connection for health and require additionally the existence of the MutatingWebhookConfiguration for readiness.

"},{"location":"adr/ADR-1_bootstrap-sentinel/#decision-outcome","title":"Decision outcome","text":"

We chose option 1 over option 2, because it was important to us that a brief glance at Connaisseur's Namespace allows one to judge whether it is running properly. Option 3 was not chosen as the readiness status of Pods can be easily seen from the Service, whereas the health status would require querying every single Pod individually. We deemed that to be a very ugly, non-kubernetes-y solution and hence decided against it.

"},{"location":"adr/ADR-1_bootstrap-sentinel/#positive-consequences","title":"Positive consequences","text":"

If the Connaisseur Pods report Ready during the connaisseur-bootstrap-sentinel's runtime, the MutatingWebhookConfiguration will be deployed by Helm. Otherwise, the Helm deployment will fail after its timeout period (default: 5min), since there won't be a running connaisseur-bootstrap-sentinel Pod anymore that resolves the installation deadlock. The Connaisseur Pods will never reach the Ready state and the MutatingWebhookConfiguration never gets deployed. This means we get consistent deployment failures after the initial waiting period if something did not work out. Additionally, if the MutatingWebhookConfiguration gets removed for whatever reason during operation, Connaisseur Pods will be failing, indicating their failed dependency. Hence, monitoring the Connaisseur Pods is sufficient to ensure they are working.

"},{"location":"adr/ADR-1_bootstrap-sentinel/#negative-consequences","title":"Negative consequences","text":"

On the other hand, if an adversary can deploy a Pod named connaisseur-bootstrap-sentinel to Connaisseur's Namespace, the Connaisseur Pods will always show Ready regardless of the MutatingWebhookConfiguration. However, if an adversary can deploy to Connaisseur's Namespace, chances are Connaisseur can be compromised anyway. More importantly, if not a single Connaisseur Pod is successfully deployed or if the notary healthcheck fails during the sentinel's lifetime, then the deployment will fail regardless of possible recovery at a later time. Another issue would be the connaisseur-bootstrap-sentinel Pod being left behind; however, since it has a very limited use case, we can also clean it up during the deployment, so apart from the minimal additional complexity of the deployment this is a non-issue.

"},{"location":"adr/ADR-2_release-management/","title":"ADR 2: Release Management","text":""},{"location":"adr/ADR-2_release-management/#status","title":"Status","text":"

Proposed

"},{"location":"adr/ADR-2_release-management/#context","title":"Context","text":"

During its initial development Connaisseur was more or less maintained by a single person and not released frequently. Hence, the easiest option was to just have the maintainer build and push at certain stages of development. With the influx of more team members, the number of contributions and hence the number of needed/reasonable releases went up. Also since publication, it is more important that the uploaded Connaisseur image corresponds to the most recent version referenced in the Helm chart.

A single person having to build, sign and push the images whenever a new pull request is accepted is hence impractical for both development and agility.

"},{"location":"adr/ADR-2_release-management/#considered-options","title":"Considered options","text":""},{"location":"adr/ADR-2_release-management/#choice-1","title":"Choice 1","text":"

What branches to maintain

"},{"location":"adr/ADR-2_release-management/#option-1","title":"Option 1","text":"

Continue with PRs from personal feature branches to master.

"},{"location":"adr/ADR-2_release-management/#option-2","title":"Option 2","text":"

Have a development branch against which to create pull requests (during usual development, hotfixes may be different).

Sub-options: - a develop (or similar) branch that will exist continuously - a v.1.5.0_dev (or similar) branch for each respective version

"},{"location":"adr/ADR-2_release-management/#choice-2","title":"Choice 2","text":"

Where to sign the images

"},{"location":"adr/ADR-2_release-management/#option-1_1","title":"Option 1","text":"

Have the pipeline build, sign and push the images.

"},{"location":"adr/ADR-2_release-management/#option-2_1","title":"Option 2","text":"

Have a maintainer build, sign and push the images.

"},{"location":"adr/ADR-2_release-management/#decision-outcome","title":"Decision outcome","text":"

For choice 1, we decided to go for two branches. On the one hand, master being the branch that contains the code of the latest release and will be tagged with release versions. On the other hand, there will be a develop branch that hosts the current state of development and will be merged to master whenever we want to create a new release.

This way we get rid of the current pain of releasing with every pull request at the cost of some overhead during release.

In the process of automating most of the release process, we will run an integration test with locally built images for pull requests to master. Regarding choice 2, whenever a pull request is merged, whoever merged the PR has to tag this commit on the master branch with the most recent version. Right after the merge, whoever merged the PR builds, signs and pushes the new Connaisseur release and creates a tag on the master branch referencing the new release version.

After the image is pushed and the new commit tagged, the pipeline will run the integration test with the image pulled from Docker Hub to ensure that the released version is working.

We decided on this option as it does not expose credentials to GitHub Actions, which we wanted to avoid especially in light of the recent GitHub Actions injection attacks, and because exposing them would also prevent us from opening up the repository to Pull Requests. To alleviate the work required for doing the steps outside the pipeline, we use a shell script that automates these steps given a suitable environment, i.e. Docker context and DCT keys.

"},{"location":"adr/ADR-2_release-management/#positive-consequences","title":"Positive consequences","text":""},{"location":"adr/ADR-2_release-management/#negative-consequences","title":"Negative consequences","text":""},{"location":"adr/ADR-3_multi_notary_config/","title":"ADR 3: Multiple Notary Configuration","text":""},{"location":"adr/ADR-3_multi_notary_config/#status","title":"Status","text":"

Accepted

"},{"location":"adr/ADR-3_multi_notary_config/#context","title":"Context","text":"

Previously, Connaisseur only supported the configuration of a single notary, in which all signature data had to reside. Unfortunately this is rather impractical, as one doesn't create all signatures for all images one uses in a cluster. There is a need to access signature data from multiple places, like in a setup where most images come from a private registry + notary and some from Docker Hub and their notary.

There is also the problem that a single notary instance could use multiple root keys for creating the signatures, like in the case of Docker Hub. Connaisseur, however, only supports trust pinning a single root key, which is impractical.

That's why the decision was made to support more than one notary and multiple keys per notary, which leads to the question of how the new configuration should look. This also has implications for the notary health check, which is important for Connaisseur's own readiness check.

"},{"location":"adr/ADR-3_multi_notary_config/#considered-options","title":"Considered options","text":""},{"location":"adr/ADR-3_multi_notary_config/#choice-1","title":"Choice 1","text":"

The overall notary configuration setup in charts/connaisseur/values.yaml.

"},{"location":"adr/ADR-3_multi_notary_config/#option-1-per-notary","title":"Option 1 (Per Notary)","text":"

The notary field becomes a list and changes to notaries. There will be one entry in this list per notary instance to be used.

The entry will have the following data fields (bold are mandatory):

The image policy will have two additional fields per rule entry (fields in \"quotes\" are already present):

"},{"location":"adr/ADR-3_multi_notary_config/#option-2-per-notary-key","title":"Option 2 (Per Notary + Key)","text":"

The notary field becomes a list and changes to notaries. Per notary + public root key combination, there is one entry. Meaning, for example, there will be one entry for Docker Hub and the public key for all official images and there will be another entry for Docker Hub and the public key for some private images.

The entries will look identical to the ones from option 1, with two exceptions.

  1. The pub_root_keys field of the notary configurations won't be a list and only has a single entry, without needing to specify a key name.

  2. The image policy will only address the notary configuration to be chosen with the notary field, without the need for a key field.

"},{"location":"adr/ADR-3_multi_notary_config/#choice-2","title":"Choice 2","text":"

Default values for notary (and key) inside the image policy.

"},{"location":"adr/ADR-3_multi_notary_config/#option-1-first-item","title":"Option 1 (First item)","text":"

When no notary is specified in an image policy rule, the first entry in the notaries configuration list is taken. The same goes for the public root key list, should option 1 for choice 1 be chosen.

Problem: Might get inconsistent, should the list ordering in Python get shuffled around.

"},{"location":"adr/ADR-3_multi_notary_config/#option-2-explicit-default","title":"Option 2 (Explicit default)","text":"

One of the notary configurations will be given a default field, which marks it as the default.

Problem: No real problems here, just an extra field that the user has to care about.

"},{"location":"adr/ADR-3_multi_notary_config/#option-3-mandatory-notary","title":"Option 3 (Mandatory Notary)","text":"

The notary (and potentially key) field is mandatory for the image policy.

Problem: Creates configuration overhead if many image policies use the same notary/key combination.

"},{"location":"adr/ADR-3_multi_notary_config/#option-4-default-name","title":"Option 4 (Default name)","text":"

If no notary or key is given in the image policy, it is assumed that one of the elements in the notary list or key list has name: \"default\", which will then be taken. Should the assumption be wrong, an error is raised.

"},{"location":"adr/ADR-3_multi_notary_config/#choice-3","title":"Choice 3","text":"

Previously, the readiness probe for Connaisseur also considered the notary's health for its own status. With multiple notary instances configured, this behavior changes.

"},{"location":"adr/ADR-3_multi_notary_config/#option-1-ignore-notary","title":"Option 1 (Ignore Notary)","text":"

The readiness probe of Connaisseur will no longer be dependent on any notary health checks. They are completely decoupled.

Problem: There is no way of knowing that Connaisseur will automatically fail because of an unreachable notary before one tries to deploy an image.

"},{"location":"adr/ADR-3_multi_notary_config/#option-2-health-check-on-all","title":"Option 2 (Health check on all)","text":"

In order for Connaisseur to be ready, all configured notaries must be healthy and reachable.

Problem: A single unreachable notary will \"disable\" Connaisseur's access to all others.

"},{"location":"adr/ADR-3_multi_notary_config/#option-3-log-notary-status","title":"Option 3 (Log Notary status)","text":"

A mix of options 1 and 2, where the readiness of Connaisseur is independent of the notary health checks, but the checks are still performed, so unhealthy notaries can be logged.

Problem: At what interval should this be logged?

"},{"location":"adr/ADR-3_multi_notary_config/#decision-outcome","title":"Decision outcome","text":""},{"location":"adr/ADR-3_multi_notary_config/#choice-1_1","title":"Choice 1","text":"

Option 1 was chosen to keep configuration duplication to a minimum.

"},{"location":"adr/ADR-3_multi_notary_config/#choice-2_1","title":"Choice 2","text":"

Option 4 was chosen. If more than one notary configuration or key within a configuration is present, one of those can be called \"default\" (by setting the name field). That way it should be obvious enough which configuration or key will be used if not further specified within the image policy, while keeping configuration effort low.

"},{"location":"adr/ADR-3_multi_notary_config/#choice-3_1","title":"Choice 3","text":"

Option 3 was chosen. Notary and Connaisseur will be completely decoupled, with Connaisseur logging all notaries it can't reach. This way Connaisseur can still be operational, even with all notaries being unreachable. Otherwise Connaisseur would have blocked even images that were allowlisted. This is a breaking change, but we agreed that it is better as it allows e.g. deployments for which the respective image policy specifies verify: false.

"},{"location":"adr/ADR-4_modular/","title":"ADR 4: Modular Validation","text":""},{"location":"adr/ADR-4_modular/#status","title":"Status","text":"

Accepted

"},{"location":"adr/ADR-4_modular/#context","title":"Context","text":"

With the emergence of notaryv2 and similar projects like Cosign, the opportunity arises for Connaisseur to support multiple signing mechanisms and combine them all into a single validation tool. For that to work, the internal validation mechanism of Connaisseur needs to be more modular, so we can easily swap different methods in and out.

"},{"location":"adr/ADR-4_modular/#considered-options","title":"Considered options","text":""},{"location":"adr/ADR-4_modular/#configuration-changes-choice-1","title":"Configuration changes (Choice 1)","text":"

Obviously some changes have to be made to the configuration of Connaisseur, but this splits into changes for the previous notary configurations and the image policy.

"},{"location":"adr/ADR-4_modular/#notary-configuration-11","title":"\"Notary\" configuration (1.1)","text":"

With notaryv1, all trust data always resided in a notary server, for which Connaisseur needed the URL, authentication credentials, etc. This isn't true anymore for notaryv2 or Cosign, where Connaisseur may need other data, meaning the configuration depends on the type of validation method used. Also, other mechanisms such as digest whitelisting, which doesn't even include cryptographic material, might be considered in the future.

"},{"location":"adr/ADR-4_modular/#111-structure","title":"1.1.1 Structure","text":""},{"location":"adr/ADR-4_modular/#option-1111","title":"Option 1.1.1.1","text":"

The previous notaries section in the values.yaml changes to validators, in which different validation methods (validators) can be defined. The minimum required fields for a validator are a name for later referencing and a type for knowing its correct kind.

validators:\n- name: \"dockerhub-nv2\"\n  type: \"notaryv2\"\n  ...\n- name: \"harbor-nv1\"\n  type: \"notaryv1\"\n  host: \"notary.harbor.io\"\n  root_keys:\n    - name: \"default\"\n      key: \"...\"\n- name: \"cosign\"\n  type: \"cosign\"\n  ...\n

Depending on the type, additional fields might be required, e.g. the notaryv1 type requires a host and root_keys field.

NB: JSON schema validation works for the above and can easily handle various configurations based on type in there.

"},{"location":"adr/ADR-4_modular/#decision","title":"Decision","text":"

We are going with this structure (option 1.1.1.1) due to the lack of other alternatives. It provides all needed information and the flexibility to use multiple validation methods, as needed.

"},{"location":"adr/ADR-4_modular/#112-sensitive-values","title":"1.1.2 Sensitive values","text":"

If we allow multiple validators that may contain different forms of sensitive values, i.e. notary credentials, symmetric keys, service principals, ..., they need to be properly handled within the Helm chart with respect to ConfigMaps and Secrets. Currently, the distinction is hard-coded.

"},{"location":"adr/ADR-4_modular/#option-1121","title":"Option 1.1.2.1","text":"

Add an optional sensitive([-_]fields) field at the validator config top level. Any sensitive values go in there and will be handled by the Helm chart to go into a secret. Any other values are treated as public and go into the ConfigMap.

Advantages: - Generic configuration - Could be used by potential plugin validators to have their data properly handled (potential future) - Hard to forget the configuration for newly implemented validators

Disadvantage: If implemented in a config = merge(secret, configmap) way, might allow sensitive values in configmap and Connaisseur still working

"},{"location":"adr/ADR-4_modular/#option-1122","title":"Option 1.1.2.2","text":"

Hard-code sensitive values based on validator type

Advantages: Can do very strict validation on fields without extra work

Disadvantages: - Helm chart change might be forgotten for new validator - Helm chart release required for new validator - Does not \"natively\" allow plugins

"},{"location":"adr/ADR-4_modular/#decision_1","title":"Decision","text":"

We are going with option 1.1.2.2 and hard-code the sensitive fields, to prevent users from misconfiguring and accidentally putting sensitive parts into ConfigMaps.

"},{"location":"adr/ADR-4_modular/#image-policy-12","title":"Image policy (1.2)","text":"

For the image policy similar changes to the notary configuration have to be made.

"},{"location":"adr/ADR-4_modular/#proposition","title":"Proposition","text":"

The previous notary field in the image policy will be changed to validator, referencing a name field of one item in the validators list. Any additional fields, e.g. required delegation roles for a notaryv1 validator will be given in a with field. This will look similar to this:

policy:\n- pattern: \"docker.harbor.io/*:*\"\n  validator: \"harbor-nv1\"\n  with:\n    key: \"default\"\n    delegations:\n    - lou\n    - max\n- pattern: \"docker.io/*:*\"\n  validator: \"dockerhub-nv2\"\n
"},{"location":"adr/ADR-4_modular/#option-1211","title":"Option 1.2.1.1","text":"

Besides the self-configured validators, two additional validators will be available: allow and deny. The allow validator will allow any image and the deny validator will deny anything.

Advantages: More powerful than verify flag, i.e. has explicit deny option.

Disadvantages: More config changes for users

"},{"location":"adr/ADR-4_modular/#option-1212","title":"Option 1.2.1.2","text":"

Stick with current verify flag.

Advantages: Config known for current users

Disadvantages: No explicit deny option

"},{"location":"adr/ADR-4_modular/#decision_2","title":"Decision","text":"

We are going with option 1.2.1.1, as it requires no additional fields and offers more powerful configuration options.

"},{"location":"adr/ADR-4_modular/#option-1221","title":"Option 1.2.2.1","text":"

When no validator is given, default to the deny validator.

Advantages: Easy

Disadvantages: Not explicit

"},{"location":"adr/ADR-4_modular/#option-1222","title":"Option 1.2.2.2","text":"

Require validator in policy config.

Advantages: Explicit configuration, no accidental denial of images

Disadvantages: ?

"},{"location":"adr/ADR-4_modular/#decision_3","title":"Decision","text":"

We are going with option 1.2.2.1 as it reduces configurational effort and is consistent with the key selection behavior.

"},{"location":"adr/ADR-4_modular/#option-1231","title":"Option 1.2.3.1","text":"

The validators from option 1.2.1.1 (allow and deny) will be purely internal, and additional validators cannot be named \"allow\" or \"deny\".

Advantages: Less configurational effort

Disadvantage: A bit obscure for users

"},{"location":"adr/ADR-4_modular/#option-1232","title":"Option 1.2.3.2","text":"

The allow and deny validators will be added to the default configuration as type: static with an extra argument (name up for discussion) that specifies whether everything should be denied or allowed. E.g.:

validators:\n- name: allow\n  type: static\n  approve: true\n- name: deny\n  type: static\n  approve: false\n- ...\n

Advantages: No obscurity; if users don't need these, they can delete them.

Disadvantage: Bigger config file ...?

"},{"location":"adr/ADR-4_modular/#decision_4","title":"Decision","text":"

We are going with option 1.2.3.2 as we favor less obscurity over the \"bigger\" configurational \"effort\".

"},{"location":"adr/ADR-4_modular/#validator-interface-choice-2","title":"Validator interface (Choice 2)","text":"

See validator interface

Should validation return JSON patch or digest?

"},{"location":"adr/ADR-4_modular/#option-211","title":"Option 2.1.1","text":"

Validator.validate creates a JSON patch for the k8s request. Hence, different validators might make changes in addition to transforming tag to digest.

Advantages: More flexibility in the future

Disadvantages: We open the door to changes that are not core to Connaisseur functionality

"},{"location":"adr/ADR-4_modular/#option-212","title":"Option 2.1.2","text":"

Validator.validate returns a digest and Connaisseur uses the digest in a \"standardized\" way to create a JSON patch for the k8s request.
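For illustration, the \"standardized\" patch derived from a returned digest would roughly be a JSON patch replacing the image reference, e.g. for a Deployment (a sketch only; the path depends on the resource kind and container index):

[\n  {\n    \"op\": \"replace\",\n    \"path\": \"/spec/template/spec/containers/0/image\",\n    \"value\": \"docker.io/library/redis@sha256:<digest>\"\n  }\n]\n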

Advantage: No code duplication and we stay with the core feature of translating input data into a trusted digest

Disadvantages: Allowing additional changes would require additional work if we wanted to allow them in the future

"},{"location":"adr/ADR-4_modular/#decision_5","title":"Decision","text":"

We are going with option 2.1.2 as all current and upcoming validation methods return a digest.

"},{"location":"adr/ADR-5_no-more-bootstrap/","title":"ADR 5: No More Bootstrap Pods","text":""},{"location":"adr/ADR-5_no-more-bootstrap/#status","title":"Status","text":"

Accepted

"},{"location":"adr/ADR-5_no-more-bootstrap/#context","title":"Context","text":"

Installing Connaisseur isn't as simple as one might think. There is more to it than just applying some yaml files, all due to its nature as an admission controller, which might block itself in various ways. This ADR depicts some issues during installation of Connaisseur and shows solutions that try to make the process simpler and easier to understand.

"},{"location":"adr/ADR-5_no-more-bootstrap/#problem-1-installation-order","title":"Problem 1 - Installation order","text":"

Connaisseur's installation order is fairly critical. The webhook responsible for intercepting all requests is dependent on the Connaisseur pods and can only work if those pods are available and ready. If they are not and the FailurePolicy is set to Fail, the webhook will block anything and everything, including the Connaisseur pods themselves. This means the webhook must be installed after the Connaisseur pods are ready. This was previously solved using the post-install Helm hook, which installs the webhook configuration after all other resources have been applied and are considered ready. For installation purposes alone, this solution suffices. A downside is that every resource installed via a Helm hook isn't natively considered to be part of the chart, meaning a helm uninstall would completely ignore those resources and leave the webhook configuration in place. Then the situation of everything and anything being blocked arises again. Additionally, upgrading won't be possible, since you can't tell Helm to temporarily delete resources and then reapply them. That's why the helm-hook image and bootstrap-sentinel were introduced. They were used to temporarily delete the webhook and reapply it before and after installations, in order to beat the race conditions. Unfortunately, this solution always felt a bit clunky and added a lot of complexity for a seemingly simple problem.

"},{"location":"adr/ADR-5_no-more-bootstrap/#solution-11-empty-webhook-as-part-of-helm-release","title":"Solution 1.1 - Empty webhook as part of Helm release","text":"

The bootstrap sentinel and helm-hook image won't be used anymore. Instead, an empty webhook configuration (a configuration without any rules) will be applied along with all other resources during the normal Helm installation phase. This way the webhook can be normally deleted with the helm uninstall command. Additionally, during the post-install (and post-upgrade/post-rollback) Helm hook, the webhook will be updated so it can actually intercept incoming requests. So in a sense an unloaded webhook gets installed, which then gets \"armed\" during post-install. This also works during an upgrade, since the now \"armed\" webhook will be overwritten by the empty one when trying to apply the chart again! This will obviously be reverted again after upgrading, with a post-upgrade Helm hook.

Pros: Less clunky and more k8s native. Cons: Connaisseur will be deactivated for a short time during upgrading.

"},{"location":"adr/ADR-5_no-more-bootstrap/#solution-12-bootstrap-sentinel-and-helm-hook","title":"Solution 1.2 - Bootstrap Sentinel and Helm hook","text":"

Everything stays as is! The Helm hook image is still used to (un)install the webhook, while the bootstrap sentinel is there to mark the Connaisseur pods as ready for initial installation.

Pros: Never change a running system. Cons: Clunky, at times confusing for anyone not familiar with the Connaisseur installation order problem, inactive webhook during upgrade.

"},{"location":"adr/ADR-5_no-more-bootstrap/#solution-13-uninstallation-of-webhook-during-helm-hooks","title":"Solution 1.3 - (Un)installation of webhook during Helm hooks","text":"

The webhook can easily be installed during the post-install step of the Helm installation, but then it isn't part of the Helm release and can't be uninstalled, as mentioned above. With a neat little trick this is still possible: in the post-delete step the webhook can be reapplied in an empty (\"unarmed\") form, while setting the hook-delete-policy to delete the resource either way (no matter whether the Helm hook step fails or not). So in a way the webhook gets reapplied and then immediately deleted. This still works when upgrading Connaisseur if a rolling update strategy is pursued, meaning the old pods will still be available for admitting the new ones, and as more and more new pods become ready, the old ones get deleted.
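A minimal sketch of the \"unarmed\" webhook manifest with the corresponding Helm hook annotations (the resource name is illustrative):

apiVersion: admissionregistration.k8s.io/v1\nkind: MutatingWebhookConfiguration\nmetadata:\n  name: connaisseur-webhook\n  annotations:\n    helm.sh/hook: post-delete\n    helm.sh/hook-delete-policy: hook-succeeded,hook-failed\nwebhooks: []  # empty, i.e. \"unarmed\"; removed again right away by the delete policy\n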

Pros: Less clunky and more k8s native, no inactivity of the webhook during upgrade. Cons: Slower upgrade of Connaisseur compared to solution 1.

"},{"location":"adr/ADR-5_no-more-bootstrap/#decision-outcome-1","title":"Decision outcome (1)","text":"

Solution 1.3 was chosen, as it is the more Kubernetes native way of doing things and Connaisseur will be always available, even during its own upgrade.

"},{"location":"adr/ADR-5_no-more-bootstrap/#problem-2","title":"Problem 2","text":"

All admission webhooks must use TLS for communication purposes or they won't be accepted by Kubernetes. That is why Connaisseur creates its own self-signed certificate, which it uses for communication between the webhook and its pods. This certificate is created within the Helm chart, using the native genSelfSignedCert function, which makes Connaisseur pipeline friendly as there is no need for additional package installation such as OpenSSL. Unfortunately, this certificate gets created every time Helm is used, whether that be a helm install or a helm upgrade. Especially during an upgrade, the webhook will get a new certificate, while the pods will get their new one written into a secret. The problem is that the pods will only pick up the new certificate inside the secret once they are restarted. If no restart happens, the pods and webhook will have different certificates and any validation will fail.

"},{"location":"adr/ADR-5_no-more-bootstrap/#solution-21-lookup","title":"Solution 2.1 - Lookup","text":"

Instead of always generating a new certificate, the lookup function for Helm templates could be used to see whether there already is a secret defined that contains a certificate and then use this one. This way the same certificate is reused the whole time so no pod restarts are necessary. Should there be no secret with certificate to begin with, a new one can be generated within the Helm chart.
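A minimal sketch of what this could look like inside the certificate Secret template, using Helm's lookup and genSelfSignedCert functions (secret name, service name and validity period are illustrative):

{{- $secret := lookup \"v1\" \"Secret\" .Release.Namespace \"connaisseur-tls\" -}}\n{{- if $secret }}\n# reuse the existing certificate so webhook and pods stay in sync\ntls.crt: {{ index $secret.data \"tls.crt\" }}\ntls.key: {{ index $secret.data \"tls.key\" }}\n{{- else }}\n# first installation: generate a fresh self-signed certificate\n{{- $cert := genSelfSignedCert \"connaisseur-svc.connaisseur.svc\" (list) (list \"connaisseur-svc.connaisseur.svc\") 365 }}\ntls.crt: {{ $cert.Cert | b64enc }}\ntls.key: {{ $cert.Key | b64enc }}\n{{- end }}\n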

Pros: No need for restarts and changing of TLS certificates. Cons: The lookup function takes some time to gather the current certs.

"},{"location":"adr/ADR-5_no-more-bootstrap/#solution-22-restart","title":"Solution 2.2 - Restart","text":"

On each upgrade of the Helm release, all pods will be restarted so they incorporate the new TLS secrets.

Pros: - Cons: Restarting takes time and may break if too many Connaisseur pods are unavailable at the same time.

"},{"location":"adr/ADR-5_no-more-bootstrap/#solution-23-external-tls","title":"Solution 2.3 - External TLS","text":"

Go back to using an external TLS certificate which is not being generated within the Helm chart, but by pre-configuring it or using OpenSSL.

Pros: Fastest solution. Cons: More configurational effort and/or not pipeline friendly (may need OpenSSL).

"},{"location":"adr/ADR-5_no-more-bootstrap/#decision-outcome-2","title":"Decision outcome (2)","text":"

Solution 2.1 is being implemented, as it is important that Connaisseur works with as little configuration effort as possible from the get-go. Nonetheless, an external configuration of TLS certificates is still considered for later development.


"},{"location":"adr/ADR-6_dynamic-config/","title":"ADR 6: Dynamic Configuration","text":""},{"location":"adr/ADR-6_dynamic-config/#status","title":"Status","text":"

Accepted

"},{"location":"adr/ADR-6_dynamic-config/#context","title":"Context","text":"

The configuration of validators is mounted into Connaisseur as a ConfigMap, as is common practice in the Kubernetes ecosystem. When this ConfigMap is upgraded, say with a helm upgrade, the resource itself in Kubernetes is updated accordingly, but that doesn't mean it's automatically updated inside the pods which mounted it. That only occurs once the pods are restarted; until then, the pods still have an old version of the configuration lingering around. This is fairly unintuitive behavior and the reason why Connaisseur doesn't mount the image policy into its pods. Instead, the pods have access to the kube API and get the image policy dynamically from there. The same could be done for the validator configuration, but there is also another solution.

"},{"location":"adr/ADR-6_dynamic-config/#problem-1-access-to-configuration","title":"Problem 1 - Access to configuration","text":"

How should Connaisseur get access to its configuration files?

"},{"location":"adr/ADR-6_dynamic-config/#solution-11-dynamic-access","title":"Solution 1.1 - Dynamic access","text":"

This is the same solution as currently employed for the image policy configuration. The validators will get their own CustomResourceDefinition and Connaisseur gets access to this resource via RBAC so it can use the kube API to read the configuration.

Pros: Pods don't need to be restarted and the configuration can be changed \"on the fly\", without using Helm. Cons: Not a very Kubernetes native approach and Connaisseur must always do some network requests to access its config.

"},{"location":"adr/ADR-6_dynamic-config/#solution-12-restart-pods","title":"Solution 1.2 - Restart pods","text":"

The other solution would be to use ConfigMaps for validators and image policy and then restart the pods once the configuration changes. This can be achieved by setting the hash of the config files as an annotation on the deployment's pod template. If the configuration changes, the hash changes and thus a new rollout is triggered, as the pod template has a new annotation. This corresponds to the suggestion made by Helm.
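A minimal sketch of that suggestion (the template path is illustrative):

apiVersion: apps/v1\nkind: Deployment\nspec:\n  template:\n    metadata:\n      annotations:\n        # changes whenever the rendered ConfigMap changes and thereby triggers a rollout\n        checksum/config: {{ include (print $.Template.BasePath \"/configmap.yaml\") . | sha256sum }}\n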

Pros: Kubernetes native and no more CustomResourceDefinitions! Cons: No more \"on the fly\" changes.

"},{"location":"adr/ADR-6_dynamic-config/#decision-outcome-1","title":"Decision Outcome (1)","text":"

Solution 1.2 was chosen, going with the more Kubernetes native way.

"},{"location":"adr/ADR-6_dynamic-config/#problem-2-how-many-configmaps-are-too-many","title":"Problem 2 - How many configmaps are too many?","text":"

When both the image policy and validator configurations are either CustomResourceDefinitions or ConfigMaps, is there still a need to separate them or can they be merged into one file?

"},{"location":"adr/ADR-6_dynamic-config/#solution-21-2-concerns-2-resources","title":"Solution 2.1 - 2 concerns, 2 resources","text":"

There will be 2 resources, one for the image policy and one for the validators.

"},{"location":"adr/ADR-6_dynamic-config/#solution-22-one-to-rule-them-all","title":"Solution 2.2 - One to rule them all","text":"

One Ring to rule them all, One Ring to find them, One Ring to bring them all and in the darkness bind them.

"},{"location":"adr/ADR-6_dynamic-config/#decision-outcome-2","title":"Decision Outcome (2)","text":"

Solution 2.2 was chosen as it is the simpler of the two.

"},{"location":"adr/ADR-7_wsgi-server/","title":"ADR 7: WSGI Server","text":""},{"location":"adr/ADR-7_wsgi-server/#status","title":"Status","text":"

Accepted

"},{"location":"adr/ADR-7_wsgi-server/#context","title":"Context","text":"

We were running the Flask WSGI application with the built-in Flask server, which is not meant for production. Problems are mainly due to the potential debug shell on the server and the single thread in the default configuration. Both were mitigated in our setup, but we decided to test a proper WSGI server at some point. Especially the log entry

 * Serving Flask app 'connaisseur.flask_server' (lazy loading)\n * Environment: production\n   WARNING: This is a development server. Do not use it in a production deployment.\n   Use a production WSGI server instead.\n
did cause anguish among users, see e.g. issue 11.

"},{"location":"adr/ADR-7_wsgi-server/#considered-options","title":"Considered options","text":""},{"location":"adr/ADR-7_wsgi-server/#choice-1-wsgi-server","title":"Choice 1: WSGI server","text":"

There are plenty of WSGI servers around, and the question poses itself which one to pick. Flask itself has a list of servers, and there are comparisons around, for example here and here. The choice of which WSGI servers to test was made somewhat arbitrarily among the better performing ones in those posts.

Contenders were Bjoern, Cheroot, Flask, Gunicorn and uWSGI. Bjoern was immediately dropped, since it worked only with Python 2. Later, during testing, Bjoern did support Python 3 but not TLS, so we stuck with dropping it. Gunicorn was tested for a bit, but since it delivered worse results than the others and requires a writable worker-tmp-dir directory, it was also dropped from contention.

The remaining three were tested over a rather long period of development, i.e. from before the first bit of validation parallelization to after the 2.0 release. All tests were run on local minikube/kind clusters with rather constrained resources, in the expectation that this would still provide reasonable insight into the servers' behavior on regular production clusters.

"},{"location":"adr/ADR-7_wsgi-server/#test-results","title":"Test results","text":"

Since the results span a longer timeframe and the tests were at first performed to find some way to distinguish the servers rather than following a clear plan, some tests feature a different configuration. If not specified otherwise, Cheroot was run with its default configuration (minimum number of threads 10, no maximum limit), Flask in its default configuration and uWSGI with 2 processes and 1 thread (low because it already has a bigger footprint when idle to begin with). Connaisseur itself was configured with its default of 3 pods.

"},{"location":"adr/ADR-7_wsgi-server/#integration-test","title":"Integration test","text":""},{"location":"adr/ADR-7_wsgi-server/#before-parallelization","title":"Before parallelization","text":"

Before parallelization was ever implemented, tests were performed by running the integration test on the cluster and seeing how often it failed.

The error rate across 50 executions was 8% (4/50) for Cheroot, 22% (11/50) for Flask and 12% (6/50) for uWSGI. These error rates were likely this high because the non-parallelized fetching of notary trust data regularly took around 25 seconds against a maximum timeout of 30 seconds.

"},{"location":"adr/ADR-7_wsgi-server/#with-simple-parallelization","title":"With simple parallelization","text":"

After parallelization (of fetching base trust data) was added, the tests were rerun. This time all 50 checks for all servers were run together, with a randomized order of servers for each of the 50 test runs.

Error rates were 4% (2/50) for Cheroot and 6% (3/50) for uWSGI. Flask was not tested.

"},{"location":"adr/ADR-7_wsgi-server/#stress-tests","title":"Stress tests","text":""},{"location":"adr/ADR-7_wsgi-server/#complex-requests","title":"Complex requests","text":"

There was a test setup with complex individual requests containing multiple different initContainers and containers or many instantiations of a particular image.

The test was performed using kubectl apply -f loadtest.yaml on the below file.

loadtest.yaml
\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: redis-with-many-instances\n  labels:\n    app: redis\n    loadtest: loadtest\nspec:\n  selector:\n    matchLabels:\n      app: redis\n  replicas: 1000\n  template:\n    metadata:\n      labels:\n        app: redis\n    spec:\n      containers:\n      - name: redis\n        image: redis\n\n---\n\napiVersion: v1\nkind: Pod\nmetadata:\n  name: pod-with-many-containers\n  labels:\n    loadtest: loadtest\nspec:\n  containers:\n  - name: container1\n    image: busybox\n    command: ['sh', '-c', 'sleep 3600']\n  - name: container2\n    image: redis\n  - name: container3\n    image: node\n  - name: container4\n    image: nginx\n  - name: container5\n    image: rabbitmq\n  - name: container6\n    image: elasticsearch\n  - name: container7\n    image: sonarqube\n\n---\n\napiVersion: v1\nkind: Pod\nmetadata:\n  name: pod-with-many-containers-and-init-containers\n  labels:\n    loadtest: loadtest\nspec:\n  containers:\n  - name: container1\n    image: busybox\n    command: ['sh', '-c', 'sleep 3600']\n  - name: container2\n    image: redis\n  - name: container3\n    image: node\n  - name: container4\n    image: nginx\n  - name: container5\n    image: rabbitmq\n  - name: container6\n    image: elasticsearch\n  - name: container7\n    image: sonarqube\n  initContainers:\n  - name: init2\n    image: maven\n  - name: init3\n    image: vault\n  - name: init4\n    image: postgres\n\n---\n\napiVersion: v1\nkind: Pod\nmetadata:\n  name: pod-with-some-containers-and-init-containers\n  labels:\n    loadtest: loadtest\nspec:\n  containers:\n  - name: container1\n    image: busybox\n    command: ['sh', '-c', 'sleep 3600']\n  - name: container2\n    image: redis\n  - name: container3\n    image: node\n  - name: container4\n    image: nginx\n  initContainers:\n  - name: container5\n    image: rabbitmq\n  - name: container6\n    image: elasticsearch\n  - name: container7\n    image: sonarqube\n\n---\n\napiVersion: v1\nkind: Pod\nmetadata:\n  name: pod-with-coinciding-containers-and-init-containers\n  labels:\n    loadtest: loadtest\nspec:\n  containers:\n  - name: container1\n    image: busybox\n    command: ['sh', '-c', 'sleep 3600']\n  - name: container2\n    image: redis\n  - name: container3\n    image: node\n  initContainers:\n  - name: init1\n    image: busybox\n    command: ['sh', '-c', 'sleep 3600']\n  - name: init2\n    image: redis\n  - name: init3\n    image: node\n

None of the servers regularly managed to pass this particular loadtest. However, the pods powered by the Flask server regularly died and had to be restarted, whereas both Cheroot and uWSGI had nearly no restarts and never on all instances at once. uWSGI even managed to pass the test on occasion.

"},{"location":"adr/ADR-7_wsgi-server/#less-complex-requests-with-some-load","title":"Less complex requests with some load","text":"

Since in the above the most complex request was the bottleneck, we tried an instance of the test with less complexity in the individual requests but more requests instead. However, that led to no real distinguishing behaviour across the servers.

"},{"location":"adr/ADR-7_wsgi-server/#load-test","title":"Load test","text":"

To check the servers' behaviour when hit with lots of (easy) requests at the same time, we also implemented an actual load test. We ran parallel --jobs 20 ./testn.sh {1} :::: <(seq 200) and parallel --jobs 50 ./testn.sh {1} :::: <(seq 200) with the below files.

File contents testn.sh
\nnr=$1\n\ntmpf=$(mktemp)\nfilec=$(nr=${nr} envsubst ${tmpf})\n\nkubectl apply -f ${tmpf}\n\nloadtest3.yaml\n
\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: redis-${nr}\n  labels:\n    app: redis\n    loadtest: loadtest\nspec:\n  selector:\n    matchLabels:\n      app: redis\n  replicas: 1\n  template:\n    metadata:\n      labels:\n        app: redis\n    spec:\n      containers:\n      - name: redis\n        image: redis\n
\n\n\n

Afterwards, we checked how many of the pods were actually created.

\n\n\n\nServer\nCreated pods (parallel 20 jobs)\nCreated pods (parallel 50 jobs)\n\n\n\n\nCheroot\n173\n78\n\n\nCheroot (numthreads=40)\n-\n81\n\n\nFlask\n173\n81\n\n\nuWSGI\n49\n-\n\n\nuWSGI (1 process, 10 threads)\n164\n35\n\n\nuWSGI (4 processes, 10 threads)\n146\n135\n\n\nuWSGI (1 process, 40 threads)\n164\n112\n\n\n\n

Interestingly, Flask (narrowly) performs best in this test under strong load (though not under massive load), and for both Cheroot and uWSGI adding further parallelization doesn't necessarily improve stability, even though intuitively it should. For 50 jobs in parallel, the low creation rate is due to the pods dying at some point during the barrage.

\n

Resource consumption measured via kubectl top pods -n connaisseur during the loadtest:

\n

Shown is a representative sample from across multiple invocations, only at 20 jobs, since for 50 jobs the pods most often died and the metrics API is slow to give accurate information after a restart.

\n

Cheroot\n

NAME                                      CPU(cores)   MEMORY(bytes)\nconnaisseur-deployment-644458d686-2tfjp   331m         46Mi\nconnaisseur-deployment-644458d686-kfzdq   209m         44Mi\nconnaisseur-deployment-644458d686-t57lp   321m         53Mi\n

\n

Flask\n

NAME                                      CPU(cores)   MEMORY(bytes)\nconnaisseur-deployment-644458d686-t6c24   381m         42Mi\nconnaisseur-deployment-644458d686-thgzd   328m         42Mi\nconnaisseur-deployment-644458d686-wcprp   235m         38Mi\n

\n

uWSGI (1 process, 10 threads)\n

NAME                                     CPU(cores)   MEMORY(bytes)\nconnaisseur-deployment-d86fbfcd8-9c5m7   129m         63Mi\nconnaisseur-deployment-d86fbfcd8-hv6sp   309m         67Mi\nconnaisseur-deployment-d86fbfcd8-w46dz   298m         67Mi\n

"},{"location":"adr/ADR-7_wsgi-server/#option-11-flask","title":"Option 1.1: Flask","text":"

Staying with the Flask server is obviously an option. It doesn't resolve the problem, but it did us good service and there are no known problems with its usage in practice.

\n

However, the authors discourage using it:

\n\n

When running publicly rather than in development, you should not use the built-in development server (flask run). The development server is provided by Werkzeug for convenience, but is not designed to be particularly efficient, stable, or secure. source

\n\n

and it performs worst by far for complex requests.

"},{"location":"adr/ADR-7_wsgi-server/#option-12-cheroot","title":"Option 1.2: Cheroot","text":"

Cheroot performs better than Flask for complex requests and better than uWSGI when under strong load. However, when under massive load, even increasing its minimum number of threads doesn't really add a lot to its stability.

\n

In addition, it seems to be less known and not among the servers that the Flask project lists. On the other hand, its memory footprint is better than uWSGI's and almost on par with Flask's, whereas its CPU footprint is on par with uWSGI and slightly better than the one of Flask.

"},{"location":"adr/ADR-7_wsgi-server/#option-12-uwsgi","title":"Option 1.2: uWSGI","text":"

uWSGI (narrowly) has the best showing for complex requests, but performs worst under strong load. However, scaling its resources allows uWSGI to significantly outperform the other options under very massive load.

\n

Its memory footprint is higher than for Cheroot and Flask, but its CPU footprint is on par with Cheroot and slightly better than Flask's.

"},{"location":"adr/ADR-7_wsgi-server/#decision","title":"Decision","text":"

We chose option 1.2 and will for now go forward with Cheroot as the WSGI server. The decision was based on the server performing best in the relevant parts of the stress and load tests.

"},{"location":"adr/ADR-8_go-transition/","title":"ADR 8: Transition to Golang","text":""},{"location":"adr/ADR-8_go-transition/#status","title":"Status","text":"

Accepted

"},{"location":"adr/ADR-8_go-transition/#context","title":"Context","text":"

Connaisseur was originally written in Python, mostly because of a language preference in the team. This was completely fine and worked for many years, but over time it became more apparent that other programming languages might be better suited for the task, namely Golang. The main reasons for this are:

  • The signature schemes (Cosign, Notary, Notation) are written in Golang, which means that they can be used directly in that language. For Python, this had to be circumvented by either using a compiled version of the schemes as binaries, which bloat the image size and are not as easy to use, or writing our own implementation in Python. Switching to Golang allows for better and probably faster integration of the schemes, giving a broader choice of signature providers to the community.
  • The resulting Connaisseur container will be more secure, as Golang is a compiled language, which means that the resulting binary can be run without any interpreter. This has implications for the use of base images, as Golang can use scratch images, which are more secure than the Python equivalent that has to bring an OS and runtime.
  • Golang is THE Kubernetes language. Most of the tools in the ecosystem are written in Golang, thus the broader community is a lot more familiar with it. This will make it easier for people to contribute to Connaisseur.

This ADR discusses whether a transition to Golang is worth the effort and how it would play out.

"},{"location":"adr/ADR-8_go-transition/#considered-options","title":"Considered Options","text":""},{"location":"adr/ADR-8_go-transition/#option-1-stay-with-python","title":"Option 1: Stay with Python","text":"

No transition will be made. The Python code base is kept and continuously developed. Resources can be spent on improving the existing code base and adding new features. Adding new signature schemes will be more difficult, as they either have to be implemented in Python, or other workarounds have to be found.

"},{"location":"adr/ADR-8_go-transition/#option-2-transition-to-golang","title":"Option 2: Transition to Golang","text":"

The Python code base is abandoned and a new code base is written in Golang. This will allow for easier integration of new signature schemes and a more secure container image. It will also open up the project to the Kubernetes/Golang community, while shutting down the Python one. The transition will require a lot of work and will take some time.

We transition to Golang, which will require an entirely new code base \ud83d\ude25 This comes with all the benefits mentioned above, but also with a lot of work. Additionally, the team's knowledge of the language is rather limited at this time.

There were some efforts by @phbelitz to transition to Golang, of which the following parts are still missing (compared to the Python version):

  • Rekor support for Cosign
  • Unit tests for Notary validator
  • Integration tests
  • CICD
  • Documentation

Also, none of the Golang code has yet been reviewed by a second pair of eyes.

"},{"location":"adr/ADR-8_go-transition/#decision-outcome","title":"Decision Outcome","text":"

We develop a Golang version in parallel to continued support of the Python version. The Golang version should not be a breaking change to ensure we can use existing tests to keep confidence in the new version. Once the Golang version is developed, we switch it with the Python version in a feature release.

"},{"location":"adr/ADR-9_multi-pod/","title":"ADR 9: Multi Pod Architecture","text":""},{"location":"adr/ADR-9_multi-pod/#status","title":"Status","text":"

Undecided

"},{"location":"adr/ADR-9_multi-pod/#context","title":"Context","text":"

The core functionality of Connaisseur has always been centered around a standalone pod in which a web server runs and where all validation takes place. There can be multiple Connaisseur pods, which serves the purpose of redundancy, so that Connaisseur is always available and load can be better balanced. Only recently, with the addition of a caching mechanism using an external Redis store, an additional pod was introduced to the core Connaisseur deployment.

The idea of this ADR is to discuss further distribution of functionalities into separate modules, away from the centralized standalone pod approach.

"},{"location":"adr/ADR-9_multi-pod/#considered-options","title":"Considered Options","text":""},{"location":"adr/ADR-9_multi-pod/#option-1-validator-pods","title":"Option 1: Validator Pods","text":""},{"location":"adr/ADR-9_multi-pod/#architecture-idea","title":"Architecture Idea","text":"

The different types of supported validators are split into their own pods, with a centralized management service that coordinates incoming requests to the right validator pods. The validator pods of the same type have their own service, so that multiple pods of the same validator can be run, in case of high load.

The management service will take over the following functionalities:

  • Read Connaisseur config
  • Run web server
  • Accept and parse admission requests
  • Image caching
  • Send image validation requests to corresponding validator service
  • Generate and send back admission response
  • Metrics
  • Send alerts

The validator pods/service will take over the following functionalities:

  • Run web server
  • Specific image validation
  • Metrics
"},{"location":"adr/ADR-9_multi-pod/#advantages","title":"Advantages","text":"
  • Users only deploy modules they actually need -> smaller footprint
  • Embraces the Kubernetes microservice architecture
  • Issues in single modules do not affect the others
  • No longer bound to a single language
  • Public interface would allow proprietary or community-maintained validators
"},{"location":"adr/ADR-9_multi-pod/#disadvantages","title":"Disadvantages","text":"
  • Maintenance of multiple images (management-image, Notary-image, Cosign-image, etc.)
  • (Potentially maintenance of multiple charts or subcharts)
  • More complexity
    • TLS between management service and validators (\ud83d\udca1: Redis solution can be reused)
    • Upgrade edge cases (how to validate a validator image, if corresponding validator does not exist?)
"},{"location":"adr/ADR-9_multi-pod/#option-2-alerting-pods","title":"Option 2: Alerting Pods","text":"

The alerting functionality is split from the main Connaisseur service into its own service. The management service will contact the alerting service should alerts need to be sent out. The alerting service will take over the following functionalities:

  • Run web server
  • Sending alerts
  • Metrics

Similar advantages and disadvantages apply as for option 1.

"},{"location":"adr/ADR-9_multi-pod/#option-3-single-pod","title":"Option 3: Single Pod","text":"

Everything stays as is. One pod for web server+validation and one pod for caching.

"},{"location":"adr/ADR-9_multi-pod/#decision-outcome","title":"Decision Outcome","text":"
  • We like the idea, but it's a huge change with significant attached effort
  • Let's consider a PoC
"},{"location":"features/","title":"Overview","text":"

Besides Connaisseur's central functionality, several additional features are available, such as:

  • Alerting: send alerts based on verification result
  • Detection Mode: warn but do not block invalid images
  • Namespaced Validation: restrict validation to dedicated namespaces
  • Resource Validation Mode: warn but do not block invalid images of particular resource types
  • Caching: boost performance via caching of image digests

In combination, these features help to improve usability and might better support the DevOps workflow. Switching Connaisseur to detection mode and alerting on non-compliant images can for example avoid service interruptions while still benefitting from improved supply-chain security.

Feel free to propose new features that would make Connaisseur an even better experience

"},{"location":"features/alerting/","title":"Alerting","text":"

Connaisseur can send notifications on admission decisions to basically every REST endpoint that accepts JSON payloads.

"},{"location":"features/alerting/#supported-interfaces","title":"Supported interfaces","text":"

Slack, Opsgenie, Keybase and Microsoft Teams have pre-configured payloads that are ready to use. Additionally, there is a template matching the Elastic Common Schema in version 1.12. You can also use the existing payload templates as examples of how to model your own custom one. It is also possible to configure multiple interfaces for receiving alerts at the same time.

"},{"location":"features/alerting/#configuration-options","title":"Configuration options","text":"

Currently, Connaisseur supports alerting on either admittance of images, denial of images or both. These event categories can be configured independently of each other under the relevant category (i.e. admitRequest or rejectRequest):

Key Accepted values Default Required Description alerting.clusterIdentifier string \"not specified\" - Cluster identifier used in alert payload to distinguish between alerts from different clusters. alerting.<category>.receivers.[].template opsgenie, slack, keybase, msteams, ecs-1-12-0 or custom* - File in helm/alert_payload_templates/ to be used as alert payload template. alerting.<category>.receivers.[].receiverUrl string - URL of alert-receiving endpoint. alerting.<category>.receivers.[].priority int 3 - Priority of alert (to enable fitting Connaisseur alerts into alerts from other sources). alerting.<category>.receivers.[].customHeaders list[string] - - Additional headers required by alert-receiving endpoint. alerting.<category>.receivers.[].payloadFields subyaml - - Additional (yaml) key-value pairs to be appended to alert payload (as json). alerting.<category>.receivers.[].failIfAlertSendingFails bool false - Whether to make Connaisseur deny images if the corresponding alert cannot be successfully sent.

*basename of the custom template file in helm/alerting_payload_templates without file extension

Notes:

  • The value for template needs to match an existing file of the pattern charts/connaisseur/alert_payload_templates/<template>.json; so if you want to use a predefined one it needs to be one of slack, keybase, opsgenie, msteams or ecs-1-12-0.
  • For Opsgenie you need to configure an additional [\"Authorization: GenieKey <Your-Genie-Key>\"] header.
  • For Elastic Common Schema 1.12.0 output, the receiverUrl has to be an HTTP/S log ingester, such as Fluentd HTTP input or Logstash HTTP input. Also customHeaders needs to be set to [\"Content-Type: application/json\"] for Fluentd HTTP endpoints.
  • failIfAlertSendingFails only comes into play for requests that Connaisseur would have admitted, as other requests would have been denied in the first place. The setting can come in handy if you want to run Connaisseur in detection mode but still make sure that you get notified about what is going on in your cluster. However, this setting will significantly impact cluster interaction for everyone (i.e. block any cluster change associated with an image) if the alert sending fails permanently, e.g. accidental deletion of your Slack Webhook App, GenieKey expired...
"},{"location":"features/alerting/#example","title":"Example","text":"

For example, if you would like to receive notifications in Keybase whenever Connaisseur admits a request to your cluster, your alerting configuration would look similar to the following snippet:

charts/connaisseur/values.yaml
alerting:\n  admitRequest:\n    receivers:\n      - template: keybase\n        receiverUrl: https://bots.keybase.io/webhookbot/<Your-Keybase-Hook-Token>\n
"},{"location":"features/alerting/#additional-notes","title":"Additional notes","text":""},{"location":"features/alerting/#creating-a-custom-template","title":"Creating a custom template","text":"

Along the lines of the templates that already exist, you can easily define custom templates for other endpoints. The following variables can be rendered into the payload at runtime:

  • alert_message
  • priority
  • connaisseur_pod_id
  • namespace
  • cluster
  • timestamp
  • request_id
  • images

Referring to any of these variables in the templates works via Jinja2 notation (e.g. {{ timestamp }}). You can extend your payload dynamically by adding payload fields in yaml representation under the payloadFields key, which will be translated to JSON by Helm as is. If your REST endpoint requires particular headers, you can specify them as described above in customHeaders.
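For instance, a minimal custom template, saved as charts/connaisseur/alert_payload_templates/custom.json and referenced via template: custom, could look like this sketch (the field names depend entirely on your receiving endpoint):

{\n  \"text\": \"{{ alert_message }}\",\n  \"cluster\": \"{{ cluster }}\",\n  \"namespace\": \"{{ namespace }}\",\n  \"images\": \"{{ images }}\",\n  \"priority\": {{ priority }},\n  \"timestamp\": \"{{ timestamp }}\"\n}\n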

Payload fields in action

With payload fields, you can extend the same template depending on the receiver. For example, below Connaisseur's default Opsgenie template is used to send alerts that will be assigned to different users depending on whether the alert is for a successful admission or not. The payloadFields entries will be transformed to their JSON equivalents and overwrite the respective entries of the template.

charts/connaisseur/values.yaml
alerting:\n  admitRequest:\n    receivers:\n      - template: opsgenie\n        receiverUrl: https://api.eu.opsgenie.com/v2/alerts\n        priority: 4\n        customHeaders: [\"Authorization: GenieKey ABC-DEF\"]\n        payloadFields:\n          responders:\n            - username: \"someone@testcompany.de\"\n              type: user\n  rejectRequest:\n    receivers:\n      - template: opsgenie\n        receiverUrl: https://api.eu.opsgenie.com/v2/alerts\n        priority: 4\n        customHeaders: [\"Authorization: GenieKey ABC-DEF\"]\n        payloadFields:\n          responders:\n            - username: \"cert@testcompany.de\"\n              type: user\n

The resulting payload sent to the webhook endpoint will then contain the field content:

{\n    ...\n    \"responders\": [{\"type\": \"user\", \"username\": \"cert@testcompany.de\"}],\n    ...\n}\n

Feel free to make a PR to share with the community if you add new neat templates for other third parties

"},{"location":"features/automatic_child_approval/","title":"Automatic Child Approval","text":"

By default, Connaisseur uses automatic child approval, by which the child of a Kubernetes resource is automatically admitted without re-verification of the signature, in order to avoid duplicate validation and handle inconsistencies with the image policy. This behavior can be configured or even disabled.

When automatic child approval is enabled, images that are deployed as part of already deployed objects (e.g. a Pod deployed as a child of a Deployment) are already validated and potentially mutated during admission of the parent. In consequence, the images of child resources are directly admitted without re-verification of the signature. This is done because the parent (and thus the child) has already been validated and might have been mutated; re-validating would lead to duplicate validation and could cause an image policy pattern mismatch. For example, consider a Deployment which contains Pods with image:tag that gets mutated to contain Pods with image@sha256:digest. Then a) the Pod would not need another validation, as the image was validated during the admittance of the Deployment, and b) if there exists a specific rule with pattern image:tag and another, less specific rule with image*, then after mutating the Deployment the Pod would be falsely validated against image* instead of image:tag. To ensure the child resource is legitimate in this case, the parent resource is requested via the Kubernetes API and only those images it lists are accepted.
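To make the pattern mismatch concrete, a sketch of such an image policy could look as follows (patterns and validator names are illustrative):

policy:\n- pattern: \"image*\"\n  validator: \"deny\"\n- pattern: \"image:tag\"\n  validator: \"myvalidator\"\n

Here, the Deployment's image:tag matches the second, more specific rule, while its mutated Pod with image@sha256:digest would only match image* and thus be handled by a different validator if it were re-validated.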

When automatic child approval is disabled, Connaisseur only validates and potentially mutates Pod resources.

There are trade-offs between the two behaviors: With automatic child approval, Connaisseur only verifies that the image reference in a child resource is the same as in the parent. This means that resources deployed prior to Connaisseur will never be validated until they are re-deployed, even if a corresponding Pod is restarted. Consequently, a restarting Pod with an expired signature would still be admitted. However, this avoids unexpected failures when restarting Pods, avoids inconsistencies with the image policy and reduces the number of validations and thus the load. Furthermore, disabling automatic child approval also means that deployments with invalid images will be successful even though the Pods are denied.

The extension of the feature (disabling, caching) is currently under development to improve security without compromising on usability.

"},{"location":"features/automatic_child_approval/#configuration-options","title":"Configuration options","text":"

automaticChildApproval in charts/connaisseur/values.yaml under application.features supports the following values:

Key Default Required Description automaticChildApproval true - true or false; when false, Connaisseur will disable automatic child approval"},{"location":"features/automatic_child_approval/#example","title":"Example","text":"

In charts/connaisseur/values.yaml:

application:\n  features:\n    automaticChildApproval: true\n
"},{"location":"features/automatic_child_approval/#additional-notes","title":"Additional notes","text":""},{"location":"features/automatic_child_approval/#caching","title":"Caching","text":"

Connaisseur implements a caching mechanism, which allows bypassing verification for images that were already admitted recently. One might think that this obviates the need for automatic child approval. However, since an image may be mutated during verification, i.e. a tag being replaced with a digest, the child resource image to be validated could differ from the original one and could then be governed by a different policy pattern that explicitly denies the specific digest. In that case, caching would change the outcome if we cached the validation result for both the original and the mutated image. As such, caching cannot replace automatic child approval with regard to skipping validations, even though they both admit workload objects with images that were \"already admitted\".

"},{"location":"features/automatic_child_approval/#pod-only-validation","title":"Pod-only validation","text":"

If the resource validation mode is set to only validate Pods while automatic child approval is enabled, the combination becomes an allow-all validator with regard to all workloads except for individual Pods. As this is unlikely to be desired, Connaisseur behaves as if automatic child approval were disabled whenever it is enabled in conjunction with a Pod-only resource validation mode.

"},{"location":"features/automatic_unchanged_approval/","title":"Automatic Unchanged Approval","text":"

With the automatic unchanged approval feature enabled, Connaisseur automatically approves any resource that is updated and doesn't change its image references. This is especially useful when handling long-living resources with potentially out-of-sync signature data that still need to be scaled up and down.

An example: When dealing with a deployment that has an image reference image:tag, this reference is updated by Connaisseur during signature validation to image@sha256:123..., to ensure the correct image is used by the deployment. When scaling the deployment up or down, the image reference image@sha256:123... is presented to Connaisseur, due to the updated definition. Over time the signature of the original image:tag may change and a new \"correct\" image is available at image@sha256:456.... If the deployment is afterwards scaled up or down, Connaisseur will try to validate the image reference image@sha256:123... by looking for it inside the signature data it receives. Unfortunately, this reference may no longer be present due to signature updates, and thus the whole scaling operation will be denied.

With automatic unchanged approval enabled, this is no longer the case. The validation of image@sha256:123... will be skipped, as no different image is used.

This obviously has security implications, since it's no longer guaranteed that updated resources have fresh and up-to-date signatures, so use it with caution. For that reason, the feature is disabled by default. The creation of resources, on the other hand, remains unchanged and will enforce validation.

"},{"location":"features/automatic_unchanged_approval/#configuration-options","title":"Configuration options","text":"

automaticUnchangedApproval in charts/connaisseur/values.yaml under application.features supports the following values:

Key Default Required Description automaticUnchangedApproval false - true or false ; when true, Connaisseur will enable automatic unchanged approval"},{"location":"features/automatic_unchanged_approval/#example","title":"Example","text":"

In charts/connaisseur/values.yaml:

application:\n  features:\n    automaticUnchangedApproval: true\n
"},{"location":"features/caching/","title":"Caching","text":"

Connaisseur utilizes Redis as a cache. For each image reference the resolved digest or validation error is cached. This drastically boosts the performance of Connaisseur compared to older non-caching variants. The expiration for keys in the cache defaults to 30 seconds, but can be tweaked. If set to 0, no caching will be performed and the cache will not be deployed as part of Connaisseur.

"},{"location":"features/caching/#configuration-options","title":"Configuration options","text":"

cache in charts/connaisseur/values.yaml under application.features supports the following configuration:

Key Default Required Description expirySeconds 30 - Number of seconds for which validation results are cached. If set to 0, the Connaisseur deployment will omit the caching infrastructure in its entirety. cacheErrors true - Whether validation failures are cached. If set to false, Connaisseur will only cache successfully validated image digests instead of also caching errors."},{"location":"features/caching/#example","title":"Example","text":"

In charts/connaisseur/values.yaml:

application:\n  features:\n    cache:\n      expirySeconds: 15\n      cacheErrors: false\n
"},{"location":"features/detection_mode/","title":"Detection Mode","text":"

A detection mode is available in order to avoid interruptions of a running cluster, to support initial rollout or for testing purposes. In detection mode, Connaisseur admits all images to the cluster, but issues a warning1 and logs an error message for images that do not comply with the policy or in case of other unexpected failures:

kubectl run unsigned --image=docker.io/securesystemsengineering/testimage:unsigned\n> Warning: Unable to find signed digest for image docker.io/securesystemsengineering/testimage:unsigned. (not denied due to DETECTION_MODE)\n> pod/unsigned created\n

To activate the detection mode, set the detectionMode flag to true in charts/connaisseur/values.yaml.

"},{"location":"features/detection_mode/#configuration-options","title":"Configuration options","text":"

detectionMode in charts/connaisseur/values.yaml under application.features supports the following values:

Key Default Required Description detectionMode false - true or false; when detection mode is enabled, Connaisseur will warn but not deny requests with untrusted images."},{"location":"features/detection_mode/#example","title":"Example","text":"charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  features:\n    detectionMode: true\n
"},{"location":"features/detection_mode/#additional-notes","title":"Additional notes","text":""},{"location":"features/detection_mode/#failure-policy-vs-detection-mode","title":"Failure policy vs. detection mode","text":"

The detection mode is not to be confused with the failure policy (kubernetes.webhook.failurePolicy in charts/connaisseur/values.yaml) for the mutating admission controller: In detection mode, the Connaisseur service admits all requests to the cluster independent of the validation result, while the failure policy only takes effect when the service itself becomes unavailable. As such, both options are disjoint. While in the default configuration requests will be denied if either no valid image signature exists or the Connaisseur service is unavailable, setting failurePolicy to Ignore and detectionMode to true ensures that Connaisseur never blocks a request.
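A sketch of the values.yaml combination described above that ensures Connaisseur never blocks a request:

kubernetes:\n  webhook:\n    failurePolicy: Ignore\napplication:\n  features:\n    detectionMode: true\n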

  1. The feature to send warnings to API clients as shown above was only introduced in Kubernetes v1.19. However, warnings are only surfaced by kubectl in stderr to improve usability. Except for testing purposes, the respective error messages should either be handled via the cluster's log monitoring solution or by making use of Connaisseur's alerting feature.\u00a0\u21a9

"},{"location":"features/metrics/","title":"Metrics","text":"

Connaisseur exposes metrics about usage of the /mutate endpoint and general information about the python process using Prometheus Flask Exporter through the /metrics endpoint.

This for example allows visualizing the number of allowed or denied resource requests.

"},{"location":"features/metrics/#example","title":"Example","text":"
# HELP python_gc_objects_collected_total Objects collected during gc\n# TYPE python_gc_objects_collected_total counter\npython_gc_objects_collected_total{generation=\"0\"} 4422.0\npython_gc_objects_collected_total{generation=\"1\"} 1866.0\npython_gc_objects_collected_total{generation=\"2\"} 0.0\n# HELP python_gc_objects_uncollectable_total Uncollectable object found during GC\n# TYPE python_gc_objects_uncollectable_total counter\npython_gc_objects_uncollectable_total{generation=\"0\"} 0.0\npython_gc_objects_uncollectable_total{generation=\"1\"} 0.0\npython_gc_objects_uncollectable_total{generation=\"2\"} 0.0\n# HELP python_gc_collections_total Number of times this generation was collected\n# TYPE python_gc_collections_total counter\npython_gc_collections_total{generation=\"0\"} 163.0\npython_gc_collections_total{generation=\"1\"} 14.0\npython_gc_collections_total{generation=\"2\"} 1.0\n# HELP python_info Python platform information\n# TYPE python_info gauge\npython_info{implementation=\"CPython\",major=\"3\",minor=\"10\",patchlevel=\"2\",version=\"3.10.2\"} 1.0\n# HELP process_virtual_memory_bytes Virtual memory size in bytes.\n# TYPE process_virtual_memory_bytes gauge\nprocess_virtual_memory_bytes 6.1161472e+07\n# HELP process_resident_memory_bytes Resident memory size in bytes.\n# TYPE process_resident_memory_bytes gauge\nprocess_resident_memory_bytes 4.595712e+07\n# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.\n# TYPE process_start_time_seconds gauge\nprocess_start_time_seconds 1.6436681112e+09\n# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.\n# TYPE process_cpu_seconds_total counter\nprocess_cpu_seconds_total 3.3\n# HELP process_open_fds Number of open file descriptors.\n# TYPE process_open_fds gauge\nprocess_open_fds 12.0\n# HELP process_max_fds Maximum number of open file descriptors.\n# TYPE process_max_fds gauge\nprocess_max_fds 1.048576e+06\n# HELP exporter_info Information about the Prometheus Flask exporter\n# TYPE exporter_info gauge\nexporter_info{version=\"0.18.7\"} 1.0\n# HELP http_request_duration_seconds Flask HTTP request duration in seconds\n# TYPE http_request_duration_seconds histogram\nhttp_request_duration_seconds_bucket{le=\"0.1\",method=\"POST\",path=\"/mutate\",status=\"200\"} 5.0\nhttp_request_duration_seconds_bucket{le=\"0.25\",method=\"POST\",path=\"/mutate\",status=\"200\"} 5.0\nhttp_request_duration_seconds_bucket{le=\"0.5\",method=\"POST\",path=\"/mutate\",status=\"200\"} 5.0\nhttp_request_duration_seconds_bucket{le=\"0.75\",method=\"POST\",path=\"/mutate\",status=\"200\"} 8.0\nhttp_request_duration_seconds_bucket{le=\"1.0\",method=\"POST\",path=\"/mutate\",status=\"200\"} 8.0\nhttp_request_duration_seconds_bucket{le=\"2.5\",method=\"POST\",path=\"/mutate\",status=\"200\"} 9.0\nhttp_request_duration_seconds_bucket{le=\"+Inf\",method=\"POST\",path=\"/mutate\",status=\"200\"} 9.0\nhttp_request_duration_seconds_count{method=\"POST\",path=\"/mutate\",status=\"200\"} 9.0\nhttp_request_duration_seconds_sum{method=\"POST\",path=\"/mutate\",status=\"200\"} 3.6445974350208417\n# HELP http_request_duration_seconds_created Flask HTTP request duration in seconds\n# TYPE http_request_duration_seconds_created gauge\nhttp_request_duration_seconds_created{method=\"POST\",path=\"/mutate\",status=\"200\"} 1.643668194758098e+09\n# HELP http_request_total Total number of HTTP requests\n# TYPE http_request_total counter\nhttp_request_total{method=\"POST\",status=\"200\"} 9.0\n# HELP http_request_created 
Total number of HTTP requests\n# TYPE http_request_created gauge\nhttp_request_created{method=\"POST\",status=\"200\"} 1.6436681947581613e+09\n# HELP http_request_exceptions_total Total number of HTTP requests which resulted in an exception\n# TYPE http_request_exceptions_total counter\n# HELP mutate_requests_total Total number of mutate requests\n# TYPE mutate_requests_total counter\nmutate_requests_total{allowed=\"False\",status_code=\"403\",warnings=\"False\"} 4.0\nmutate_requests_total{allowed=\"True\",status_code=\"202\",warnings=\"False\"} 5.0\n# HELP mutate_requests_created Total number of mutate requests\n# TYPE mutate_requests_created gauge\nmutate_requests_created{allowed=\"False\",status_code=\"403\"} 1.643760946491879e+09\nmutate_requests_created{allowed=\"True\",status_code=\"202\"} 1.6437609592007663e+09\n
"},{"location":"features/namespaced_validation/","title":"Namespaced Validation","text":"

Warning

Enabling namespaced validation allows roles with edit permissions on namespaces to disable validation for those namespaces.

Namespaced validation allows restricting validation to specific namespaces. Connaisseur will only verify trust of images deployed to the configured namespaces. This can greatly support initial rollout by stepwise extending the validated namespaces or excluding specific namespaces for which signatures are unfeasible.

Namespaced validation offers two modes:

  • ignore: ignore all namespaces with label securesystemsengineering.connaisseur/webhook: ignore
  • validate: only validate namespaces with label securesystemsengineering.connaisseur/webhook: validate

The desired namespaces must be labelled accordingly, e.g. via:

# either\nkubectl label namespaces <namespace> securesystemsengineering.connaisseur/webhook=ignore\n# or\nkubectl label namespaces <namespace> securesystemsengineering.connaisseur/webhook=validate\n

Configure namespaced validation via the namespacedValidation key in charts/connaisseur/values.yaml under application.features.

"},{"location":"features/namespaced_validation/#configuration-options","title":"Configuration options","text":"

namespacedValidation in charts/connaisseur/values.yaml supports the following keys:

Key Default Required Description mode - - ignore or validate; configure mode of exclusion to either ignore all namespaces with label securesystemsengineering.connaisseur/webhook set to ignore or only validate namespaces with the label set to validate.

If the namespacedValidation key is not set, all namespaces are validated.

"},{"location":"features/namespaced_validation/#example","title":"Example","text":"

In charts/connaisseur/values.yaml:

application:\n  features:\n    namespacedValidation:\n      mode: validate\n

Labelling target namespace to be validated:

kubectl label namespaces validateme securesystemsengineering.connaisseur/webhook=validate\n
"},{"location":"features/resource_validation_mode/","title":"Resource Validation Mode","text":"

Resource Validation Mode controls the admission behavior of Connaisseur, determining which Kubernetes resource types are blocked (or mutated) based on the validation result.

"},{"location":"features/resource_validation_mode/#configurations-options","title":"Configurations Options","text":"

Resource Validation Mode can take two different values:

  • all: all Kubernetes resources that feature image references, such as Pods, ReplicaSets or CronJobs, will be blocked in case validation fails and mutated if it succeeds
  • podsOnly: only Pods will be blocked in case validation fails or mutated if it succeeds. All other resources won't be blocked or mutated. On failure, a warning will be displayed instead

Configure resource validation mode via the resourceValidationMode key in charts/connaisseur/values.yaml under application.features.

The resourceValidationMode value defaults to all.
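
For illustration, a minimal values.yaml sketch that restricts blocking to Pods (all other configuration is omitted):

charts/connaisseur/values.yaml
application:\n  features:\n    resourceValidationMode: podsOnly\n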

"},{"location":"validators/","title":"Overview","text":"

Connaisseur is built to be extendable and currently aims to support the following signing solutions:

  • Docker Content Trust (DCT) / Notary V1
  • sigstore / Cosign
  • Notary V2 (PLANNED)

Feel free to use any or a combination of all solutions. The integration with Connaisseur is detailed on the following pages. For advantages and disadvantages of each solution, please refer to the respective docs.

"},{"location":"validators/notaryv1/","title":"Notary (V1) / DCT","text":"

Notary (V11) works as an external service holding signatures and trust data of artifacts based on The Update Framework (TUF). Docker Content Trust (DCT) is a client implementation by Docker to manage such trust data for container images like signing images or verifying the corresponding signatures. It is part of the standard Docker CLI (docker) and for example provides the docker trust commands.

Using DCT, the trust data is by default pushed to the Notary server associated with the container registry. However, not every public container registry provides an associated Notary server, so support for DCT must be checked for the provider in question. Docker Hub, for example, runs an associated Notary server (notary.docker.io) and even uses it to serve trust data for the Docker Official Images. In fact, since Connaisseur's pre-built images are shared via the Connaisseur Docker Hub repository, its own trust data is maintained on Docker Hub's Notary server. Besides the public Notary instances, Notary can also be run as a private or even standalone instance. Harbor, for example, comes with an associated Notary instance.

Validating a container image via DCT requires a repository's public root key as well as fetching the repository's trust data from the associated Notary server. While DCT relies on trust on first use (TOFU) for repositories' public root keys, Connaisseur enforces manual pinning to a public root key that must be configured in advance.

"},{"location":"validators/notaryv1/#basic-usage","title":"Basic usage","text":"

In order to validate signatures using Notary, you will either need to create signing keys and signed images yourself or extract the public root key of other images and configure Connaisseur via application.validators[*].trustRoots[*].key in charts/connaisseur/values.yaml to pin trust to those keys. Both are described below. However, there are also step-by-step instructions for using Notary in the getting started guide.

"},{"location":"validators/notaryv1/#creating-signing-key-pairs","title":"Creating signing key pairs","text":"

You can either create the root key manually or push an image with DCT enabled, upon which Docker will guide you through setting up the keys as described in the next section. In order to generate a public-private root key pair manually, you can use:

docker trust key generate root\n

You will be prompted for a password, after which the private key is automatically imported and a root.pub file is created in your current folder that contains your public key. It should look similar to:

-----BEGIN PUBLIC KEY-----\nrole: root\n\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAELDzXwqie/P66R3gVpFDWMhxOyol5\nYWD/KWnAaEIcJVTYUR+21NJSZz0yL7KLGrv50H9kHai5WWVsVykOZNoZYQ==\n-----END PUBLIC KEY-----\n

You will only need the actual base64 encoded part for configuring the application.validators[*].trustRoots[*].key in charts/connaisseur/values.yaml of Connaisseur to validate your images. How to extract the public root key for any image is described below.
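
For illustration, the example root.pub from above would translate into the following trust root entry (the role line and the empty line are dropped, leaving only the PEM header, the base64 content and the PEM footer):

charts/connaisseur/values.yaml
trustRoots:\n- name: default\n  key: |\n    -----BEGIN PUBLIC KEY-----\n    MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAELDzXwqie/P66R3gVpFDWMhxOyol5\n    YWD/KWnAaEIcJVTYUR+21NJSZz0yL7KLGrv50H9kHai5WWVsVykOZNoZYQ==\n    -----END PUBLIC KEY-----\n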

"},{"location":"validators/notaryv1/#creating-signatures","title":"Creating signatures","text":"

Before you can start validating images using the Notary (V1) validator, you'll first need an image that has been signed using DCT. The easiest way to do this is to push an image of your choice (e.g. busybox:stable) to your Docker Hub repository with DCT activated (either set the environment variable DOCKER_CONTENT_TRUST=1 or use the --disable-content-trust=false flag). If you haven't created any signatures for images in the current repository yet, you'll be asked to enter a passphrase for a root key and a targets key, which get generated on your machine. Have a look at the TUF documentation to read more about TUF roles and their meanings. If you already have these keys, just enter the required passphrase.

DOCKER_CONTENT_TRUST=1 docker push <your-repo>/busybox:stable\n
Output
The push refers to repository [<your-repo>/busybox]\n5b8c72934dfc: Pushed\nstable: digest: sha256:dca71257cd2e72840a21f0323234bb2e33fea6d949fa0f21c5102146f583486b size: 527\nSigning and pushing trust metadata\nYou are about to create a new root signing key passphrase. This passphrase\nwill be used to protect the most sensitive key in your signing system. Please\nchoose a long, complex passphrase and be careful to keep the password and the\nkey file itself secure and backed up. It is highly recommended that you use a\npassword manager to generate the passphrase and keep it safe. There will be no\nway to recover this key. You can find the key in your config directory.\nEnter passphrase for new root key with ID 5fb3e1e:\nRepeat passphrase for new root key with ID 5fb3e1e:\nEnter passphrase for new repository key with ID 6c2a04c:\nRepeat passphrase for new repository key with ID 6c2a04c:\nFinished initializing \"<your-repo>/busybox\"\n

The freshly generated keys are directly imported into the Docker client. Private keys reside in ~/.docker/trust/private and public trust data is added to ~/.docker/trust/tuf/. The created signature for your image is pushed to the public Docker Hub Notary server (notary.docker.io). The private keys and password are required whenever a new version of the image is pushed with DCT activated.
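
As the Docker CLI output above suggests, the root key should be backed up and kept safe. A minimal sketch for archiving the local private keys, assuming the default Docker trust directory:

tar czf docker-trust-private-backup.tgz -C ~/.docker/trust private\n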

"},{"location":"validators/notaryv1/#getting-the-public-root-key","title":"Getting the public root key","text":"

Signature validation via Connaisseur requires the public root key to verify against as a trust anchor. But from where do you get this, especially for public images whose signatures you didn't create? We have created the get_root_key utility to extract the public root key of images. To use it, either use our pre-built image or build the docker image yourself via docker build -t get-public-root-key -f docker/Dockerfile.getRoot . and run it on the image to be verified:

# pre-built\ndocker run --rm docker.io/securesystemsengineering/get-public-root-key -i securesystemsengineering/testimage\n# or self-built\ndocker run --rm get-public-root-key -i securesystemsengineering/testimage\n
Output
KeyID: 76d211ff8d2317d78ee597dbc43888599d691dbfd073b8226512f0e9848f2508\nKey: -----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEsx28WV7BsQfnHF1kZmpdCTTLJaWe\nd0CA+JOi8H4REuBaWSZ5zPDe468WuOJ6f71E7WFg3CVEVYHuoZt2UYbN/Q==\n-----END PUBLIC KEY-----\n

The -i (--image) option is required and takes the image for which you want the public key. There is also the -s (--server) option, which defines the Notary server that should be used and defaults to notary.docker.io.
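
For example, to query a self-hosted Notary instance instead of Docker Hub's (the host name is only a placeholder):

docker run --rm docker.io/securesystemsengineering/get-public-root-key -i <your-repo>/busybox -s notary.example.com\n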

The public repository root key resides with the signature data in the Notary instance, so what the get_root_key utility does in the background is just fetching, locating and parsing the public repository root key for the given image.

"},{"location":"validators/notaryv1/#configuring-and-running-connaisseur","title":"Configuring and running Connaisseur","text":"

Now that you either created your own keys and signed images or extracted the public key of other images, you will need to configure Connaisseur to use those keys for validation. This is done via application.validators in charts/connaisseur/values.yaml. The corresponding entry should look similar to the following (using the extracted public key as trust root):

charts/connaisseur/values.yaml
- name: customvalidator\n  type: notaryv1\n  host: notary.docker.io\n  trustRoots:\n  - name: default\n    key: |  # THE DESIRED PUBLIC KEY BELOW\n      -----BEGIN PUBLIC KEY-----\n      MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEOXYta5TgdCwXTCnLU09W5T4M4r9f\n      QQrqJuADP6U7g5r9ICgPSmZuRHP/1AYUfOQW3baveKsT969EfELKj1lfCA==\n      -----END PUBLIC KEY-----\n

You also need to create a corresponding entry in the image policy via application.policy, for example:

charts/connaisseur/values.yaml
- pattern: \"docker.io/<REPOSITORY>/<IMAGE>:*\"  # THE DESIRED REPOSITORY\n  validator: customvalidator\n

After installation, you are ready to verify your images against your public key:

helm install connaisseur helm --atomic --create-namespace --namespace connaisseur\n

Connaisseur now rejects all images from the given repository that have not been signed based on the provided public key. A quick guide for installation and testing is available in getting started. It also provides a full step-by-step guide.

"},{"location":"validators/notaryv1/#understanding-validation","title":"Understanding validation","text":"

Using the simple pre-configuration shipped with Connaisseur, it is possible to test validation by deploying some pods:

kubectl run test-signed --image=docker.io/securesystemsengineering/testimage:signed\n> pod/test-signed created\n\nkubectl run test-unsigned --image=docker.io/securesystemsengineering/testimage:unsigned\n> Error from server: admission webhook \"connaisseur-svc.connaisseur.svc\" denied the request: Unable to find signed digest for image docker.io/securesystemsengineering/testimage:unsigned.\n# or in case of a signature with a different key\n> Error from server: admission webhook \"connaisseur-svc.connaisseur.svc\" denied the request: Failed to verify signature of trust data root.\n

How does Connaisseur validate these requests and convert the images with tags to digests? What happens in the background is that Connaisseur looks up trust data of the image in the root, snapshot, timestamp and targets files (in json format) by querying the API of the Notary server. Trust data syntax is validated against their known schemas and the files' signatures are validated against their respective public keys. The pinned root key is used for the root.json file that in turn contains the other keys which can then be trusted for validation of the remaining trust data (snapshot.json, timestamp.json, targets.json). Furthermore, Connaisseur gathers trust data of potential delegations linked in the targets file which can then be used to enforce delegations.

At this point, Connaisseur is left with a validated set of trust data. Connaisseur filters the trust data for consistent signed digests that actually relate to the image under validation. In case exactly one trusted digest remains, Connaisseur modifies the admission request and admits it. Otherwise, admission is rejected.

While it is obvious to reject an image that does not exhibit a trusted digest, there is the special case of multiple trusted digests. This only occurs in some edge cases, but at this point Connaisseur cannot identify the right digest anymore and consequently has to reject.

For more information on TUF roles, please refer to TUF's documentation or check out this introductory presentation on how the trust data formats work and are validated by Connaisseur.

"},{"location":"validators/notaryv1/#configuration-options","title":"Configuration options","text":"

.application.validators[*] in charts/connaisseur/values.yaml supports the following keys for Notary (V1) (refer to basics for more information on default keys):

Key Default Required Description name - See basics. type - notaryv1; the validator type must be set to notaryv1. host - URL of the Notary instance, in which the signatures reside, e.g. notary.docker.io. trustRoots[*].name - See basics. Setting the name of trust root to \"*\" implements a logical and and enables multiple signature verification under any trust root in the validator. trustRoots[*].key - See basics. TUF public root key. auth - - Authentication credentials for the Notary server in case the trust data is not public. auth.secretName - - (Preferred over username + password combination.) Name of a Kubernetes secret that must exist in Connaisseur namespace beforehand. Create a file secret.yaml containing: username: <user> password: <password> Run kubectl create secret generic <kube-secret-name> --from-file secret.yaml -n connaisseur to create the secret. auth.username - - Username to authenticate with2. auth.password - - Password or access token to authenticate with2. cert - - Self-signed certificate of the Notary instance, if used. Certificate must be supplied in .pem format.
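
For instance, a validator for a private Notary instance that uses such a pre-created Kubernetes secret and a self-signed certificate might look as follows (validator name, host and secret name are placeholders, and the key material is elided):

charts/connaisseur/values.yaml
- name: privatenotary\n  type: notaryv1\n  host: notary.example.com\n  auth:\n    secretName: notary-creds\n  cert: |\n    -----BEGIN CERTIFICATE-----\n    ...\n    -----END CERTIFICATE-----\n  trustRoots:\n  - name: default\n    key: |\n      -----BEGIN PUBLIC KEY-----\n      ...\n      -----END PUBLIC KEY-----\n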

.application.policy[*] in charts/connaisseur/values.yaml supports the following additional keys for Notary (V1) (refer to basics for more information on default keys):

Key Default Required Description with.delegations - - List of delegation names to enforce specific signers to be present. Refer to section on enforcing delegations for more information."},{"location":"validators/notaryv1/#example","title":"Example","text":"charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  validators:\n  - name: docker_essentials\n    type: notaryv1\n    host: notary.docker.io\n    trustRoots:\n    - name: sse\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEvtc/qpHtx7iUUj+rRHR99a8mnGni\n        qiGkmUb9YpWWTS4YwlvwdmMDiGzcsHiDOYz6f88u2hCRF5GUCvyiZAKrsA==\n        -----END PUBLIC KEY-----\n\n  policy:\n  - pattern: \"docker.io/securesystemsengineering/connaisseur:*\"\n    validator: docker_essentials\n    with:\n      key: sse\n      delegations:\n      - belitzphilipp\n      - starkteetje\n
"},{"location":"validators/notaryv1/#additional-notes","title":"Additional notes","text":""},{"location":"validators/notaryv1/#enforcing-delegations","title":"Enforcing delegations","text":"

Notary (V1) offers the functionality to delegate trust. To better understand this feature, it's best to have a basic understanding of the TUF key hierarchy, or more specifically the purpose of the root, targets and delegation keys. If you are more interested in this topic, please read the TUF documentation.

When creating the signatures of your docker images earlier, two keys were generated -- the root key and the targets key. The root key is the root of all trust and will be used whenever a new image repository is created and needs to be signed. It's also used to rotate all other kinds of keys, thus there is usually only one root key present. The targets key is needed for new signatures on one specific image repository, hence every image repository has its own targets key. Hierarchically speaking, the targets keys are below the root key, as the root key can be used to rotate the targets keys should they get compromised.

Delegations go one level deeper: they can be used to sign individual image repositories and only need the targets key (instead of the root key) for rotation purposes. Also, delegation keys are not bound to individual image repositories, so they can be reused across different image repositories. In a sense, they can be understood as keys for individual signers.

To create a delegation key run:

docker trust key generate <key-name>\n> Generating key for <key-name>...\n> Enter passphrase for new <key-name> key with ID 9deed25:\n> Repeat passphrase for new <key-name> key with ID 9deed25:\n> Successfully generated and loaded private key. Corresponding public key available: <current-directory>/<key-name>.pub\n

This delegation key now needs to be added as a signer to a respective image repository, like the busybox example above. In doing so, you'll be asked for the targets key.

docker trust signer add --key <key-name>.pub <key-name> <your-repo>/busybox\n> Adding signer \"<key-name>\" to <your-repo>/busybox...\n> Enter passphrase for repository key with ID b0014f8:\n> Successfully added signer: <key-name> to <your-repo>/busybox\n

If you create a new signature for the image, you'll be asked for your delegation key instead of the targets key, therefore creating a signature using the delegation.

DOCKER_CONTENT_TRUST=1 docker push <your-repo>/busybox:stable\n

Without further configuration, Connaisseur will accept all delegation signatures for an image that can ultimately be validated against the public root key. Connaisseur can enforce a certain signer/delegation (or multiple) for an image's signature via the with.delegations list inside an image policy rule. Simply add the signer's name to the list. You can also add multiple signer names to the list in which case Connaisseur will enforce that all delegations must have signed a matching image.

charts/connaisseur/values.yaml
application:\n  policy:\n  - pattern: \"<your-repo>/busybox:*\"\n    with:\n      delegations:\n      - <key-name>\n      - <other-key-name>\n

The delegation feature can be useful in complex organisations where certain people may be required to sign specific critical images. Another use case is to sign an image with delegation keys in various stages of your CI and enforce that certain checks were passed, e.g. enforcing the signature of your linter, your security scanner and your software license compliance check.

"},{"location":"validators/notaryv1/#using-azure-container-registry","title":"Using Azure Container Registry","text":"

You need to provide credentials of an Azure identity that has at least read access to the ACR (and, thus, to the associated Notary instance). Assuming you have the az CLI installed, you can create a Service Principal for this by running:

# Retrieve the ID of your registry\nREGISTRY_ID=$(az acr show --name <ACR-NAME>  --query 'id' -otsv)\n\n# Create a service principal with the Reader role on your registry\naz ad sp create-for-rbac --name \"<SERVICE-PRINCIPLE-NAME>\" --role Reader --scopes ${REGISTRY_ID}\n

Use the resulting applicationID as auth.username, the resulting password as auth.password and set <ACR>.azurecr.io as host in the charts/connaisseur/values.yaml and you're ready to go!
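
Put together, the corresponding validator entry might look like this (ACR name and credentials are placeholders, and the key material is elided):

charts/connaisseur/values.yaml
- name: acrvalidator\n  type: notaryv1\n  host: <ACR>.azurecr.io\n  auth:\n    username: <applicationID>\n    password: <password>\n  trustRoots:\n  - name: default\n    key: |\n      -----BEGIN PUBLIC KEY-----\n      ...\n      -----END PUBLIC KEY-----\n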

  1. Notary does traditionally not carry the version number. However, in differentiation to the new Notary V2 project we decided to add a careful \"(V1)\" whenever we refer to the original project.\u00a0\u21a9

  2. There is no behavioral difference between configuring a Kubernetes secret or setting the credentials via username or password. In the latter case, a corresponding Kubernetes secret containing these credentials will be created automatically during deployment.\u00a0\u21a9\u21a9

"},{"location":"validators/notaryv2/","title":"Notary V2","text":"

TBD - Notary V2 has not yet been integrated with Connaisseur.

"},{"location":"validators/sigstore_cosign/","title":"sigstore / Cosign","text":"

sigstore is a Linux Foundation project that aims to provide public software signing and transparency to improve open source supply chain security. As part of the sigstore project, Cosign allows seamless container signing, verification and storage. You can read more about it here.

Connaisseur currently supports the elementary function of verifying Cosign-generated signatures based on the following types of keys:

  • Locally-generated key pair
  • KMS (via reference URI or export of the public key)
  • Hardware-based token (export the public key)

We plan to expose further features of Cosign and sigstore in upcoming releases, so stay tuned!

"},{"location":"validators/sigstore_cosign/#basic-usage","title":"Basic usage","text":"

Getting started with Cosign is very well described in the docs. You can download Cosign from its GitHub repository. In short: After installation, a keypair is generated via:

cosign generate-key-pair\n

You will be prompted to set a password, after which a private (cosign.key) and public (cosign.pub) key are created. You can then use Cosign to sign a container image using:

# Here, ${IMAGE} is REPOSITORY/IMAGE_NAME:TAG\ncosign sign --key cosign.key ${IMAGE}\n

The created signature can be verified via:

cosign verify --key cosign.pub ${IMAGE}\n

To use Connaisseur with Cosign, configure a validator in charts/connaisseur/values.yaml with the generated public key (cosign.pub) as a trust root. The entry in .application.validators should look something like this (make sure to add your own public key to trust root default):

charts/connaisseur/values.yaml
- name: customvalidator\n  type: cosign\n  trustRoots:\n  - name: default\n    key: |  # YOUR PUBLIC KEY BELOW\n      -----BEGIN PUBLIC KEY-----\n      MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEvtc/qpHtx7iUUj+rRHR99a8mnGni\n      qiGkmUb9YpWWTS4YwlvwdmMDiGzcsHiDOYz6f88u2hCRF5GUCvyiZAKrsA==\n      -----END PUBLIC KEY-----\n

In .application.policy, add a pattern that matches your own repository, so that its images are validated against your public key:

charts/connaisseur/values.yaml
- pattern: \"docker.io/securesystemsengineering/testimage:co*\"  # YOUR REPOSITORY\n  validator: customvalidator\n

After installation, you are ready to verify your images against your public key:

helm install connaisseur helm --atomic --create-namespace --namespace connaisseur\n

A quick guide for installation and testing is available in getting started. In case you just use the default values for the validator and image policy given above, you are able to successfully validate our signed testimage:

kubectl run signed --image=docker.io/securesystemsengineering/testimage:co-signed\n

And compare this to the unsigned image:

kubectl run unsigned --image=docker.io/securesystemsengineering/testimage:co-unsigned\n

Or signed with a different key:

kubectl run altsigned --image=docker.io/securesystemsengineering/testimage:co-signed-alt\n
"},{"location":"validators/sigstore_cosign/#configuration-options","title":"Configuration options","text":"

.application.validators[*] in charts/connaisseur/values.yaml supports the following keys for Cosign (refer to basics for more information on default keys):

Key Default Required Description name - See basics. type - cosign; the validator type must be set to cosign. trustRoots[*].name - See basics. trustRoots[*].key - if not using keyless See basics. Public key from cosign.pub file or KMS URI. See additional notes below. trustRoots[*].keyless.issuer - if not using a key or issuerRegex The OIDC provider URL which attests the identity. trustRoots[*].keyless.subject - if not using a key or subjectRegex The identity that created the keyless signature. Usually an email address. trustRoots[*].keyless.issuerRegex - if not using a key or issuer Regex for the OIDC provider URL which attests the identity. trustRoots[*].keyless.subjectRegex - if not using a key or subject Regex of the identity that created the keyless signature. Usually an email address. When setting this, make sure you control all subjects that can be matched. The pattern your.name@gmail.* also matches yourXname@gmail.com or your.name@gmail.attacker.com host.rekor rekor.sigstore.dev - Rekor URL to use for validation against the transparency log (default sigstore instance is rekor.sigstore.dev). Setting host enforces a successful transparency log check to pass verification. See additional notes below. host.rekorPubkey Public key of rekor.sigstore.dev - Public key used to verify signature of log entry from Rekor. host.fulcioCert Root and intermediate certificates belonging to fulcio.sigstore.dev - The root certificate belonging to the Fulcio CA which is used to create keyless signatures. host.ctLogPubkey Public key for the certificate transparency log provided by Sigstore - The public key needed for verifying Signed Certificate Timestamps (SCT). This will accept a single key. auth. - - Authentication credentials for registries with restricted access (e.g. private registries or rate limiting). See additional notes below. auth.secretName - - Name of a Kubernetes secret in Connaisseur namespace that contains dockerconfigjson for registry authentication. See additional notes below. auth.useKeychain false - When true, pass --k8s-keychain argument to cosign verify in order to use workload identities for authentication. See additional notes below. cert - - A TLS certificate in PEM format for private registries with self-signed certificates.

.application.policy[*] in charts/connaisseur/values.yaml supports the following additional keys and modifications for sigstore/Cosign (refer to basics for more information on default keys):

Key Default Required Description with.trustRoot - Setting the name of trust root to \"*\" enables verification of multiple trust roots. Refer to section on multi-signature verification for more information. with.threshold - - Minimum number of signatures required in case with.trustRoot is set to \"*\". Refer to section on multi-signature verification for more information. with.required [] - Array of required trust roots referenced by name in case with.trustRoot is set to \"*\". Refer to section on multi-signature verification for more information. with.verifyInTransparencyLog true - Whether to include the verification using the Rekor transparency log in the verification process. Refer to Transparency log verification for more information. with.verifySCT true - Whether to verify the signed certificate timestamps inside the transparency log."},{"location":"validators/sigstore_cosign/#example","title":"Example","text":"charts/connaisseur/values.yaml charts/connaisseur/values.yaml
application:\n  validators:\n  - name: myvalidator\n    type: cosign\n    trustRoots:\n    - name: mykey\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEvtc/qpHtx7iUUj+rRHR99a8mnGni\n        qiGkmUb9YpWWTS4YwlvwdmMDiGzcsHiDOYz6f88u2hCRF5GUCvyiZAKrsA==\n        -----END PUBLIC KEY-----\n\n  policy:\n  - pattern: \"docker.io/securesystemsengineering/testimage:co-*\"\n    validator: myvalidator\n    with:\n      key: mykey\n
"},{"location":"validators/sigstore_cosign/#additional-notes","title":"Additional notes","text":""},{"location":"validators/sigstore_cosign/#authentication","title":"Authentication","text":"

When using a private registry for images and signature data, the credentials need to be provided to Connaisseur. There are two ways to do this.

"},{"location":"validators/sigstore_cosign/#dockerconfigjson","title":"dockerconfigjson","text":"

Create a dockerconfigjson Kubernetes secret in the Connaisseur namespace and pass the secret name to Connaisseur as auth.secretName. The secret can, for example, be created directly from your local config.json (for Docker, this resides in ~/.docker/config.json):

kubectl create secret generic my-secret \\\n  --from-file=.dockerconfigjson=path/to/config.json \\\n  --type=kubernetes.io/dockerconfigjson \\\n  -n connaisseur\n

The secret can also be generated directly from supplied credentials (which may differ from your local config.json), using:

kubectl create secret docker-registry my-secret \\\n  --docker-server=https://index.docker.io/v1/ \\\n  --docker-username='<your username>' \\\n  --docker-password='<your password>' \\\n  -n connaisseur\n

Info

At present, it seems to be necessary to suffix your registry server URL with /v1/. This may become unnecessary in the future.

In the above cases, the secret name in Connaisseur configuration would be secretName: my-secret. It is possible to provide one Kubernetes secret with a config.json for authentication to multiple private registries and referencing this in multiple validators.
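
A validator referencing such a secret might look like this (the validator name is a placeholder and the key material is elided):

charts/connaisseur/values.yaml
- name: privateregistryvalidator\n  type: cosign\n  auth:\n    secretName: my-secret\n  trustRoots:\n  - name: default\n    key: |\n      -----BEGIN PUBLIC KEY-----\n      ...\n      -----END PUBLIC KEY-----\n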

"},{"location":"validators/sigstore_cosign/#k8s-keychain","title":"K8s keychain","text":"

Setting auth.useKeychain: true in the validator configuration passes the --k8s-keychain argument to cosign when performing image validation. Thus, k8schain is used by cosign to pick up ambient registry credentials from the environment and, for example, use workload identities in case of common cloud providers.

For example, when validating against an ECR private repository, the credentials of an IAM user allowed to perform actions ecr:GetAuthorizationToken, ecr:BatchGetImage, and ecr:GetDownloadUrlForLayer could be added to the secret connaisseur-env-secrets:

apiVersion: v1\nkind: Secret\ntype: Opaque\nmetadata:\n  name: connaisseur-env-secrets\n  ...\ndata:\n  AWS_ACCESS_KEY_ID: ***\n  AWS_SECRET_ACCESS_KEY: ***\n  ...\n

If useKeychain is set to true in the validator configuration, cosign will log into ECR at time of validation. See this cosign pull request for more details.
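
A minimal sketch of a validator using the keychain (the validator name is a placeholder and the key material is elided):

charts/connaisseur/values.yaml
- name: ecrvalidator\n  type: cosign\n  auth:\n    useKeychain: true\n  trustRoots:\n  - name: default\n    key: |\n      -----BEGIN PUBLIC KEY-----\n      ...\n      -----END PUBLIC KEY-----\n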

"},{"location":"validators/sigstore_cosign/#kms-support","title":"KMS Support","text":"

Connaisseur supports Cosign's URI-based KMS integration to manage the signing and verification keys. Simply configure the trust root key value as the respective URI. In case of a Kubernetes secret, this would take the following form:

charts/connaisseur/values.yaml
- name: myvalidator\n  type: cosign\n  trustRoots:\n  - name: mykey\n    key: k8s://connaisseur/cosignkeys\n

For that specific case of a Kubernetes secret, make sure to place it in a suitable namespace and grant Connaisseur access to it1.

Most other KMS solutions require credentials for authentication that must be provided via environment variables. Such environment variables can be injected into Connaisseur via deployment.envs in charts/connaisseur/values.yaml, e.g.:

charts/connaisseur/values.yaml
  envs:\n    VAULT_ADDR: myvault.com\n    VAULT_TOKEN: secrettoken\n
"},{"location":"validators/sigstore_cosign/#multi-signature-verification","title":"Multi-signature verification","text":"

Connaisseur can verify multiple signatures for a single image. It is possible to configure a threshold number and a specific set of required valid signatures. This allows implementing several advanced use cases (and policies):

  • Five maintainers of a repository are able to sign a single derived image; however, at least 3 signatures are required for the image to be valid.
  • In a CI pipeline, a container image is signed directly after pushing by the build job and at a later time by passing quality gates such as security scanners or integration tests, each with their own key (trust root). Validation requires all of these signatures for deployment to enforce integrity and quality gates.
  • A mixture of the above use cases, whereby several specific trust roots are enforced (e.g. automation tools) and the overall number of signatures has to surpass a certain threshold (e.g. at least one of the testers approves).
  • Key rotation is possible by adding the new key as an additional trust root and requiring at least one valid signature.

Multi-signature verification is scoped to the trust roots specified within a referenced validator. Consider the following validator configuration:

charts/connaisseur/values.yaml
application:\n  validators:\n  - name: multicosigner\n    type: cosign\n    trustRoots:\n    - name: alice\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEusIAt6EJ3YrTHdg2qkWVS0KuotWQ\n        wHDtyaXlq7Nhj8279+1u/l5pZhXJPW8PnGRRLdO5NbsuM6aT7pOcP100uw==\n        -----END PUBLIC KEY-----\n    - name: bob\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE01DasuXJ4rfzAEXsURSnbq4QzJ6o\n        EJ2amYV/CBKqEhhl8fDESxsmbdqtBiZkDV2C3znIwV16SsJlRRYO+UrrAQ==\n        -----END PUBLIC KEY-----\n    - name: charlie\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEEHBUYJVrH+aFYJPuryEkRyE6m0m4\n        ANj+o/oW5fLRiEiXp0kbhkpLJR1LSwKYiX5Toxe3ePcuYpcWZn8Vqe3+oA==\n        -----END PUBLIC KEY-----\n

The trust roots alice, bob, and charlie are all included for verification in case .application.policy[*].with.trustRoot is set to \"*\" (note that this is a special flag, not a real wildcard):

charts/connaisseur/values.yaml
- pattern: \"*:*\"\n  validator: multicosigner\n  with:\n    trustRoot: \"*\"\n

As neither threshold nor required is specified, Connaisseur will require signatures of all trust roots (alice, bob, and charlie) and deny an image otherwise. If either threshold or required is specified, it takes precedence. For example, it is possible to configure a threshold number of required signatures via the threshold key:

charts/connaisseur/values.yaml
- pattern: \"*:*\"\n  validator: multicosigner\n  with:\n    trustRoot: \"*\"\n    threshold: 2\n

In this case, valid signatures of two or more out of the three trust roots are required for admittance. Using the required key, it is possible to enforce specific trusted roots:

charts/connaisseur/values.yaml
- pattern: \"*:*\"\n  validator: multicosigner\n  with:\n    trustRoot: \"*\"\n    required: [\"alice\", \"bob\"]\n

Now, only images with valid signatures of trust roots alice and bob are admitted. It is possible to combine threshold and required keys:

charts/connaisseur/values.yaml
- pattern: \"*:*\"\n  validator: multicosigner\n  with:\n    trustRoot: \"*\"\n    threshold: 3\n    required: [\"alice\", \"bob\"]\n

Thus, at least 3 valid signatures are required and alice and bob must be among those.

"},{"location":"validators/sigstore_cosign/#transparency-log-verification","title":"Transparency log verification","text":"

The sigstore project contains a transparency log called Rekor that provides an immutable, tamper-resistant ledger to record signed metadata. While it is possible to run your own instance, a public instance of Rekor is available at rekor.sigstore.dev. With Connaisseur it is possible to verify that a signature was added to the transparency log via the validator's host.rekor key (see Cosign docs). When the host.rekor key is set, e.g. to rekor.sigstore.dev for the public instance, Connaisseur requires that a valid signature was added to the transparency log and denies the image otherwise. Furthermore, the host.rekor key allows switching to private Rekor instances, e.g. for usage with keyless signatures. To disable this feature, the with.verifyInTransparencyLog key can be set to false. This is useful, for example, if the signature was made without an upload to the transparency log in the first place.
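
For illustration, a validator that enforces the transparency log check against the public Rekor instance, combined with a policy rule that skips the check for one repository whose signatures were never uploaded to the log (pattern and names are placeholders, key material is elided):

charts/connaisseur/values.yaml
application:\n  validators:\n  - name: myvalidator\n    type: cosign\n    host:\n      rekor: rekor.sigstore.dev\n    trustRoots:\n    - name: default\n      key: |\n        -----BEGIN PUBLIC KEY-----\n        ...\n        -----END PUBLIC KEY-----\n\n  policy:\n  - pattern: \"registry.example.com/offline-signed/*:*\"\n    validator: myvalidator\n    with:\n      verifyInTransparencyLog: false\n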

"},{"location":"validators/sigstore_cosign/#keyless-signatures","title":"Keyless signatures","text":"

Keyless signatures are a feature of Sigstore that allows signing container images without the need to manage a private key. Instead, the signatures are bound to identities attested by OIDC providers and use ephemeral keys, short-lived certificates and a transparency log under the hood to provide similar security guarantees. Further information on this topic can be found here.

When using keyless signatures, the trustRoots[*].keyless field can be used to specify the issuer and subject of the keyless signature. The issuer is the OIDC provider that attests the identity and the subject is the identity that created the keyless signature, usually an email address. Both fields are also available as regular expressions. The following example shows how to configure a validator for keyless signatures:

charts/connaisseur/values.yaml
- name: keylessvalidator\n  type: cosign\n  trustRoots:\n  - name: keyless\n    keyless:\n      issuerRegex: \"github\"\n      subject: \"philipp.belitz@securesystems.de\"\n

In case the signature was created using the Sigstore infrastructure, nothing else needs to be configured since Connaisseur will automatically retrieve all needed public keys and certificates from the Sigstore infrastructure. If the signature was created using a private infrastructure, the host.fulcioCert field can be used to specify the root certificate belonging to the Fulcio CA which is used to create the keyless signatures. The host.fulcioCert field should contain the root certificate in PEM format. The same applies to the host.ctLogPubkey field which can be used to specify the public key needed for verifying Signed Certificate Timestamps (SCT) and the host.rekorPubkey field which can be used to specify the public key used to verify the signature of log entries from Rekor.

charts/connaisseur/values.yaml
name: default\ntype: cosign\nhost:\n  rekorPubkey: |\n    -----BEGIN PUBLIC KEY-----\n    ...\n    -----END PUBLIC KEY-----\n  ctLogPubkey: | \n    -----BEGIN PUBLIC KEY-----\n    ...\n    -----END PUBLIC KEY-----\n  fulcioCert: |\n    -----BEGIN CERTIFICATE-----\n    ...\n    -----END CERTIFICATE-----\n  ...\n
  1. The corresponding role and rolebinding should look similar to the following:

    apiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n  name: connaisseur-kms-role\n  namespace: connaisseur  # namespace of respective k8s secret, might have to change that\n  labels:\n    app.kubernetes.io/name: connaisseur\nrules:\n- apiGroups: [\"*\"]\n  resources: [\"secrets\"]\n  verbs: [\"get\"]\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n  name: connaisseur-kms-rolebinding\n  namespace: connaisseur  # namespace of respective k8s secret, might have to change that\n  labels:\n    app.kubernetes.io/name: connaisseur\nsubjects:\n- kind: ServiceAccount\n  name: connaisseur-serviceaccount\n  namespace: connaisseur  # Connaisseur's namespace, might have to change that\nroleRef:\n  kind: Role\n  name: connaisseur-kms-role\n  apiGroup: rbac.authorization.k8s.io\n
    Make sure to adjust it as needed.\u00a0\u21a9

"}]} \ No newline at end of file diff --git a/develop/validators/sigstore_cosign/index.html b/develop/validators/sigstore_cosign/index.html index a6e4ae8af..05d9d293c 100644 --- a/develop/validators/sigstore_cosign/index.html +++ b/develop/validators/sigstore_cosign/index.html @@ -537,9 +537,9 @@
  • - + - k8s_keychain + K8s keychain @@ -1418,9 +1418,9 @@
  • - + - k8s_keychain + K8s keychain @@ -1638,10 +1638,10 @@

    Configuration optionsdockerconfigjson for registry authentication. See additional notes below. -auth.k8sKeychain +auth.useKeychain false - -When true, pass --k8s-keychain argument to cosign verify in order to use workload identities for authentication. See additional notes below. +When true, pass --k8s-keychain argument to cosign verify in order to use workload identities for authentication. See additional notes below. cert @@ -1741,8 +1741,8 @@

    dockerconfigjsonk8s_keychain⚓︎

    -

    Specification of auth.k8sKeychain: true in the validator configuration passes the --k8s-keychain to cosign when performing image validation. +

    K8s keychain⚓︎

    +

    Specification of auth.useKeychain: true in the validator configuration passes the --k8s-keychain to cosign when performing image validation. Thus, k8schain is used by cosign to pick up ambient registry credentials from the environment and for example use workload identities in case of common cloud providers.

    For example, when validating against an ECR private repository, the credentials of an IAM user allowed to perform actions ecr:GetAuthorizationToken, ecr:BatchGetImage, and ecr:GetDownloadUrlForLayer could be added to the secret connaisseur-env-secrets:

    @@ -1757,7 +1757,7 @@

    k8s_keychain AWS_SECRET_ACCESS_KEY: *** ...

  • -

    If k8sKeychain is set to true in the validator configuration, cosign will log into ECR at time of validation. +

    If useKeychain is set to true in the validator configuration, cosign will log into ECR at time of validation. See this cosign pull request for more details.

    KMS Support⚓︎

    Connaisseur supports Cosign's URI-based KMS integration to manage the signing and verification keys.