This repository contains scripts and files that are used to package cert-manager for Red Hat's Operator Lifecycle Manager (OLM). This allows users of OpenShift and OperatorHub to easily install cert-manager into their clusters. It is currently an experimental deployment method.
The package is called an Operator Bundle and it is a container image that stores the Kubernetes manifests and metadata associated with an operator. A bundle is meant to represent a specific version of an operator.
The bundles are indexed in a Catalog Image which is pulled by OLM in the Kubernetes cluster.
Clients such as `kubectl operator` then interact with the OLM CRDs to "subscribe" to a particular release channel.
OLM will then install the newest cert-manager bundle in that release channel and perform upgrades as newer versions are added to that channel.
📖 Read the Operator Lifecycle Manager installation documentation for cert-manager.
In order to test that OLM can upgrade to the new version, you can perform a test release and publish a "release candidate" bundle by creating release candidate PRs to the Kubernetes Community Operators Repository and to the OpenShift Community Operators Repository.
Once these bundles have been merged, the release candidate version of cert-manager should be available in the "candidate" channel only.
You can test upgrading to the new version by creating a Subscription targeting the "candidate" channel (which should also contain the latest stable version), setting `startingCSV` to the last stable version and `installPlanApproval` to `Manual`. E.g.
```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cert-manager
  namespace: openshift-operators
spec:
  channel: candidate
  installPlanApproval: Manual
  name: cert-manager
  source: community-operators
  sourceNamespace: openshift-marketplace
  startingCSV: cert-manager.v1.6.1
```
Once you have published the release candidate, verify that you can upgrade cert-manager to the new version. Check the logs and events for errors during the upgrade.
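Because `installPlanApproval` is set to `Manual`, OLM pauses before applying the upgrade until the generated InstallPlan is approved. A minimal sketch of doing this from the command line (the InstallPlan name below is illustrative; look it up with the first command):

```sh
# List the InstallPlans created for the Subscription; the newest one targets the release candidate.
kubectl -n openshift-operators get installplans

# Approve the pending InstallPlan so that OLM proceeds with the upgrade
# (replace install-xxxxx with the name printed above).
kubectl -n openshift-operators patch installplan install-xxxxx \
  --type merge --patch '{"spec":{"approved":true}}'
```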
- Add the new version to `CERT_MANAGER_VERSION` at the top of the `Makefile`
- If this is a release candidate:
  - Add `-rc1` as a suffix to `BUNDLE_VERSION`
  - Add
- If this is the final release:
  - Remove the `-rc1` suffix from `BUNDLE_VERSION`
  - Remove the
- Run `make bundle-build bundle-push catalog-build catalog-push` to generate a bundle and a catalog.
- Run `make bundle-validate` to check the generated bundle files.
- `git commit` the bundle changes.
- Preview the generated clusterserviceversion file on OperatorHub.
- Test the generated bundle locally (see testing below).
- Create a PR on the Kubernetes Community Operators Repository, adding the new or updated bundle files to `operators/cert-manager/` under a sub-directory named after the bundle version: `make update-community-operators`
- Create a PR on the OpenShift Community Operators Repository, adding the new or updated bundle files to `operators/cert-manager/` under a sub-directory named after the bundle version: `make update-community-operators-prod` (the resulting directory layout is sketched below)
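For reference, the `update-community-operators` targets copy the generated bundle into those repositories using the standard OLM bundle layout, roughly as follows (the version directory and exact file names are illustrative):

```
operators/cert-manager/
└── 1.7.0/
    ├── manifests/
    │   ├── cert-manager.clusterserviceversion.yaml
    │   └── ...            # CRDs and other generated manifests
    └── metadata/
        └── annotations.yaml
```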
The bundle Docker image and a temporary catalog Docker image can be built and pushed to a personal Docker registry.
These can then be used by OLM running on a Kubernetes cluster.
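For example, assuming the Makefile exposes a variable for choosing the target registry (the name `DOCKER_REGISTRY` below is hypothetical; check the Makefile for the variable it actually uses), the images could be built and pushed like this:

```sh
# DOCKER_REGISTRY is a hypothetical variable name used here for illustration only.
make bundle-build bundle-push catalog-build catalog-push \
    DOCKER_REGISTRY=quay.io/my-user
```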
Run `make bundle-test` to create the bundle and catalog and then deploy them with OLM, installed on a local Kind cluster, for testing:
make bundle-test
Wait for the CSV to be created:
$ kubectl -n operators get clusterserviceversion -o wide
NAME DISPLAY VERSION REPLACES PHASE
cert-manager.v1.3.1 cert-manager 1.3.1 Installing
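Rather than polling, you can optionally block until the CSV reports the `Succeeded` phase (this assumes a kubectl version new enough to support `--for=jsonpath`, v1.23 or later):

```sh
kubectl -n operators wait clusterserviceversion/cert-manager.v1.3.1 \
  --for=jsonpath='{.status.phase}'=Succeeded --timeout=5m
```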
Monitor events as OLM installs cert-manager 1.3.1
$ kubectl -n operators get events -w
LAST SEEN TYPE REASON OBJECT MESSAGE
0s Normal RequirementsUnknown clusterserviceversion/cert-manager.v1.3.1 requirements not yet checked
0s Normal RequirementsNotMet clusterserviceversion/cert-manager.v1.3.1 one or more requirements couldn't be found
0s Normal AllRequirementsMet clusterserviceversion/cert-manager.v1.3.1 all requirements found, attempting install
0s Normal AllRequirementsMet clusterserviceversion/cert-manager.v1.3.1 all requirements found, attempting install
0s Normal ScalingReplicaSet deployment/cert-manager Scaled up replica set cert-manager-74d7f9dff to 1
0s Normal SuccessfulCreate replicaset/cert-manager-74d7f9dff Created pod: cert-manager-74d7f9dff-72g4t
0s Normal Scheduled pod/cert-manager-74d7f9dff-72g4t Successfully assigned operators/cert-manager-74d7f9dff-72g4t to cert-manager-olm-control-plane
0s Normal Pulling pod/cert-manager-74d7f9dff-72g4t Pulling image "quay.io/jetstack/cert-manager-controller:v1.3.1"
0s Normal ScalingReplicaSet deployment/cert-manager-cainjector Scaled up replica set cert-manager-cainjector-bffcd79d7 to 1
0s Normal SuccessfulCreate replicaset/cert-manager-cainjector-bffcd79d7 Created pod: cert-manager-cainjector-bffcd79d7-h29qc
0s Normal Scheduled pod/cert-manager-cainjector-bffcd79d7-h29qc Successfully assigned operators/cert-manager-cainjector-bffcd79d7-h29qc to cert-manager-olm-control-plane
0s Normal Pulling pod/cert-manager-cainjector-bffcd79d7-h29qc Pulling image "quay.io/jetstack/cert-manager-cainjector:v1.3.1"
0s Normal ScalingReplicaSet deployment/cert-manager-webhook Scaled up replica set cert-manager-webhook-649f87bd5b to 1
0s Normal SuccessfulCreate replicaset/cert-manager-webhook-649f87bd5b Created pod: cert-manager-webhook-649f87bd5b-7swpk
0s Normal Scheduled pod/cert-manager-webhook-649f87bd5b-7swpk Successfully assigned operators/cert-manager-webhook-649f87bd5b-7swpk to cert-manager-olm-control-plane
0s Normal Pulling pod/cert-manager-webhook-649f87bd5b-7swpk Pulling image "quay.io/jetstack/cert-manager-webhook:v1.3.1"
0s Normal InstallSucceeded clusterserviceversion/cert-manager.v1.3.1 waiting for install components to report healthy
0s Normal InstallWaiting clusterserviceversion/cert-manager.v1.3.1 installing: waiting for deployment cert-manager to become ready: deployment "cert-manager" not available: Deployment does not have minimum availability.
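Before running the full E2E suite, a quick smoke test is to create a self-signed Issuer and a Certificate and wait for the Certificate to become Ready (the namespace and resource names below are illustrative):

```sh
kubectl create namespace smoke-test

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned
  namespace: smoke-test
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: smoke-cert
  namespace: smoke-test
spec:
  secretName: smoke-cert-tls
  dnsNames:
    - example.com
  issuerRef:
    name: selfsigned
EOF

# cert-manager should issue the certificate and set the Ready condition to True.
kubectl -n smoke-test wait certificate/smoke-cert --for=condition=Ready --timeout=2m
```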
Run some of the cert-manager E2E conformance tests:
$ ./devel/run-e2e.sh --ginkgo.focus '[Conformance].*SelfSigned Issuer'
...
There are a few ways to create an OpenShift cluster for testing.
Here we will describe using `crc` (CodeReady Containers) to install a single-node local OpenShift cluster.
Alternatives are:
- Initializing Red Hat OpenShift Service on AWS using `rosa`: known to work, but takes ~45 minutes to create a multi-node OpenShift cluster.
- Installing OpenShift on any cloud using the OpenShift Installer: did not work on GCP at the time of writing due to "Installer can't get managedZones while service account and gcloud cli can on GCP" #5300.
`crc` requires 4 virtual CPUs (vCPUs), 9 GiB of free memory and 35 GiB of storage space, but for crc v1.34.0 this is insufficient: you will need 8 CPUs and 32 GiB of memory, which is more than is available on most laptops.
Download your pull secret from the crc-download page and supply the path in the command line below:
make crc-instance OPENSHIFT_VERSION=4.9 PULL_SECRET=${HOME}/Downloads/pull-secret
This will create a VM and automatically install the chosen version of OpenShift, using a suitable version of `crc`.
The `crc` installation, setup, and start are performed by a startup script which is run when the VM boots.
You can monitor the progress of the script as follows:
gcloud compute instances tail-serial-port-output crc-4-9
You can log in to the VM and interact with the cluster as follows:
gcloud compute ssh crc@crc-4-9 -- -D 8080
sudo journalctl -u google-startup-scripts.service --output cat
eval $(bin/crc-1.34.0 oc-env)
oc get pods -A
Log in to the VM using SSH and enable SOCKS proxy forwarding so that you will be able to connect to the Web UI of `crc` when it starts.
gcloud compute ssh crc@crc-4-9 -- -D 8080
Now configure your web browser to use the SOCKS5 proxy at `localhost:8080`.
Also configure it to use the SOCKS proxy for DNS requests.
With this configuration you should now be able to visit the OpenShift web console page:
https://console-openshift-console.apps-crc.testing
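You can also check the tunnel from the command line before using the browser, for example with curl (`--socks5-hostname` makes curl resolve the `.apps-crc.testing` name via the proxy, and `-k` skips verification of the self-signed certificate):

```sh
curl -k --socks5-hostname localhost:8080 https://console-openshift-console.apps-crc.testing
```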
You will be presented with a couple of "bad SSL certificate" error pages, because the web console is using self-signed TLS certificates. Click "Accept and proceed anyway".
Now click the "Operators > OperatorHub" link on the left hand menu.
Search for "cert-manager" and click the "community" entry and then click "install".
Once you have installed cert-manager on the `crc-instance`, you can run the cert-manager E2E tests to verify that cert-manager has been installed properly and is reconciling Certificates.
First compile the cert-manager E2E test binary as follows:
cd projects/cert-manager/cert-manager
bazel build //test/e2e:e2e
Then upload the binary to the remote VM and run it against cert-manager installed in the crc OpenShift cluster:
cd projects/cert-manager/cert-manager-olm
make crc-e2e \
OPENSHIFT_VERSION=4.8 \
PULL_SECRET=~/Downloads/pull-secret \
E2E_TEST=../cert-manager/bazel-bin/test/e2e/e2e.test
If you can't use the automated script to create the `crc` VM, you can create one manually, as follows.
Create a powerful cloud VM on which to run `crc`:
GOOGLE_CLOUD_PROJECT_ID=$(gcloud config get-value project)
gcloud compute instances create crc-4-9 \
--enable-nested-virtualization \
--min-cpu-platform="Intel Haswell" \
--custom-memory 32GiB \
--custom-cpu 8 \
--image-family=rhel-8 \
--image-project=rhel-cloud \
--boot-disk-size=200GiB \
--boot-disk-type=pd-ssd
NOTE: The VM must support nested virtualization because `crc` creates another VM using `libvirt`.
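A quick way to confirm that the virtualization extensions are exposed inside the VM is to count the CPU flags (a non-zero result means nested virtualization is available):

```sh
# Run inside the VM; counts CPUs exposing the Intel VT-x (vmx) or AMD-V (svm) flags.
grep -Ec '(vmx|svm)' /proc/cpuinfo
```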
Now log in to the VM using SSH and enable SOCKS proxy forwarding so that you will be able to connect to the Web UI of `crc` when it starts.
gcloud compute ssh crc@crc-4-9 -- -D 8080
Download `crc` and get a pull secret from the Red Hat Console.
The latest version of `crc` will install the latest version of OpenShift (4.9 at the time of writing).
If you want to test on an older version of OpenShift you will need to download an older version of `crc` which corresponds to the target OpenShift version.
Download the archive, extract it, and move the `crc` binary to your system path:
curl -SLO https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/1.34.0/crc-linux-amd64.tar.xz
tar xf crc-linux-amd64.tar.xz
sudo mv crc-linux-1.34.0-amd64/crc /usr/local/bin/
Run `crc setup` to prepare the system for running the `crc` VM:
crc setup
...
INFO Uncompressing crc_libvirt_4.9.0.crcbundle
crc.qcow2: 11.50 GiB / 11.50 GiB [---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------] 100.00%
oc: 117.16 MiB / 117.16 MiB [--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------] 100.00%
Your system is correctly setup for using CodeReady Containers, you can now run 'crc start' to start the OpenShift cluster
Run `crc start` to create the VM and start OpenShift.
(When prompted, paste in the pull secret, which you can copy from the crc-download page.)
crc start
...
CodeReady Containers requires a pull secret to download content from Red Hat.
You can copy it from the Pull Secret section of https://cloud.redhat.com/openshift/create/local.
? Please enter the pull secret
...
Started the OpenShift cluster.
The server is accessible via web console at:
https://console-openshift-console.apps-crc.testing
Log in as administrator:
Username: kubeadmin
Password: ******
Log in as user:
Username: developer
Password: *******
Use the 'oc' command line interface:
$ eval $(crc oc-env)
$ oc login -u developer https://api.crc.testing:6443
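Once logged in, one way to confirm that the cert-manager package is visible to OLM on the cluster is to query the PackageManifest API, e.g.:

```sh
# Lists the cert-manager entry served by the catalog sources in openshift-marketplace.
oc -n openshift-marketplace get packagemanifests | grep cert-manager
```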