[v2] E2E test groundwork #665
Conversation
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
```
Kubebuilder scaffolding.
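For context, this is roughly the shape kubebuilder scaffolds for a manager Deployment; the `system` namespace and `control-plane` label below are the scaffold's usual defaults, shown for illustration rather than taken from this PR:

```yaml
# Sketch of a kubebuilder-scaffolded manager Deployment (defaults shown
# here are kubebuilder's, not necessarily what this PR checked in).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
  labels:
    control-plane: controller-manager
spec:
  replicas: 1
  selector:
    matchLabels:
      control-plane: controller-manager
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
        - name: manager
          image: controller:latest
```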
Codecov Report

Attention: Patch coverage is …

Additional details and impacted files

```
@@           Coverage Diff           @@
##             v2     #665     +/-  ##
=======================================
  Coverage      ?   19.06%
=======================================
  Files         ?       25
  Lines         ?     3745
  Branches      ?        0
=======================================
  Hits          ?      714
  Misses        ?     2925
  Partials      ?      106
```

☔ View full report in Codecov by Sentry.
```go
// Generate a random tag to ensure pods get re-created when re-running the
// test locally.
bytes := make([]byte, 12)
_, _ = rand.Read(bytes) //nolint:staticcheck // Don't need crypto here.
tag := hex.EncodeToString(bytes)
projectimage := "pulumi/pulumi-kubernetes-operator-v2:" + tag
```
How do you suppose this is handled ordinarily? It is perhaps assumed that the build version would vary?
Using a `latest` tag with an `IfNotPresent` pull policy is sufficient for CI. I was having trouble running things multiple times locally because a new `latest` tag wouldn't cause things to re-deploy, but I can probably simplify this with some more robust teardown logic.
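To illustrate the local re-deploy problem: with a fixed tag, re-applying the manifest produces no change to the pod template, so Kubernetes triggers no rollout, and `IfNotPresent` then reuses the stale local image. A minimal sketch (the image name matches the test code above; the rest is illustrative):

```yaml
containers:
  - name: manager
    # Fixed tag: `kubectl apply` sees no spec change on re-runs, so no
    # rollout happens, and IfNotPresent reuses the stale local image.
    image: pulumi/pulumi-kubernetes-operator-v2:latest
    imagePullPolicy: IfNotPresent
    # A per-run random tag (as in the test code above) changes the pod
    # template every run, forcing a rollout and freshly created pods.
    # image: pulumi/pulumi-kubernetes-operator-v2:<random-hex-tag>
```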
```go
// TODO: get from configuration
workspaceAgentImage := "pulumi/pulumi-kubernetes-operator-v2:" + version.Version
```
Maybe this should be read from an environment variable.
I'll try to hook this up to the downward API.
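A minimal sketch of the env-var approach, using a hypothetical `WORKSPACE_AGENT_IMAGE` variable set on the manager Deployment. Note that the downward API's `fieldRef` exposes pod metadata (name, namespace, labels, annotations) rather than container images, so the image itself would need to be set explicitly or mirrored into an annotation first:

```yaml
env:
  # Hypothetical variable: pass the agent image to the controller
  # explicitly instead of deriving it from version.Version.
  - name: WORKSPACE_AGENT_IMAGE
    value: pulumi/pulumi-kubernetes-operator-v2:latest
  # Downward API wiring, by contrast, can only surface pod metadata, e.g.:
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
```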
This adds the groundwork for e2e testing the v2 branch. Try it with `make test-e2e`.

- … `operator/config`, so I regenerated CRDs into `operator/config/bases` and dropped a Deployment into `operator/config/manager`. We'll eventually need to consolidate this with the root `config/crd`.
- … `protoc-gen-go-grpc`, which changed the generated code slightly.
- I checked in a `.mise.toml`, mostly just to write down which versions of each tool I was using while working on this. Totally optional, but you can read docs here if you'd like a direnv-style tooling setup.

I haven't enabled an assertion for the random-yaml stack because I wasn't able to see it reliably reconcile. I was seeing GitHub rate limit errors as well as errors from the operator when updating status on the stack.
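For reference, the disabled random-yaml assertion would exercise a `Stack` resource along these lines. This is only a sketch using the v1 `Stack` schema (field names and the repo URL are illustrative and may differ on the v2 branch):

```yaml
apiVersion: pulumi.com/v1
kind: Stack
metadata:
  name: random-yaml
spec:
  stack: dev                                       # illustrative stack name
  projectRepo: https://github.com/pulumi/examples  # illustrative source repo
  repoDir: random-yaml                             # illustrative program dir
  branch: master
  # Cloning from GitHub on each reconcile is what exposes the test to the
  # rate limit errors mentioned above.
```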