TestMachineSetReconciler is flaky #11722
Comments
/help
@sbueringer: Guidelines: Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met. If this request no longer meets these requirements, the label can be removed. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/triage accepted
Probably we just have to use a separate scheme for TestMachineSetReconciler_syncReplicas_WithErrors.
I will work on it if no one has started on it yet.
/assign @Karthik-K-N
Even after using the -race flag I am not able to reproduce this locally. Any tips?
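As a general suggestion (not something stated in the thread), one way to try to force a flake like this locally is to stress-run just the affected test with the race detector enabled, for example:

```
go test -race -count=500 -run 'TestMachineSetReconciler_syncReplicas_WithErrors' ./internal/controllers/machineset/...
```

The -count value and package path here are illustrative; races that only show up under CPU starvation may still not reproduce reliably.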
Not sure; it probably just doesn't happen that often, or only under CPU starvation. In general I think it's fine to just see if we can use a separate scheme for this unit test (and then check via the periodics in Prow over time whether the flaky test goes away). You can check some other tests / places for how we create a fake client with a scheme.
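A minimal sketch (not the actual test code) of what giving this unit test its own scheme might look like, assuming controller-runtime's fake client builder and the cluster-api v1beta1 API package; the test name is from the thread, the rest is illustrative:

```go
package machineset

import (
	"testing"

	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
)

func TestMachineSetReconciler_syncReplicas_WithErrors(t *testing.T) {
	// Build a scheme that is local to this test instead of reusing a shared,
	// package-level scheme, so concurrently running tests cannot race on it.
	scheme := runtime.NewScheme()
	if err := clientgoscheme.AddToScheme(scheme); err != nil {
		t.Fatal(err)
	}
	if err := clusterv1.AddToScheme(scheme); err != nil {
		t.Fatal(err)
	}

	// Fake client backed by the test-local scheme.
	fakeClient := fake.NewClientBuilder().WithScheme(scheme).Build()

	// ... exercise syncReplicas with fakeClient as the existing test does.
	_ = fakeClient
}
```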
Got the point, will do. Thanks.
The test this issue is about was added in PR #11211.
Which jobs are flaking?
At least pull-cluster-api-test-main
Which tests are flaking?
sigs.k8s.io/cluster-api/internal/controllers/machineset: TestMachineSetReconciler_syncReplicas_WithErrors/should_hold_off_on_sync_replicas_when_create_Infrastructure_of_machine_failed_
Since when has it been flaking?
Since we merged this unit test
Testgrid link
No response
Reason for failure (if possible)
The race detector found a data race: https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/kubernetes-sigs_cluster-api/11718/pull-cluster-api-test-main/1881643904957157376
Anything else we need to know?
The test was merged just yesterday.
Label(s) to be applied
/kind flake
One or more /area labels. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.