What happened:
Started a cluster with 10K nodes and began creating pods against it. After 300K pods, kube-scheduler crashed.

What you expected to happen:
kube-scheduler should not crash.

How to reproduce it (as minimally and precisely as possible):
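The reproduction steps were not filled in here; the following is a minimal, hypothetical sketch assuming the load was generated by Deployments spread across generated namespaces (the pod and namespace names in the crash log below look Deployment-managed). The namespace prefix, image, and counts are illustrative, not the reporter's actual tooling.

```sh
# Hypothetical load sketch: create many namespaces, each holding one Deployment,
# until the cluster reaches roughly 300K pods in total.
NAMESPACES=300             # illustrative
REPLICAS_PER_DEPLOY=1000   # illustrative; 300 x 1000 = 300K pods

for i in $(seq 1 "$NAMESPACES"); do
  ns="load-test-${i}-ns"
  kubectl create namespace "$ns"
  kubectl -n "$ns" create deployment "load-${i}" --image=k8s.gcr.io/pause:3.1
  kubectl -n "$ns" scale deployment "load-${i}" --replicas="$REPLICAS_PER_DEPLOY"
done
```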
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version): commit 04298acdc2ca682bd54306aaa6fd816ba3018e57
- OS (e.g. from cat /etc/os-release):
- Kernel (e.g. uname -a):
From Sindica:
Here is what looks like the scheduler restart log:
```
I0819 01:49:15.482326 1 cache.go:666] Couldn't expire cache for pod system/os8klemekk67w6j2xx68t2n7i9nwzqmx-ns/os8klemekk67w6j2xx68t2n7i9nwzqmx-pods-6744f97b99-jgqjn. Binding is still in progress.
I0819 01:49:15.678020 1 leaderelection.go:281] failed to renew lease kube-system/kube-scheduler: failed to tryAcquireOrRenew context deadline exceeded
E0819 01:49:15.678062 1 server.go:258] lost master
lost lease
I0819 01:49:19.666685 1 feature_gate.go:216] feature gates: &{map[ExperimentalCriticalPodAnnotation:true]}
```
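The lines above show the scheduler failing to renew its leader-election lease within the deadline while under heavy load, after which it exits by design ("lost master" / "lost lease"), which is what surfaces as the crash. As one avenue to investigate (not a confirmed fix), the standard kube-scheduler leader-election flags can be relaxed to give a loaded scheduler more headroom to renew its lease; the values below are only illustrative.

```sh
# Illustrative values only; upstream defaults are lease-duration=15s,
# renew-deadline=10s, retry-period=2s. renew-deadline must stay below
# lease-duration, and retry-period below renew-deadline.
kube-scheduler \
  --leader-elect=true \
  --leader-elect-lease-duration=60s \
  --leader-elect-renew-deadline=45s \
  --leader-elect-retry-period=10s
```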
Issue first reported in https://github.com/futurewei-cloud/arktos/issues/571.