Allow configuration MaxGracefulTerminationSec flag on ClusterAutoscaler #4697
Conversation
/lgtm
@prashanth26 do you plan to tackle @ialidzhikov's suggestion?
Apologies. I somehow forgot about the PR. Took in suggested changes.
@prashanth26 Can you check why the pipeline is failing?
Will check it out.
Co-authored-by: Ismail Alidzhikov <[email protected]>
Force-pushed from af0bf15 to e9bf065.
One of the test variables wasn't renamed. I made the change and now the pipeline passes. PTAL. Also squashed all the changes.
/lgtm
…ardener#4697) Co-authored-by: Ismail Alidzhikov <[email protected]> Co-authored-by: Ismail Alidzhikov <[email protected]>
How to categorize this PR?
/area auto-scaling
/kind enhancement
What this PR does / why we need it:
During scale-down of nodes in the cluster, the cluster autoscaler waits a maximum of 10 minutes for pods on the node to terminate gracefully. This limit is a configurable flag on the cluster autoscaler (MaxGracefulTerminationSec), so this PR exposes it and lets shoot owners decide the maximum time to wait while draining a node.
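As a rough illustration, a shoot owner could set the new value in the Shoot manifest. This is a minimal sketch only; the exact field name (`maxGracefulTerminationSeconds` under `spec.kubernetes.clusterAutoscaler`) and the example values are assumptions for illustration, not taken verbatim from this PR.

```yaml
# Sketch of a Shoot manifest using the new setting (field name assumed for illustration).
# It would map to the cluster autoscaler's graceful-termination limit, which defaults to 600s (10 minutes).
apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
  name: my-shoot                      # hypothetical shoot name
  namespace: garden-my-project        # hypothetical project namespace
spec:
  kubernetes:
    clusterAutoscaler:
      # Wait up to 30 minutes for pods to terminate gracefully during scale-down
      # instead of the default 10 minutes.
      maxGracefulTerminationSeconds: 1800
```

If the field is left unset, the cluster autoscaler would presumably keep its default 10-minute limit.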
Which issue(s) this PR fixes:
Fixes #4695
Special notes for your reviewer:
The eventual plan is to delegate the task of draining completely to the cluster autoscaler. In the meantime, this enhancement gives end-users some breathing room by providing a handle to tune the drain timeout.
Release note: