Add drain logic for upgrade-controller SUC plans #28
Conversation
Looks great! I left two minor comments - let me know what you think.
@@ -48,6 +48,15 @@ type UpgradePlanSpec struct {
	// ReleaseVersion specifies the target version for platform upgrade.
	// The version format is X.Y.Z, for example "3.0.2".
	ReleaseVersion string `json:"releaseVersion"`
	// +optional
	Drain *Drain `json:"drain"`
Can we add a comment describing this field, so that the generated CRD also includes it?
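For context, controller-gen copies Go doc comments into the generated CRD's field descriptions, which is why the reviewer asks for one. A minimal sketch of what the requested change might look like (the comment wording is an assumption, not the PR author's text):

```go
// Drain specifies whether and how nodes are drained
// before they are upgraded.
// NOTE: illustrative wording only.
// +optional
Drain *Drain `json:"drain"`
```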
api/v1alpha1/upgradeplan_types.go (outdated)
type Drain struct {
	// +optional
	ControlPlanes *bool `json:"controlPlanes"`
It's a nitpick, but I believe the singular form `controlPlane` is the better wording here. There are multiple control plane nodes in a cluster, but not multiple control planes.
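Combining both review suggestions (the doc comment for the CRD and the singular field name), the type might end up roughly like this sketch (the doc comment wording is illustrative, not from the PR):

```go
// Drain controls whether nodes are drained before they are upgraded.
type Drain struct {
	// ControlPlane specifies whether control plane nodes should be drained.
	// +optional
	ControlPlane *bool `json:"controlPlane"`
}
```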
As discussed previously, the upgrade-controller should drain nodes as part of the upgrade process.

By default the upgrade-controller will execute drain with the following flags:

- `--delete-emptydir-data` - continue the drain even if there are pods utilising `emptyDir` storage
- `--timeout 15m0s` - the amount of time to wait for the drain operation to finish; this ensures that the drain operation will not block the upgrade flow
- `--ignore-daemonsets` - ignore DaemonSet-managed pods
- `--force` - continue even if there are pods that do not have a controller (standalone pods)

Due to the nature of how we do upgrades, node drain will happen twice - once during OS upgrades and once during K8s upgrades.
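As an illustration of what these defaults map to, here is a minimal sketch of a programmatic node drain using kubectl's `k8s.io/kubectl/pkg/drain` package with equivalent options. This is an assumption for clarity, not necessarily how the upgrade-controller or the generated SUC plans invoke drain, and the node name is hypothetical:

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/kubectl/pkg/drain"
)

func main() {
	// Assumption: running outside the cluster with a local kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Mirror the default drain flags from the PR description.
	helper := &drain.Helper{
		Ctx:                 context.TODO(),
		Client:              client,
		DeleteEmptyDirData:  true,             // --delete-emptydir-data
		Timeout:             15 * time.Minute, // --timeout 15m0s
		IgnoreAllDaemonSets: true,             // --ignore-daemonsets
		Force:               true,             // --force
		Out:                 os.Stdout,
		ErrOut:              os.Stderr,
	}

	// Evict or delete all pods from the node, honoring the options above.
	// "worker-node-1" is a hypothetical node name.
	if err := drain.RunNodeDrain(helper, "worker-node-1"); err != nil {
		log.Fatal(err)
	}
}
```

In practice the node is usually cordoned first (see `drain.RunCordonOrUncordon` in the same package) so that no new pods are scheduled onto it while it is being emptied.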
Tested scenarios: