Upgrade Rancher #19
Conversation
Force-pushed 647841e to 1d8b9a6
Signed-off-by: Atanas Dinov <[email protected]>
Force-pushed 1d8b9a6 to 4f5dcf2
```diff
@@ -9,6 +9,7 @@ type Release struct {
 type Components struct {
 	Kubernetes      Kubernetes      `yaml:"kubernetes"`
 	OperatingSystem OperatingSystem `yaml:"operatingSystem"`
+	Rancher         HelmChart       `yaml:"rancher"`
```
Still fuzzy on how this will look, but we can revisit it for the next chart.
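For illustration, a release definition with the new field might serialize roughly like this. Only the top-level keys (`kubernetes`, `operatingSystem`, `rancher`) come from the struct tags in the diff above; the nested fields and values are placeholders, not the actual `HelmChart` type:

```yaml
# Hypothetical release manifest sketch — nested fields/values are assumptions.
components:
  kubernetes:
    version: v1.28.7+rke2r1
  operatingSystem:
    version: "5.5"
  rancher:
    version: 2.8.2
```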
```diff
@@ -172,5 +209,28 @@ func (r *UpgradePlanReconciler) SetupWithManager(mgr ctrl.Manager) error {
 				len(e.ObjectOld.(*upgradecattlev1.Plan).Status.Applying) != 0
 			},
 		})).
+		Watches(&batchv1.Job{}, handler.EnqueueRequestsFromMapFunc(r.findUpgradePlanFromJob), builder.WithPredicates(predicate.Funcs{
```
Unfortunately, the Helm controller doesn't offer us anything to watch on the HelmChart resource itself, so we must watch the Jobs that are spawned from it.
Sanity check me on the process:
- We locate Jobs that have been created by the Helm controller (i.e. ones that carry the controller label)
- From the label we locate the HelmChart resource
- On the HelmChart resource we check whether it has the upgrade plan annotation
- If it does, we return the plan location
- From there, reconciliation is triggered when the Job reaches a finished status (e.g. Completed or Failed)
LGTM! I left some comments and will be happy to approve once we discuss them.

Apart from the comments below, I think it is fair to say that assuming the upgrade has finished successfully by looking only at the Helm controller Job would not be enough in the long term.

Probably every chart component will need its own logic validating whether its resources are running the desired upgrade version. For Rancher, for example, we might have to look at the server-version property, which can be displayed with: `kubectl get settings.management.cattle.io server-version`

Nevertheless, the above would need to be handled in a different PR later down the line, once we have something relatively stable. Great job on this!
One caveat on relying on the Job status: `helm upgrade` exits very fast, but it takes time for the actual deployment (and in turn the Rancher instance) to report that it is ready.