Carefully use a status context on actions #362
Conversation
LGTM!
Thanks @addyess. The `ruff` bits were a bit rough to absorb in context, but it's an overall better charm for it. Invoking the reconciler after a successful active action is 💯.
I appreciate the deep dive into solving this upgrade UX; props to Pedro for helping get to the root cause!
kubernetes_snaps.upgrade_snaps(channel=channel, event=event, control_plane=True)
if isinstance(charm.unit.status, ops.ActiveStatus):
    # After successful upgrade, reconcile the charm to ensure it is in the desired state
    charm.reconciler.reconcile(event)
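For context, a rough sketch of how an upgrade action handler could tie this together, assuming it wraps `kubernetes_snaps.upgrade_snaps` in `status.context` and reconciles only on success. The handler name, `channel` lookup, and import paths are illustrative assumptions, not the charm's exact code:

```python
import ops

# Assumed imports: these are the charm's own helper modules as used in the hunk
# above; the exact import paths in the repo may differ.
import kubernetes_snaps
import status


def on_upgrade_action(charm, event: ops.ActionEvent):
    # Illustrative channel lookup; the real charm may derive `channel` differently.
    channel = charm.model.config.get("channel")
    # kubernetes_snaps relies on status.add internally, so the upgrade has to run
    # inside status.context even though actions normally shouldn't use it.
    with status.context(charm.unit):
        kubernetes_snaps.upgrade_snaps(channel=channel, event=event, control_plane=True)
    # status.context leaves the unit Active on success, so only then re-run the
    # reconciler to converge the charm and restore an accurate status.
    if isinstance(charm.unit.status, ops.ActiveStatus):
        charm.reconciler.reconcile(event)
```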
❤️
# Removed: each service was restarted inside status.context, which swallows failures
with status.context(event.framework.model.unit):
    kubernetes_snaps.service_restart("snap.kube-apiserver.daemon")
    event.set_results({"api-server": {"status": "restarted"}})
    kubernetes_snaps.service_restart("snap.kube-controller-manager.daemon")
    event.set_results({"controller-manager": {"status": "restarted"}})
    kubernetes_snaps.service_restart("snap.kube-scheduler.daemon")
    event.set_results({"kube-scheduler": {"status": "restarted"}})

# Added: restart every service in a plain try/except so a failure fails the action
try:
    for service, snap in SERVICES.items():
        kubernetes_snaps.service_restart(snap)
        event.set_results({service: {"status": "restarted"}})
except Exception as e:
    event.fail(str(e))
@kwmonroe in case you missed it -- this runs the action in a try/catch rather than using status.context, which would then require us to reconcile again. I think this is a better approach to what the action intends.
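`SERVICES` isn't shown in this hunk; a hedged guess at its shape, based on the snap names in the removed code (the constant name and exact keys are assumptions):

```python
# Assumed shape of SERVICES (not the charm's exact constant): an action-result key
# mapped to the snap service restarted for it, matching the snaps in the old code.
SERVICES = {
    "kube-apiserver": "snap.kube-apiserver.daemon",
    "kube-controller-manager": "snap.kube-controller-manager.daemon",
    "kube-scheduler": "snap.kube-scheduler.daemon",
}
```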
@@ -132,6 +134,7 @@ def __init__(self, *args):

        # register charm actions
        actions = [
            self.on.restart_action,
This actually REGISTERS the restart action -- it wasn't ever handled by the charm code!?!
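Presumably the `actions` list feeds an observe loop just below this hunk; a minimal sketch of that wiring, assuming a shared handler and an illustrative class name (neither is the charm's exact code):

```python
import ops


class ControlPlaneCharm(ops.CharmBase):
    """Illustrative charm class showing how the action list could be registered."""

    def __init__(self, *args):
        super().__init__(*args)
        # register charm actions
        actions = [
            self.on.restart_action,
        ]
        # Each action event is observed against a common handler so the charm
        # actually responds when `juju run` invokes it.
        for action in actions:
            self.framework.observe(action, self._on_action)

    def _on_action(self, event: ops.ActionEvent):
        # Dispatch to the concrete action implementation (illustrative stub).
        ...
```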
* Don't use a status context on actions
* Giant batch of ruff reformatting
* Upgrade-action must use a context then reconcile if successful
Overview
Addresses LP#2077189
Charm actions shouldn't use `status.context`, because all that happens is that it ignores all exceptions and sets the unit status to Active/Idle.
But the `upgrade` action has to use it, since `kubernetes_snaps` depends on using the `status.add` method, so we have to carefully manage it.
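To make the hazard concrete, here is a rough sketch of the behaviour being described -- not the library's actual implementation -- where any exception inside the block is swallowed and the unit ends up Active:

```python
import contextlib
import ops


@contextlib.contextmanager
def context(unit: ops.Unit):
    # Rough approximation of the described behaviour (not the real status helper):
    # an exception raised in the block is swallowed and the unit is left Active,
    # so a failing action would silently report a healthy unit.
    try:
        yield
    except Exception:
        pass
    finally:
        unit.status = ops.ActiveStatus()
```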
Details

* `ruff` wants to test things harder.
* `status.context` wipes out the charm's unit.status
* If `status.context` is used outside of a reconciled action, then we must run the reconciler again