as an app dev/operator I can restart my workload #425

Open

heyjcollins opened this issue Nov 30, 2022 · 0 comments

heyjcollins commented Nov 30, 2022

WIP Issue

Description of problem

There are scenarios where a workload must be restarted for configuration changes to take effect.
For example, when a configuration change for the workload is applied to Spring Config Server, the app must restart to consume and apply the new config.

Today, to get such config changes picked up, someone must delete and recreate the workload, or wait until the pod happens to be replaced for some other reason (auto-scaling, for example).

Since time is always of the essence, providing a straightforward way to restart a workload at will would be valuable.

Proposed solution (TBD)

Given <Some Condition>
When <Something Happens>
Then <This other thing should happen?>

Example

<Code snippets that illustrate the when/then blocks>

Describe alternatives to be considered

  1. delete and recreate the workload
  2. provide a restart command: tanzu apps workload restart workload-name (possibly with flags to control the rollout strategy: all at once, sequential, batch size, etc.)
    • kubectl rollout restart deploy/XYZ sets a kubectl.kubernetes.io/restartedAt date in spec.template.metadata.annotations to trigger a rolling restart. We could certainly enable tanzu apps to do the same across both Deployments and Knative Services (it would work the same for both); see the sketch after this list.
  3. delete the pods, update the Deployment, or use the descheduler (https://github.com/kubernetes-sigs/descheduler) to automatically delete pods according to policy.
  4. have a local agent which detects the updated ConfigMap and then locally restarts the container. Unfortunately, that could lead to an outage, as the “restart for config update” would be spread across the 60s window for kubelet updates to ConfigMaps, which might be a bit fast.
  5. create a new ConfigMap and then explicitly update the Deployment to reference the new ConfigMap, which would generate a new application rollout, which would leverage all of the existing “make a rollout safe” settings
    • We’d need to figure out how to represent this in a GitOps model, assuming users are also interested in using GitOps to manage their higher-level application delivery.
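
For illustration, here is a minimal sketch of how a restart command could stamp the kubectl.kubernetes.io/restartedAt annotation described in alternative 2, assuming client-go and a plain Deployment workload (the function name restartDeployment and the namespace/workload names are hypothetical, not part of any agreed design):

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// restartDeployment triggers a rolling restart by patching the pod template
// with the same annotation that `kubectl rollout restart` sets.
func restartDeployment(ctx context.Context, client kubernetes.Interface, namespace, name string) error {
	patch := map[string]interface{}{
		"spec": map[string]interface{}{
			"template": map[string]interface{}{
				"metadata": map[string]interface{}{
					"annotations": map[string]string{
						"kubectl.kubernetes.io/restartedAt": time.Now().Format(time.RFC3339),
					},
				},
			},
		},
	}
	data, err := json.Marshal(patch)
	if err != nil {
		return err
	}
	_, err = client.AppsV1().Deployments(namespace).Patch(
		ctx, name, types.StrategicMergePatchType, data, metav1.PatchOptions{})
	return err
}

func main() {
	// Load kubeconfig from the default location; a real CLI would honor
	// the usual --kubeconfig/--context flags.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	if err := restartDeployment(context.Background(), client, "default", "my-workload"); err != nil {
		panic(err)
	}
	fmt.Println("rolling restart triggered")
}
```

For a Knative Service the analogous patch would target the Service's spec.template.metadata.annotations, which produces a new Revision and lets the existing rollout settings govern how traffic shifts.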

Additional context

Comment/concern to address from @paulcwarren:

Technically though it is leaning on k8s quite a lot. And the fact that it will restart things to get back to a "desired" state. Plus force killing processes can potentially cause bad things to happen. So, we'd need to consider the implementation of that command a little and make sure it is really what we want to do.
  • we could prompt for confirmation from the user with an info/warning about potential negative outcomes if run (a minimal sketch of such a prompt follows below)
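
A rough sketch of that confirmation prompt, assuming it lives in the same Go CLI (the wording and behavior are illustrative only):

```go
import (
	"bufio"
	"fmt"
	"io"
	"strings"
)

// confirmRestart warns the operator about the consequences of a forced
// restart and asks for explicit confirmation before proceeding.
func confirmRestart(in io.Reader, out io.Writer, workload string) (bool, error) {
	fmt.Fprintf(out, "Warning: restarting %q will terminate its running pods and may briefly disrupt traffic.\n", workload)
	fmt.Fprint(out, "Continue? [y/N]: ")

	answer, err := bufio.NewReader(in).ReadString('\n')
	if err != nil && err != io.EOF {
		return false, err
	}
	answer = strings.ToLower(strings.TrimSpace(answer))
	return answer == "y" || answer == "yes", nil
}
```

A --yes style flag could bypass the prompt for scripted use.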
heyjcollins added the MIGRATED (Migrated to private repo) label on Dec 5, 2023