While `run_on_start` and `run_on_stop` form a nice balance, some `run_on_start` scripts are actually meant to be balanced against a `run_on_destroy` trigger. As an example, if you use `run_on_start` scripts to do a `git clone` of packages for the workspace, you naturally would not want to delete those when the workspace is stopped, but you might want to delete them when the workspace is destroyed.
A `git clone` is of course not a perfect example, as we have Terraform modules to do that for us already, and most providers rely on the containerization/virtualization provider to do cleanup when workspaces are destroyed. But if there are persistent resources being managed by `coder_script` snippets, it would be incredibly handy to have a `run_on_destroy` trigger to manage their lifecycles properly.
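To make that concrete, here is a rough sketch of what a pair of scripts could look like. The `run_on_destroy` attribute is the proposed trigger and does not exist in the provider today; the repository URL and paths are placeholders.

```hcl
# Sketch only: `run_on_destroy` is the proposed attribute, not part of the
# provider today. The repo URL and paths are placeholders.
resource "coder_script" "clone_packages" {
  agent_id     = coder_agent.main.id
  display_name = "Clone workspace packages"
  run_on_start = true
  script       = <<-EOT
    git clone https://github.com/example/packages.git ~/packages
  EOT
}

resource "coder_script" "cleanup_packages" {
  agent_id       = coder_agent.main.id
  display_name   = "Remove workspace packages"
  run_on_destroy = true # proposed trigger; does not exist yet
  script         = <<-EOT
    rm -rf ~/packages
  EOT
}
```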
In my case, this is because I am using Coder to provide access directly to machines that have already been provisioned, and I use `remote-exec` to SSH into the devices and install the `coder` agent, then `coder_script` snippets to set up all kinds of state. I would like to clean that up properly upon workspace destruction.
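For context, the setup looks roughly like this (the host, user, and key path are placeholders for an already-provisioned machine; `coder_agent.main.init_script` is what bootstraps the agent over SSH):

```hcl
# Sketch only: host, user, and key path are placeholders for a machine
# that was provisioned outside of Coder.
resource "null_resource" "install_agent" {
  connection {
    type        = "ssh"
    host        = "prebuilt-host.example.com"
    user        = "coder"
    private_key = file("~/.ssh/id_ed25519")
  }

  provisioner "remote-exec" {
    # Runs the agent's init script on the remote machine.
    inline = [coder_agent.main.init_script]
  }
}
```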
Thanks for the feedback, @staticfloat. I agree with your use case. We can run these scripts just before destruction, as they are executed by the agent, which will be removed when the workspace is deleted.
Another consideration is what we should do if the script fails. Should we fail the workspace deletion operation and allow a retry? Or should we proceed with the destruction regardless of the output of the `coder_script` that was executed on the `run_before_destroy` trigger?
I already run into errors during destruction from other providers (hence why administrators can override with the Orphan Resources checkbox), so I think it makes sense to follow suit here as well.