Boilerplate for triggering ML job run #91
Conversation
Staging application has been deployed and is available at: https://dash5-services.plotly.host/ml-exchange-staging
Other than the "Show output results" comment, the rest looks good.
Input("model-check", "n_intervals"), | ||
prevent_initial_call=True, | ||
) | ||
def check_job(job_id, n_intervals): |
This is redundant because the models will most likely take a long time to finish, and the user won't wait around for them.
It could be worth adding a notification that shows while they are away (or the first time they come back; we would need to track in the DB whether the notification has been seen) informing them that the ML job(s) have finished since their last visit, and listing them.
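A rough sketch of that check, assuming each job record carries `uid` and `status` fields and that the set of already-seen notification UIDs is persisted in the DB (all of these names are assumptions, not existing code):

```python
def unseen_finished_jobs(jobs, seen_notification_uids):
    """Return completed jobs the user has not yet been notified about."""
    return [
        job
        for job in jobs
        if job["status"] == "complete" and job["uid"] not in seen_notification_uids
    ]
```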
Two weeks ago, when we talked about showing ML output, we agreed that it would go under Data Selection, and that if the particular project has ML output, another dropdown would appear where users can select what they wish to view. Definitely not a toggle, since they are not limited to one ML output per project. See #62; I edited the issue description with the aforementioned requirements.
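A minimal sketch of that dropdown, where `list_ml_outputs` and the component id are hypothetical names rather than existing code:

```python
from dash import dcc


def ml_output_dropdown(project_name, list_ml_outputs):
    """Build a dropdown of the ML outputs available for a project, if any."""
    outputs = list_ml_outputs(project_name)
    if not outputs:
        return None  # the project has no ML output, so no dropdown appears
    return dcc.Dropdown(
        id="ml-output-select",
        options=[{"label": name, "value": name} for name in outputs],
        placeholder="Select an ML output to view",
    )
```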
We discussed having a simple toggle at this week's meeting, since for now this feature is just implemented as an MVP. We'll refine that piece further in #88.
Good point about the length of time, though I'd disagree that this function is entirely redundant. We may want to refine the mechanism by which we're checking for results, depending on how long jobs take and how we want to manage user concurrency.
From Tanny: using the user ID, make a GET request to the computing API to fetch all the jobs associated with that user ID and the segmentation, along with the status of each of those jobs. This happens on page load and then at an interval.
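A minimal sketch of that polling flow, assuming a hypothetical `GET /jobs` endpoint on the computing API and a `dcc.Store` with id `job-status-store`; none of these names come from the actual codebase:

```python
import os

import requests
from dash import Input, Output, callback

COMPUTE_API_URL = os.getenv("COMPUTE_API_URL", "http://localhost:8080")  # assumed config


@callback(
    Output("job-status-store", "data"),
    Input("model-check", "n_intervals"),
)
def poll_user_jobs(n_intervals):
    """Fetch all segmentation jobs (and their statuses) for the current user."""
    user_id = "local-user"  # placeholder; the real user ID would come from auth
    response = requests.get(
        f"{COMPUTE_API_URL}/jobs",
        params={"user_id": user_id, "job_type": "segmentation"},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()
```

Without `prevent_initial_call`, the callback fires once on page load and then on every interval tick, matching the flow described above.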
callbacks/segmentation.py (outdated)
```python
            job_uid,
        )
    else:
        data_utils.save_annotations_data(annotation_store, project_name)
```
Testing on Vaughan gives us the following error:

```
File "/usr/local/lib/python3.9/site-packages/dash/_callback.py", line 450, in add_context
  output_value = func(*func_args, **func_kwargs)  # %% callback invoked %%
File "/app/callbacks/segmentation.py", line 83, in run_job
  data_utils.save_annotations_data(annotation_store, project_name)
File "/app/utils/data_utils.py", line 98, in save_annotations_data
  annotations.create_annotation_metadata()
File "/app/utils/annotations.py", line 61, in create_annotation_metadata
  self.set_annotation_image_shape(image_idx)
File "/app/utils/annotations.py", line 165, in set_annotation_image_shape
  self.annotation_image_shape = self.annotation_store["image_shapes"][0]
KeyError: 'image_shapes'
```
I suspect this may come from interacting with a Tiled server that has only a single tiff sequence in it, so we technically never actively selected a project. Interacting with the GUI more (changing the slider value, 'selecting' the single project) removes this error, and the attempt to submit the job is then made.
Could also test: the app is loaded, then the "Run Model" button is clicked immediately. And what if the annotation store is empty?
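A defensive guard for the empty-store case might look like the sketch below, assuming `annotation_store` is a dict that only receives its `image_shapes` entry once a project is actively selected (an assumption based on the traceback above; the function name is hypothetical):

```python
from dash.exceptions import PreventUpdate

from utils import data_utils  # module path taken from the traceback above


def save_if_project_selected(annotation_store, project_name):
    # Skip saving when no project was ever actively selected: the store is
    # empty or never received "image_shapes" (the KeyError seen above).
    if not annotation_store or "image_shapes" not in annotation_store:
        raise PreventUpdate
    data_utils.save_annotations_data(annotation_store, project_name)
```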
@Wiebke I think this is happening because of this line, where my guess is that `DATA_OPTIONS` evaluates to `None`, which means the slider is disabled and so this block isn't hit.
I think you're probably right that this is because of a different structure on the Tiled server on your end. What's the structure of the `data` variable you get after running:
```python
client = from_uri(TILED_URI, api_key=API_KEY)
data = client["data"]
```
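Since Tiled client containers behave like mappings, printing the repr and the keys is usually enough to see the structure (a quick sketch; `TILED_URI` and `API_KEY` are the same config values as above):

```python
from tiled.client import from_uri

client = from_uri(TILED_URI, api_key=API_KEY)
data = client["data"]
print(data)        # the repr shows the container type and entry count
print(list(data))  # the keys of the entries under "data"
```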
This does indeed seem to have been an issue with our previous local Tiled setup; it is resolved with the updated population of the project list.
Can confirm that this runs on my local setup.
This PR adds two callbacks to handle submitting a segmentation job run and monitoring for its results. As the computing API is only accessible from within Vaughan, I've added flags around a `MODE="dev"` environment variable to simulate behaviour during local development and on the apps deployed to Plotly's servers.
Note the `#TODO`s, which should be picked up by the LBL team, who have access to Vaughan. The "Show output results" switch doesn't do anything yet.
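A minimal sketch of the `MODE="dev"` gating pattern described above; the function name and endpoint are illustrative, not the PR's actual identifiers:

```python
import os

import requests

COMPUTE_API_URL = "http://compute-api.example"  # placeholder, not the real endpoint


def submit_segmentation_job(payload):
    """Submit a job, or simulate the submission when running outside Vaughan."""
    if os.getenv("MODE") == "dev":
        # The computing API is unreachable here, so fake a successful submission.
        return {"uid": "dev-job-0", "status": "simulated"}
    response = requests.post(f"{COMPUTE_API_URL}/jobs", json=payload, timeout=5)
    response.raise_for_status()
    return response.json()
```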