-
Hi @matt3o, from your description, I believe you simply want to perform a metric calculation.
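Something along these lines might already be enough (a minimal, untested sketch: the file paths and num_classes are placeholders, and it assumes a reasonably recent MONAI version and that both labels are saved as single-channel volumes with integer class indices):

```python
from monai.metrics import DiceMetric
from monai.transforms import AsDiscrete, EnsureChannelFirst, LoadImage

num_classes = 2  # placeholder: background + one foreground class
load = LoadImage(image_only=True)              # label file -> (Meta)Tensor
channel_first = EnsureChannelFirst()           # -> (1, H, W, D)
to_onehot = AsDiscrete(to_onehot=num_classes)  # -> (num_classes, H, W, D)

dice_metric = DiceMetric(include_background=False, reduction="mean")

pairs = [("refined_label.nii.gz", "original_label.nii.gz")]  # placeholder paths
for pred_path, label_path in pairs:
    pred = to_onehot(channel_first(load(pred_path)))
    label = to_onehot(channel_first(load(label_path)))
    # DiceMetric expects batched one-hot tensors: (B, num_classes, H, W, D)
    dice_metric(y_pred=pred[None], y=label[None])

print("mean Dice:", dice_metric.aggregate().item())
```

This is the same DiceMetric that the MeanDice handler wraps, so the numbers should be directly comparable to your earlier SupervisedEvaluator runs.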
-
Hey guys! I am currently converting my Sliding Window DeepEdit code to run on MONAILabel for a user study.
As this part now appears to be working, I want to evaluate the newly created labels against the original labels. That is, I have run the network in MONAILabel, got a prediction, refined it, and saved it again as the new label / prediction.
Now I want to evaluate how well those new labels match the original ones in terms of Dice. I have been using SupervisedTrainer and SupervisedEvaluator for the training and validation runs on the original labels.
However, this is no longer viable, since I don't want the Evaluator to run my network but instead just to calculate the Dice from the already available data.
What would be a good and sensible approach to implement such a validation run?
If possible I would like to reuse the SupervisedEvaluator code (most importantly the metrics, to be 100% sure they are the same), so I was wondering if I can do an empty run without a network, where I only set data['pred'] = new_label_file. Maybe even as a new child class of Evaluator where I override _iteration() to do nothing, roughly like the sketch below?
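Untested sketch of what I have in mind (the "image"/"label"/"pred" keys and the dataloader setup are just my own assumptions):

```python
import torch
from monai.engines import Evaluator, IterationEvents
from monai.handlers import MeanDice, from_engine
from monai.transforms import AsDiscreted, Compose


class PrecomputedEvaluator(Evaluator):
    """Evaluator child that skips the forward pass and just exposes the
    pre-computed prediction (the refined label from MONAILabel) to the metrics."""

    def _iteration(self, engine, batchdata):
        # mirror SupervisedEvaluator._iteration, minus the network call
        engine.state.output = {
            "image": batchdata["image"],
            "label": batchdata["label"],  # original label
            "pred": batchdata["pred"],    # refined label loaded from disk
        }
        # fire the usual iteration events so postprocessing/decollate handlers still run
        engine.fire_event(IterationEvents.FORWARD_COMPLETED)
        engine.fire_event(IterationEvents.MODEL_COMPLETED)
        return engine.state.output


# possible usage, assuming val_loader yields dicts with "image", "label", "pred":
# evaluator = PrecomputedEvaluator(
#     device=torch.device("cpu"),
#     val_data_loader=val_loader,
#     postprocessing=Compose([AsDiscreted(keys=("pred", "label"), to_onehot=2)]),
#     key_val_metric={"val_mean_dice": MeanDice(output_transform=from_engine(["pred", "label"]))},
# )
# evaluator.run()
```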