Strategy to re-use existing binary ground truth for enriching contrast-agnostic model #84
Comments
Agree.
I like it!
Me too!
Exactly! Let's wait for @sandrinebedard's opinion. I will then update our convention.
Also tagging @mguaypaq, who is well versed in BIDS.
I agree.
Some thoughts:
I guess we concluded to change Mathieu's point about this.
First result with a fixed kernel: taking the binary GT and applying a dilation with a fixed kernel (see Notebook), we can obtain a soft mask that keeps the CSA measure (images 5-6). Performing this procedure on all slices, we observe a preserved CSA between the binary GT and the soft GT. Here is the QC for sub-MRS001. I will continue to investigate the entire database to see if there is a significant difference between the CSA from the binary and soft GT.
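The dilation-then-smoothing step above can be sketched as follows. This is a minimal toy illustration with scipy: the 3x3 kernel, the sigma value, and the partial-volume definition of CSA are my assumptions, not the exact settings from the Notebook.

```python
import numpy as np
from scipy import ndimage

def soften(mask_bin, sigma=1.0):
    """Dilate a binary mask with a fixed 3x3 kernel, then Gaussian-blur it."""
    dilated = ndimage.binary_dilation(mask_bin, structure=np.ones((3, 3)))
    return ndimage.gaussian_filter(dilated.astype(float), sigma=sigma)

def csa(mask, pixel_area=1.0):
    """CSA of a (possibly soft) mask: sum of voxel values times pixel area.

    For a binary mask this is the voxel count; for a soft mask it weights
    voxels by partial volume (an assumption about how CSA is measured).
    """
    return float(mask.sum()) * pixel_area

# Toy axial slice: a 5x5 "cord" in a 20x20 image
slice_bin = np.zeros((20, 20))
slice_bin[8:13, 8:13] = 1

slice_soft = soften(slice_bin, sigma=1.0)
print(csa(slice_bin), csa(slice_soft))
```

Note that on this toy slice the blur itself preserves the partial-volume CSA (a normalized Gaussian redistributes mass without adding any), so the CSA change comes entirely from the dilation step; this is why the dilation amount needs tuning against the measured CSA.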
This is excellent @Nilser3! I think it's the way to go.
As previously reported. Legend of QC masks:
Comparison of binary vs. soft CSA in 12 patients (M0 and M24). In the QC there is a shrinking effect for the soft mask, but it is only visual; here is the real image:
We have two params: the sigma of the Gaussian kernel (which creates the softness of the GT), and the dilation/erosion to apply (to match the CSA of the contrast-agnostic model). I suggest we fix the first one (i.e., sigma) and then find the appropriate dilation/erosion to match the CSA. Question: how do we find the appropriate sigma?
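A sketch of that calibration loop: sigma is fixed first, and the dilation/erosion count is searched so that the softened CSA matches the binary CSA. The CSA-as-sum definition, the search range, and the toy geometry are assumptions for illustration only.

```python
import numpy as np
from scipy import ndimage

def soften(mask_bin, n_dil, sigma):
    """n_dil > 0: that many binary dilations; n_dil < 0: erosions; then blur."""
    m = mask_bin.astype(bool)
    if n_dil > 0:
        m = ndimage.binary_dilation(m, iterations=n_dil)
    elif n_dil < 0:
        m = ndimage.binary_erosion(m, iterations=-n_dil)
    return ndimage.gaussian_filter(m.astype(float), sigma=sigma)

def csa(mask):
    # Partial-volume CSA: sum of voxel values (assumed definition)
    return float(mask.sum())

mask_bin = np.zeros((30, 30))
mask_bin[10:20, 10:20] = 1  # toy 10x10 cord

sigma = 1.5                  # fixed first, as suggested above
target = csa(mask_bin)

# Pick the dilation/erosion count whose softened CSA is closest to the target
best = min(range(-2, 3),
           key=lambda n: abs(csa(soften(mask_bin, n, sigma)) - target))
print("best n_dil:", best)
```

On real anisotropic data the optimum would generally be non-zero; here the toy mask and the sum-based CSA make no morphological correction necessary, which is exactly what the search should discover.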
Adding my two cents here: recently I was playing with dilation/erosion and a Gaussian kernel by applying them to lesion masks in the context of MS lesion augmentation. I used this approach and saw that softness preservation was okay. What I learned is that it was easier to fix the binary dilation to the default values given in scipy and tweak the sigma value to match the softness we want.
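That recipe (default scipy dilation, then sweeping sigma until the mask is as soft as desired) can be sketched as below; the "softness" metric used here, the fraction of voxels strictly between 0 and 1, is my own stand-in, not something defined in this thread.

```python
import numpy as np
from scipy import ndimage

def soft_fraction(mask):
    """Fraction of voxels with a value strictly between 0 and 1."""
    eps = 1e-6
    return float(np.mean((mask > eps) & (mask < 1 - eps)))

mask_bin = np.zeros((24, 24))
mask_bin[8:16, 8:16] = 1

# Dilation with scipy's default structuring element, as suggested above
dilated = ndimage.binary_dilation(mask_bin).astype(float)

# Sweep sigma: a larger sigma gives softer (more partial-volume) edges
for sigma in (0.5, 1.0, 2.0):
    soft = ndimage.gaussian_filter(dilated, sigma=sigma)
    print(f"sigma={sigma}: soft fraction = {soft_fraction(soft):.3f}")
```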
Thank you for chipping in. One limitation of that approach (i.e., binary morphomath followed by smoothing) is when the GT itself is already smooth. We currently don't have to deal with that, though we might in the future (e.g., if we need to re-calibrate our GT).
What makes it easier?
Ah right! One important note: in my approach the mask was not soft, so what you described is indeed a limitation.
Oh, it's just one less hyperparameter to think about (i.e., the structuring element for the dilation).
OK, so you first calibrate and then smooth? Then there is a risk that the CSA changes after smoothing.
In the context of the problem discussed in this issue, yes. But my experiments did not concern CSA at all. They were simply about smoothing the lesion (i.e., preserving PVE) after a lesion has been copied from a patient to a healthy subject. I also had to use binarized GT for (nnUNet) training; the dilation and smoothing were only to preserve PVE, not to create soft masks. With my initial comment I just wanted to point to some code as a starting point/direction; sorry if it is going off topic already 😅
GOAL: find the appropriate smoothing kernel. See also: #84 (comment)
Thank you @jcohenadad. Following this summary, I have applied the approach, using a kernel as:
After testing different combinations of factor_a, factor_b and factor_c, here are the minimal MSE scores. The minimal MSE was 1.620882622212772e-05, with MI = 1.7332167484902985 (MSE and MI computed on the 3D masks), obtained for factor_a = 25, factor_b = 0 and factor_c = 39. Here are the results:
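The thread does not spell out what factor_a/b/c parameterize, so here is a generic sketch of the kind of grid search involved: three hypothetical kernel factors are swept, and the combination minimizing the MSE between the generated soft mask and a reference soft mask is kept. All names, ranges, and the factor-to-operation mapping are illustrative assumptions.

```python
import itertools
import numpy as np
from scipy import ndimage

def make_soft(mask_bin, factor_a, factor_b, factor_c):
    """Hypothetical parameterization: factor_a = dilations, factor_b = erosions,
    factor_c = Gaussian sigma (scaled). Not the actual definition from the script."""
    m = mask_bin.astype(bool)
    if factor_b:
        m = ndimage.binary_erosion(m, iterations=factor_b)
    if factor_a:
        m = ndimage.binary_dilation(m, iterations=factor_a)
    return ndimage.gaussian_filter(m.astype(float), sigma=0.5 * factor_c)

mask_bin = np.zeros((32, 32))
mask_bin[12:20, 12:20] = 1

# Reference soft mask the search should reproduce (built with a known setting)
reference = make_soft(mask_bin, factor_a=1, factor_b=0, factor_c=2)

# Exhaustive grid search minimizing the MSE against the reference
best = min(itertools.product(range(3), range(2), range(1, 4)),
           key=lambda f: float(np.mean((make_soft(mask_bin, *f) - reference) ** 2)))
print("best (factor_a, factor_b, factor_c):", best)
```

Because the reference here was generated with a known setting, the search recovers exactly that setting with zero MSE; on real data the reference would be the contrast-agnostic soft prediction, and the minimum would be non-zero, as in the MSE value reported above.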
nice!
Very nice investigations @Nilser3! A few comments:
Just a thought on a wild idea (I need to flesh out the details later). Instead of us trying to find an appropriate smoothing kernel to go from hard to soft masks, what if we train a DL model to do that for us? Pros: (1) learning kernels is exactly what DL models are good at, so we'd rather outsource this to a model; (2) we have good-quality, manually corrected binary labels (so data size is not a problem). Model inputs: binary labels. I might be missing some obvious things; any suggestions are welcome! EDIT: this also assumes we're not using contrast-agnostic model predictions anywhere (i.e., the dataset/contrasts on which we want to improve the contrast-agnostic model already have QC'd binary labels).
This is an interesting approach. My only concern is that we can only assess a CNN's performance on a certain data distribution and test set. What if we attach too much 'trust' to the produced CNN, and one day we blindly apply it to binary segmentations that produce wrong soft ones (e.g., because the input resolution is drastically different)? A smoothing kernel is less 'opaque' in terms of interpretability. That being said, I'm open to this idea, but I need to be convinced it works as expected in many different conditions.
Thank you for your comments @jcohenadad, @naga-karthik. Continuing my explorations using kernels, I propose:
I have this script for this purpose; here are some results for different modalities (resolutions):
MP2RAGE: res = 1.0x0.937x0.937 mm
T2w: res = 0.8x0.5x0.5 mm
3T T2star: res = 0.437x0.437x5.0 mm
STIR: res = 0.7x0.7x3.0 mm
Kernel used (11x11). Note:
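When applying one kernel across modalities with such different resolutions, one option is to define the blur in millimeters and convert it to per-axis voxel units, so that the physical softness is comparable across images. A sketch of that idea is below; the sigma_mm value is an assumption, and this is not the 11x11 kernel from the script above.

```python
import numpy as np
from scipy import ndimage

def soften_mm(mask_bin, voxel_size_mm, sigma_mm=1.0):
    """Gaussian blur with a physical sigma, converted to voxels per axis."""
    sigma_vox = [sigma_mm / s for s in voxel_size_mm]
    return ndimage.gaussian_filter(mask_bin.astype(float), sigma=sigma_vox)

# Resolutions quoted above (mm)
resolutions = {
    "MP2RAGE": (1.0, 0.937, 0.937),
    "T2w": (0.8, 0.5, 0.5),
    "T2star": (0.437, 0.437, 5.0),
    "STIR": (0.7, 0.7, 3.0),
}

vol = np.zeros((32, 32, 32))
vol[12:20, 12:20, 12:20] = 1  # toy binary "cord" volume

sums = {}
for name, res in resolutions.items():
    soft = soften_mm(vol, res)
    sums[name] = float(soft.sum())
    print(name, round(sums[name], 4))
```

Since the Gaussian kernel is normalized, the partial-volume sum of the mask is unchanged at every resolution, so this step does not by itself alter the CSA.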
Closing this issue, as we have identified other ways of enriching the contrast-agnostic model. Summary of key points:
There are a few projects where binary ground truths of good quality already exist. They have been reviewed by a human and are reliable to use for training. However, given that the original mask was created using `sct_deepseg_sc`, there is over/under-segmentation. Moreover, the mask is binary, and we'd rather enrich the contrast-agnostic model using soft masks, in order to avoid reducing the softness of the model prediction (@naga-karthik observed this in previous experiments).
One possible strategy is to name the soft mask `sub-XXX_T1w_label-SC_seg-soft.nii.gz`, or `sub-XXX_T1w_label-SC_probseg.nii.gz` (although I find the latter less intuitive; maybe we should revisit our convention @valosekj @sandrinebedard).