RuntimeError when "brats_mri_segmentation_v0.2.1" from monaibundle is used. #1051
Hi @PranayBolloju, for the BRATS bundle, each data sample contains a 4-channel input volume. The brats_mri_segmentation_v0.2.1 bundle needs a pre-processing step for BRATS data later than 2018. Thanks for reporting this. We should add a note in the bundle README or on the MONAI Label side to remind users about pre-processing BRATS data. Hope this helps solve your problem.
Hi @tangy5, thanks for the response. Can you suggest a way to preprocess the data, i.e., transpose the images?
@tangy5, does the input to the brats_mri_segmentation_v0.2.1 bundle need to be channel-first? Do the transforms AsChannelFirstd or AsChannelLastd help? Perhaps we only need to add this argument when loading the images: https://github.com/Project-MONAI/MONAI/blob/dev/monai/transforms/io/dictionary.py#L128 Here is where this can be added: https://github.com/Project-MONAI/model-zoo/blob/dev/models/brats_mri_segmentation/configs/inference.json#L37 as well as in training: https://github.com/Project-MONAI/model-zoo/blob/dev/models/brats_mri_segmentation/configs/train.json#L59
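For reference, a minimal sketch of what that argument does when loading a 4-modality Task01_BrainTumour volume; the dictionary key "image" matches the bundle configs, but the file path and printed shape are illustrative:

```python
from monai.transforms import LoadImaged

# Load the multi-modal NIfTI channel-first instead of channels-last.
loader = LoadImaged(keys="image", ensure_channel_first=True)
data = loader({"image": "BRATS_485.nii.gz"})  # hypothetical MSD file
print(data["image"].shape)  # expected: (4, 240, 240, 155), channels first
```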
Hi @PranayBolloju, I have tried this model myself and I've got the same error. I've also changed the LoadImage args and managed to get a prediction. I think the quality of the model can be easily improved. Please watch this video: multi-modality-orientation.mp4 One thing you could do is first update both the inference and train files (add the ensure_channel_first arg) and then re-train the model using the Task01_BrainTumour dataset. Please follow these steps: #1055 (comment) BTW, there is another unsolved issue regarding multimodality/multiparametric images in Slicer: when a NIfTI file has more than one modality, Slicer reads only one. NIfTI can be messy, and that's why I make Slicer not consider the orientation. Ugly solution :/ MONAI Label does support multiparametric images, but Slicer can't read multiple images packed into a single NIfTI file. More on this here: #729 (comment)
Thanks @diazandr3s, I hit the same issue loading multimodality data in Slicer. Your solution looks good; we might need to add a note to the monaibundle README on using the BRATS bundle, covering both the image pre-processing and a reminder about Slicer loading multi-channel images.
Hi @diazandr3s, thanks for the video. I have tried the suggestions and got the prediction. The segmentation looks fine in 3D, but nothing comes up in the slice views.
Thanks for the update, @PranayBolloju. As you can see from the video (minute ~1:11), I proposed an ugly solution (discard orientation) for MONAI Label to load the multimodality images in Slicer. I was wondering whether all modalities are absolutely needed for your use case. Otherwise, I'd suggest working with a single modality, as it avoids this change from the Slicer module perspective. Let us know.
Hi @diazandr3s, if brain hemorrhage or tumor segmentation can be done with equal accuracy using a single modality, then I suppose we don't need multimodality images.
Hi @PranayBolloju, brain hemorrhage and tumor segmentation are two different tasks and they use different image modalities. AFAIK, brain hemorrhage segmentation employs CT images, while for brain tumor segmentation MR images are more commonly used.
Hi @diazandr3s, thanks for the insights. Is there any model available for brain hemorrhage segmentation separately, or can we use the same model used for tumor segmentation?
Although no brain hemorrhage segmentation model (using CT images) is available in MONAI Label, it shouldn't be difficult for you to create one from a public dataset like this one: https://instance.grand-challenge.org/ You may find this useful as well: #1055 (comment) Regarding the brain tumor segmentation model (using MR images), you could use the same Task01_BrainTumour dataset but with a single modality. Hope this helps,
Hi @diazandr3s, thanks a lot for this information. The dataset link you provided says it's a forbidden dataset. Is there a way to get a dataset, perhaps one with hemorrhage segmentation labels?
That's strange.
Hi @diazandr3s
Hi @PranayBolloju, I'd suggest you try another dataset like this one: https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection
Hi @diazandr3s, many thanks for the suggestions. I have seen that dataset too, but it does not contain 3D images and it does not have annotations either. We would have to annotate hemorrhages ourselves, which might lead to wrong labeling. I was hoping to find a dataset already annotated by experts, like the Task01_BrainTumour dataset or the INSTANCE 2022 dataset. If I don't find any pre-annotated dataset, as a last resort I will attempt to label the segmentations using 3D Slicer. There are a couple of questions in this section.
Hi @PranayBolloju, Regarding this:
I fully understand. I hope the challenge organizers reply soon. That will facilitate things a lot.
Currently, MONAI Label does not support DICOM images in a local folder. There are two options here: 1/ convert the images to NRRD or NIfTI format and then work from a local folder, or 2/ use a DICOMweb server.
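For option 1, a minimal sketch using SimpleITK (my choice here, not something prescribed in this thread; paths are illustrative):

```python
import SimpleITK as sitk

# Read a DICOM series from a folder and write it as a single NIfTI volume
# that MONAI Label can then serve from a local studies folder.
files = sitk.ImageSeriesReader.GetGDCMSeriesFileNames("dicom_case_dir")
reader = sitk.ImageSeriesReader()
reader.SetFileNames(files)
image = reader.Execute()
sitk.WriteImage(image, "case.nii.gz")
```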
MONAI Label has examples for 2D segmentation, such as the endoscopy and pathology apps. The question is which viewer you want to integrate MONAI Label with. You could also modify the radiology app to work in 2D as well. Please see this discussion: #829 Hope this helps,
Hi @diazandr3s, thank you for all the suggestions, they really helped. I followed your suggestion and converted some images from NIfTI to DICOM using plastimatch. I added the 'ensure_channel_first' arg in inference.json in monaibundle\brats_mri_segmentation_v0.2.1\configs. Then I started the monailabel server using this command: I was able to see the images stored in the Google DICOM web server in 3D Slicer, but when I tried to run inference I got the following error. The same model was doing tumor segmentation perfectly when using local images, i.e., NIfTI.
Hi @PranayBolloju, thanks for the update. Did you make sure the DICOM images are multiparametric? I mean, does the input have the 4 modalities needed for the pretrained model? I believe this is why you're getting this error. Hope this helps,
Hi @diazandr3s, thanks for the reply. I think the images converted to DICOM do not retain the 4 modalities. I have tried 2 ways to convert the images.
Is there a way to preserve the modalities when converting to DICOM?
BRATS and Task01_BrainTumour are highly preprocessed datasets: they are skull-stripped and modality co-registered. It is not easy to find a similar dataset with these characteristics. I'm not sure about this, but I think you can't save all modalities in a single DICOM file. @wyli do you know if this is possible? Can we store 4 modalities in a single DICOM file?
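If the goal is one file per modality instead, here is a minimal sketch with nibabel, assuming the Task01_BrainTumour layout where the last axis holds the modalities; the filename and the modality order (taken from the MSD dataset description) should be verified:

```python
import nibabel as nib
import numpy as np

# Split one 4D Task01_BrainTumour volume (H, W, D, 4) into four 3D NIfTIs.
img = nib.load("BRATS_485.nii.gz")  # illustrative filename
data = np.asanyarray(img.dataobj)
for i, name in enumerate(["flair", "t1", "t1ce", "t2"]):
    vol = nib.Nifti1Image(data[..., i], img.affine)
    nib.save(vol, f"BRATS_485_{name}.nii.gz")
```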
Hi @diazandr3s, thanks for the clarification. I went ahead and trained a model with the converted images (i.e., images converted to a single modality). The following are the changes I made in the config files before training the model.
The model was trained successfully for 300 epochs with an average Dice score of around 81. But when I tried inference, only one of the labels was being segmented. Is there anything I have missed here?
Hi @PranayBolloju, thanks for the update. It's good to see these results. Does this happen on all test cases? Which modality did you use here? Bear in mind that the tumor core (necrotic area) and edema (whole tumor) are visible on the other modalities (T1 + contrast, T2, etc.). That's mainly the reason for using different modalities.
Hi @diazandr3s, these are a couple of images used for this model, and these are the labels. A similar thing happens when running inference with the pretrained model from monaibundle, i.e., brats_mri_segmentation_v0.2.1, on the Task01_BrainTumour dataset. All three labels can be seen in the Segment Editor, but only one label is visible in the mask.
Thanks for clarifying this, @PranayBolloju. It seems this issue comes from the post-processing transforms. Please change this argument (https://github.com/Project-MONAI/model-zoo/blob/dev/models/brats_mri_segmentation/configs/inference.json#L76) to softmax=true and this one (https://github.com/Project-MONAI/model-zoo/blob/dev/models/brats_mri_segmentation/configs/inference.json#L90) to argmax=true. They should work like this: https://github.com/Project-MONAI/MONAILabel/blob/main/sample-apps/radiology/lib/infers/deepedit.py#L118-L119 It seems the network is outputting 3 channels but only one is being shown in Slicer. Please let me know how that goes.
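In Python terms, the suggested post-transforms would look roughly like this sketch, which mirrors the deepedit.py lines linked above; the key name "pred" is an assumption:

```python
from monai.transforms import Activationsd, AsDiscreted

post_transforms = [
    Activationsd(keys="pred", softmax=True),  # instead of sigmoid=true
    AsDiscreted(keys="pred", argmax=True),    # instead of per-channel thresholding
]
```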
Hi @diazandr3s
Hi @PranayBolloju, as I mentioned before, this bundle was designed to output three channels, one per label, and 3D Slicer only takes the first one. I've checked the training process and it seems it was designed to work like that: a sigmoid per channel and a one-hot representation of the output. See the training transforms: https://github.com/Project-MONAI/model-zoo/blob/dev/models/brats_mri_segmentation/configs/train.json#L153-L160 I initially thought the previous changes could solve the issue, but as the model wasn't trained using the softmax activation function, you get the result you're showing. @tangy5 can you please confirm this? A solution for this is to keep the transforms as is and add another post transform that merges all three channels before this one: https://github.com/Project-MONAI/model-zoo/blob/dev/models/brats_mri_segmentation/configs/inference.json#L92 Another solution is to use the deepedit or segmentation model. Here are the instructions: https://www.youtube.com/watch?v=3HTh2dqZqew&list=PLtoSVSQ2XzyD4lc-lAacFBzOdv5Ou-9IA&index=3 Hope this helps,
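One hedged way to write that channel-merging post transform; the channel order (TC, WT, ET) and the label values 1-3 are assumptions, not taken from the bundle:

```python
import torch
from monai.transforms import Lambdad

def merge_channels(pred):
    # pred: (3, H, W, D) binarized tensor, one channel per label.
    out = torch.zeros_like(pred[0])
    for i in range(pred.shape[0]):
        out[pred[i] > 0] = i + 1  # later channels overwrite earlier ones
    return out[None]  # keep a channel dimension for SaveImaged

merge = Lambdad(keys="pred", func=merge_channels)
```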
Hi @diazandr3s
These are example files that I used for training. Only one of the labels was being segmented. Do you think the model can be improved with more epochs and more images? Or should I change the network being used or the network definition?
Hi @PranayBolloju, I'd suggest the following:
Deepedit uses the whole image for training and inference, while the segmentation model uses patches. Sorry, I had totally forgotten I've developed a model for BRATS. Please use this radiology app: https://github.com/Project-MONAI/MONAILabel/tree/bratsSegmentation/sample-apps/radiology There you have the brats algorithm. Just uncomment these lines and comment the others: https://github.com/Project-MONAI/MONAILabel/blob/bratsSegmentation/sample-apps/radiology/lib/configs/segmentation_brats.py#L33 You could download that radiology app and train the model. Let me know how that goes,
Where have you downloaded the app? There are 4 different apps (sample apps). Check if the dir 'monaibundle' exists relative to where you are running the command.
Also note: bundles work well on Linux, as they sometimes have bash scripts, especially for training. However, you can still run inference on Windows using a bundle via monailabel.
@SachidanandAlle Thank you very much. I added "ensure_channel_first": true in inference.json and train.json, but I got the error 'Failed to run inference in MONAI Label Server'. What should I do?
Start with the simple spleen one. The brain MRI input has 4 channels, and possibly the model was trained over 3, or vice versa.
Also, you need to check the error on the server side. There will be a descriptive log for each of those steps; that should give a fair amount of information about what's happening and why.
@SachidanandAlle OK. I will try.
Dear all members, I'm working on auto segmentation with brats_mri_segmentation_v0.2.1 in 3D Slicer. To start the server, I used the command 'monailabel start_server --app apps/monaibundle --studies datasets/Task01_BrainTumour/imagesTr --conf models brats_mri_segmentation_v0.2.1'. I added '"ensure_channel_first": true' in the "preprocessing" part of the monaibundle's inference.json, but I get the error 'Failed to run Inference in MONAI Label Server'. Is there a solution? train.json also needs to be edited, but I don't know where to add the code. Please let me know the solution. The detailed error is as follows. [3D Slicer error] This will close the current scene. Please make sure you have saved your current work.
Hi @ulphypro, as mentioned before, having 4 modalities in a single NIfTI file does not make much sense: #1051 (comment) I'd recommend the same as @SachidanandAlle: #1051 (comment) Unfortunately, the monaibundle for BRATS (brats_mri_segmentation_v0.2.1) needs more work to properly manage the 4 modalities and be used in Slicer. It currently works in MONAI Core only. Hope that makes sense,
Dear @diazandr3s, thank you for answering my question. Then, shall I change in_channels from 4 to 1 and out_channels from 3 to 1, and run using only one target argument, as follows, in configs/inference.json? And also in configs/train.json?
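For concreteness, a sketch of the edit being asked about, expressed in Python rather than the bundle JSON; the SegResNet arguments are assumptions based on the bundle's network definition, not confirmed values:

```python
from monai.networks.nets import SegResNet

# Hypothetical single-modality variant of the bundle's network.
net = SegResNet(
    in_channels=1,   # was 4, one per modality
    out_channels=1,  # was 3, one per label
    init_filters=16,
    blocks_down=[1, 2, 2, 4],
    blocks_up=[1, 1, 1],
)
```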
Hi @ulphypro, the issue isn't only the code; it is also the dataset. Each file should have a single modality, not 4 as it currently has. If you want to use Slicer, you have to separate the 4 modalities or use the original BRATS 2021 dataset, which has the 4 modalities separated. Once you have the separated files, I'd recommend using the segmentation app in the MONAI Label radiology app: https://github.com/Project-MONAI/MONAILabel/tree/main/sample-apps/radiology The same discussion is happening here: Project-MONAI/model-zoo#239 Hope that makes sense.
Thank you for answering my question. I downloaded the BraTS2021 dataset as you mentioned. Should I run it using apps/radiology with the BraTS2021 dataset? After starting the monailabel server with 'monailabel start_server --app apps/radiology --studies datasets/Task01_BrainTumour/imagesTr --conf models segmentation' in Windows PowerShell, I can't run it in 3D Slicer, because it doesn't provide a segmentation model associated with brain tumor. The person who posted Project-MONAI/model-zoo#239 is also me.
Hi @ulphypro, it seems you've downloaded Task01 from the Medical Segmentation Decathlon. That dataset is composed of files that contain all four modalities in a single NIfTI file. This is precisely the issue: #1051 (comment) I'd recommend you download the original BRATS dataset that has the NIfTI files separated - please check here: https://www.med.upenn.edu/cbica/brats2021/ Then you could start training a model from scratch as recommended here: https://www.youtube.com/watch?v=3HTh2dqZqew&list=PLtoSVSQ2XzyD4lc-lAacFBzOdv5Ou-9IA&index=4 I hope that helps,
Dear @diazandr3s, thank you for answering my question. I ran MONAI Label with 3D Slicer as you mentioned. I proceeded as follows; please see the process below.
- BraTS2021_00621.tar
- BraTS2021_Training_Data.tar
Question 1. Question 2.
Hi @ulphypro, A couple of things here:
Hope this helps,
Dear @diazandr3s Your answer: files ending with _seg.nii.gz are the segmentation ground truth; to run MONAI Label, they shouldn't be in the same folder as the images. -----> I edited the code in segmentation_brats.py. -------> I removed every file in the apps/radiology/model folder. And then I started the server and 3D Slicer again.
Hi @ulphypro, after getting the next sample, did you press the run auto-segmentation button? For how long have you trained the model? The green region seems to be a prediction from a model that hasn't been trained. BTW, the images you have in the main folder are of the same case but of different modalities (FLAIR, T1, T1ce and T2). Are you sure you want to do that? You'd be training a model to recognise tumours on multiple modalities at the same time. I'd use a single modality (FLAIR, T1ce or T2), not all of them, and use more cases/patients. Hope this helps,
Dear @diazandr3s, I'll answer your points. For now, I'm using one file per sample (flair, t1, t2 or t1ce), but I'm likely to use only the t2.nii.gz file per patient later. Then, should I put the other t2.nii.gz files (e.g., a1_t2.nii.gz, a2_t2.nii.gz, a3_t2.nii.gz, ..., etc.) in one folder? I just pressed the 'Next Sample' button, and then it showed a green box in the 3D view. I did nothing except press 'Next Sample'. I also haven't trained any model. First, I don't want the green box or anything else to show in the 3D view when I press the 'Next Sample' button in the MONAI Label module in 3D Slicer. Second, I then want to train a model to extract brain tumor. Third, when I press the 'Run' button under 'Auto Segmentation', the brain tumor should be detected with all its segments within the tumor region. That's all. Please answer my three points above.
Hi @ulphypro,
Then you should train the model on T2 only.
Yes, put all T2 images from all patients in the same folder.
It is strange, if you click on Next Sample, you should only see the image. Please make sure this folder is empty: datasets/BraTS2021_Training_Data/BraTS2021_00002/labels/original
It is difficult to say. I'd suggest you train for some epochs (~100) and see how the model performs.
monailabel for brats segmentation is giving a different error. The server log is below:
Hi, I'm up to the training phase but I always get this error on the 9th epoch. Why is this the case?
Val 9/50 221/250, dice_tc: 0.7409106, dice_wt: 0.837617, dice_et: 0.78997755, time 4.82s
Hi @EdenSehatAI, from the logs, I see the FLAIR sequence is missing for this patient: BraTS2021_00390_flair
Can you make sure this file is in the folder?
Have you checked that the file is in the downloaded folder? That's what the error is about.
Hi @TrushalGulhane, this may be the typical orientation problem. I'm assuming you are using the BRATS files that have the 4 modalities merged into a single NIfTI. Is that correct? In Slicer MONAI Auto3DSeg we also demonstrated a way of using the BRATS models: https://github.com/lassoan/SlicerMONAIAuto3DSeg Please give it a try.
Thank you @diazandr3s. Is it a dataset issue? Is there any available dataset that will work with the BRATS MRI segmentation model?
Hi @TrushalGulhane, I am not aware of another dataset similar to BRATS. As I'm sure you know, the BRATS dataset is a processed dataset composed of four co-registered/aligned MR sequences. BTW, have you tried using the MONAI Auto3DSeg module in Slicer? There, you can also find the BRATS models for the different tumor types: https://github.com/lassoan/SlicerMONAIAuto3DSeg/ Just download the latest Slicer and then install Auto3DSeg via the Extension Manager. Hope this helps,
Describe the bug
MONAI Label server is giving the following error when "brats_mri_segmentation_v0.2.1" is used for brain tumor segmentation.
RuntimeError: Given groups=1, weight of size [16, 4, 3, 3, 3], expected input[1, 240, 240, 240, 160] to have 4 channels, but got 240 channels instead
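The mismatch is easy to reproduce: the bundle's first convolution expects 4 input channels (the weight of size [16, 4, 3, 3, 3] above), but a channels-last volume puts a spatial size where PyTorch expects the channel axis. A minimal sketch, with the tensor downscaled so it runs cheaply:

```python
import torch

conv = torch.nn.Conv3d(in_channels=4, out_channels=16, kernel_size=3)
x = torch.rand(1, 24, 24, 24, 16)  # analogous to input[1, 240, 240, 240, 160]
conv(x)  # RuntimeError: ... expected input to have 4 channels, but got 24

# The fix discussed in this thread is loading the volume channel-first,
# e.g. ensure_channel_first=True in LoadImaged, giving [1, 4, 240, 240, 160].
```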
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Segmentation should be displayed in 3D slicer.
Screenshots
Environment
Ensuring you use the relevant python executable, please paste the output of:
================================
Printing MONAI config...
MONAI version: 1.0.0
Numpy version: 1.22.4
Pytorch version: 1.12.1+cpu
MONAI flags: HAS_EXT = False, USE_COMPILED = False, USE_META_DICT = False
MONAI rev id: 170093375ce29267e45681fcec09dfa856e1d7e7
MONAI file: C:\Users\Admin\AppData\Local\Programs\Python\Python39\lib\site-packages\monai\__init__.py
Optional dependencies:
Pytorch Ignite version: 0.4.10
Nibabel version: 4.0.2
scikit-image version: 0.19.3
Pillow version: 9.2.0
Tensorboard version: 2.10.0
gdown version: 4.5.1
TorchVision version: 0.13.1+cpu
tqdm version: 4.64.0
lmdb version: 1.3.0
psutil version: 5.9.1
pandas version: 1.4.3
einops version: 0.4.1
transformers version: NOT INSTALLED or UNKNOWN VERSION.
mlflow version: NOT INSTALLED or UNKNOWN VERSION.
pynrrd version: 0.4.3