Replies: 1 comment
-
Hi @K-Tett, thanks for the question. Whether inside or outside a Jupyter notebook, viewing the 3D volume itself requires a visualization tool to open the images (e.g., 3D NIfTI images). However, you can run inference with the trained model without 3D Slicer, purely from the command line: prepare a Python script for inference/testing, load the model, and use an inferer (e.g., SlidingWindowInferer). We have lots of examples/tutorials, for instance https://github.com/Project-MONAI/tutorials/blob/main/3d_segmentation/spleen_segmentation_3d.ipynb, which includes a section on evaluation on the original images. Thanks
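A minimal sketch of what such a standalone inference script could look like, assuming a 3D UNet architecture and placeholder file names (`model.pt`, `image.nii.gz`) that are not from this thread; adjust the network definition and roi_size to match how your model was actually trained:

```python
import torch
from monai.inferers import SlidingWindowInferer
from monai.networks.nets import UNet
from monai.transforms import Compose, EnsureChannelFirst, LoadImage, ScaleIntensity

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Rebuild the same architecture used for training, then load the saved weights.
# The UNet parameters here are illustrative placeholders.
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128, 256),
    strides=(2, 2, 2, 2),
    num_res_units=2,
).to(device)
model.load_state_dict(torch.load("model.pt", map_location=device))
model.eval()

# Load and preprocess one 3D NIfTI image, then add a batch dimension.
pre = Compose([LoadImage(image_only=True), EnsureChannelFirst(), ScaleIntensity()])
image = pre("image.nii.gz").unsqueeze(0).to(device)

# Patch-based inference over the full volume with a sliding window.
inferer = SlidingWindowInferer(roi_size=(96, 96, 96), sw_batch_size=4, overlap=0.25)
with torch.no_grad():
    logits = inferer(image, model)
    prediction = torch.argmax(logits, dim=1)  # discrete label map
```

From here you can save `prediction` back to NIfTI (e.g., with MONAI's SaveImage transform) or wrap the script in whatever standalone app you are building; the linked spleen segmentation tutorial shows the full pre/post-processing pipeline.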
-
Hi MONAI Team,
I'd like to know how to use the trained model to predict labels in a Jupyter notebook and deploy it as a standalone app. The model was trained on annotations created and submitted through 3D Slicer. I'm not sure whether I can use the trained model anywhere other than a visualization tool like 3D Slicer.
Excuse my English.
Any help is appreciated. Thank you in advance.