diff --git a/README.md b/README.md
index 72a4df970..7e7ca7aaf 100644
--- a/README.md
+++ b/README.md
@@ -316,22 +316,22 @@ This tutorial shows the use cases of training and validating a 3D Latent Diffusi
 ##### [2D latent diffusion model](./generative/2d_ldm)
 This tutorial shows the use cases of training and validating a 2D Latent Diffusion Model.
 
-#### [Brats 3D latent diffusion model](./3d_ldm/README.md)
+##### [Brats 3D latent diffusion model](./3d_ldm/README.md)
 Example shows the use cases of training and validating a 3D Latent Diffusion Model on Brats 2016&2017 data, expanding on the above notebook.
 
-#### [MAISI 3D latent diffusion model](./maisi/README.md)
+##### [MAISI 3D latent diffusion model](./maisi/README.md)
 Example shows the use cases of training and validating Nvidia MAISI (Medical AI for Synthetic Imaging) model, a 3D Latent Diffusion Model that can generate large CT images with paired segmentation masks, variable volume size and voxel size, as well as controllable organ/tumor size.
 
-#### [SPADE in VAE-GAN for Semantic Image Synthesis on 2D BraTS Data](./spade_gen)
+##### [SPADE in VAE-GAN for Semantic Image Synthesis on 2D BraTS Data](./spade_gen)
 Example shows the use cases of applying SPADE, a VAE-GAN-based neural network for semantic image synthesis, to a subset of BraTS that was registered to MNI space and resampled to 2mm isotropic space, with segmentations obtained using Geodesic Information Flows (GIF).
 
-#### [Applying Latent Diffusion Models to 2D BraTS Data for Semantic Image Synthesis](./spade_ldm)
+##### [Applying Latent Diffusion Models to 2D BraTS Data for Semantic Image Synthesis](./spade_ldm)
 Example shows the use cases of applying SPADE normalization to a latent diffusion model, following the methodology by Wang et al., for semantic image synthesis on a subset of BraTS registered to MNI space and resampled to 2mm isotropic space, with segmentations obtained using Geodesic Information Flows (GIF).
 
-#### [Diffusion Models for Implicit Image Segmentation Ensembles](./image_to_image_translation)
+##### [Diffusion Models for Implicit Image Segmentation Ensembles](./image_to_image_translation)
 Example shows the use cases of how to use MONAI for 2D segmentation of images using DDPMs. The same structure can also be used for conditional image generation, or image-to-image translation.
 
-#### [Evaluate Realism and Diversity of the generated images](./realism_diversity_metrics)
+##### [Evaluate Realism and Diversity of the generated images](./realism_diversity_metrics)
 Example shows the use cases of using MONAI to evaluate the performance of a generative model by computing metrics such as Frechet Inception Distance (FID) and Maximum Mean Discrepancy (MMD) for assessing realism, as well as MS-SSIM and SSIM for evaluating image diversity.
 
 #### [VISTA2D](./vista_2d)
diff --git a/generation/README.md b/generation/README.md
index 351d57196..ddfba959a 100644
--- a/generation/README.md
+++ b/generation/README.md
@@ -28,7 +28,7 @@ Example shows the use cases of training and validating a 3D Latent Diffusion Mod
 ## [MAISI 3D latent diffusion model](./maisi/README.md)
 Example shows the use cases of training and validating Nvidia MAISI (Medical AI for Synthetic Imaging) model, a 3D Latent Diffusion Model that can generate large CT images with paired segmentation masks, variable volume size and voxel size, as well as controllable organ/tumor size.
 
-## [SPADE in VAE-GAN for Semantic Image Synthesis on 2D BraTS Data](./spade_gen/spade_gen.ipynb)
+## [SPADE in VAE-GAN for Semantic Image Synthesis on 2D BraTS Data](./spade_gan/spade_gan.ipynb)
 Example shows the use cases of applying SPADE, a VAE-GAN-based neural network for semantic image synthesis, to a subset of BraTS that was registered to MNI space and resampled to 2mm isotropic space, with segmentations obtained using Geodesic Information Flows (GIF).
 
 ## [Applying Latent Diffusion Models to 2D BraTS Data for Semantic Image Synthesis](./spade_ldm/spade_ldm_brats.ipynb)
diff --git a/runner.sh b/runner.sh
index 5bae8855c..61cb0288c 100755
--- a/runner.sh
+++ b/runner.sh
@@ -130,7 +130,6 @@ skip_run_papermill=("${skip_run_papermill[@]}" .*nuclick_training_notebook.ipynb
 skip_run_papermill=("${skip_run_papermill[@]}" .*nuclei_classification_infer.ipynb*)  # https://github.com/Project-MONAI/tutorials/issues/1542
 skip_run_papermill=("${skip_run_papermill[@]}" .*nuclick_infer.ipynb*)  # https://github.com/Project-MONAI/tutorials/issues/1542
 skip_run_papermill=("${skip_run_papermill[@]}" .*unet_segmentation_3d_ignite_clearml.ipynb*)  # https://github.com/Project-MONAI/tutorials/issues/1555
-skip_run_papermill=("${skip_run_papermill[@]}" .*3d_image_transforms.ipynb*)  # https://github.com/Project-MONAI/tutorials/issues/1698
 skip_run_papermill=("${skip_run_papermill[@]}" .*vista_2d_tutorial_monai.ipynb*)  # output formatting
diff --git a/vista_2d/vista_2d_tutorial_monai.ipynb b/vista_2d/vista_2d_tutorial_monai.ipynb
index 93a9d736a..026f9bf56 100644
--- a/vista_2d/vista_2d_tutorial_monai.ipynb
+++ b/vista_2d/vista_2d_tutorial_monai.ipynb
@@ -65,7 +65,7 @@
     "!python -c \"import ipykernel\" || pip install -q ipykernel\n",
     "!python -c \"import cv2\" || pip install -q opencv-python-headless\n",
     "!python -c \"import tqdm\" || pip install -q tqdm\n",
-    "!python -c \"import numba\" || pip installß -q numba\n",
+    "!python -c \"import numba\" || pip install -q numba\n",
     "!python -c \"import segment_anything\" || pip install -q git+https://github.com/facebookresearch/segment-anything.git\n",
     "%matplotlib inline"
   ]
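The final hunk fixes a stray `ß` in the notebook's conditional-install cells, which all follow the same idiom: try the import, and only call `pip` when it fails. As a hedged sketch of that pattern, the same logic can be factored into a small helper; the `ensure` function name and the module/package pairs below are illustrative, not part of the tutorials repo:

```shell
#!/usr/bin/env bash
# Sketch of the notebook's conditional-install idiom ("ensure" is a
# hypothetical helper): attempt the import, fall back to pip on failure.
ensure() {
  local module="$1"          # name used in "import ..."
  local package="${2:-$1}"   # pip package name, defaults to the module name
  python -c "import ${module}" 2>/dev/null || pip install -q "${package}"
}

ensure numba                        # package name matches the import name
ensure cv2 opencv-python-headless   # import name differs from the pip name
```

The second argument matters for packages like OpenCV, where the pip distribution name (`opencv-python-headless`) differs from the importable module name (`cv2`).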