Update google link to use shared drive (#1819)
Update google link to use shared drive

<!--- Put an `x` in all the boxes that apply, and remove the not
applicable items -->
- [ ] Avoid including large-size files in the PR.
- [ ] Clean up long text outputs from code cells in the notebook.
- [ ] For security purposes, please check the contents and remove any sensitive info such as user names and private keys.
- [ ] Ensure (1) hyperlinks and markdown anchors are working, (2) relative paths are used for tutorial repo files, and (3) figures and graphs are placed in the `./figure` folder.
- [ ] Notebook runs automatically via `./runner.sh -t <path to .ipynb file>`.

---------

Signed-off-by: YunLiu <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
KumoLiu and pre-commit-ci[bot] committed Sep 9, 2024
1 parent bd0fa9f commit 2af8c12
Showing 19 changed files with 25 additions and 25 deletions.
2 changes: 1 addition & 1 deletion 3d_classification/densenet_training_array.ipynb
@@ -200,7 +200,7 @@
],
"source": [
"if not os.path.isfile(images[0]):\n",
" resource = \"http://biomedic.doc.ic.ac.uk/brain-development/downloads/IXI/IXI-T1.tar\"\n",
" resource = \"https://developer.download.nvidia.com/assets/Clara/monai/tutorials/IXI-T1.tar\"\n",
" md5 = \"34901a0593b41dd19c1a1f746eac2d58\"\n",
"\n",
" dataset_dir = os.path.join(root_dir, \"ixi\")\n",
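For reference, a minimal sketch of the updated download step outside the notebook, using MONAI's `download_and_extract` with the URL and md5 from this hunk; `root_dir` is a placeholder path, not the notebook's actual setting.

```python
# A sketch, not the notebook's exact cell: fetch IXI-T1 from the new mirror
# and verify it against the md5 recorded in the diff above.
import os
from monai.apps import download_and_extract

resource = "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/IXI-T1.tar"
md5 = "34901a0593b41dd19c1a1f746eac2d58"

root_dir = "."  # placeholder
dataset_dir = os.path.join(root_dir, "ixi")
tarfile_name = os.path.join(dataset_dir, "IXI-T1.tar")
download_and_extract(resource, tarfile_name, dataset_dir, hash_val=md5)
```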
4 changes: 2 additions & 2 deletions 3d_segmentation/swin_unetr_brats21_segmentation_3d.ipynb
@@ -45,7 +45,7 @@
"\n",
"https://www.synapse.org/#!Synapse:syn27046444/wiki/616992\n",
"\n",
"The JSON file containing training and validation sets (internal split) needs to be downloaded from this [link](https://drive.google.com/file/d/1i-BXYe-wZ8R9Vp3GXoajGyqaJ65Jybg1/view?usp=sharing) and placed in the same folder as the dataset. As discussed in the following, this tutorial uses fold 1 for training a Swin UNETR model on the BraTS 21 challenge.\n",
"The JSON file containing training and validation sets (internal split) needs to be downloaded from this [link](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/brats21_folds.json) and placed in the same folder as the dataset. As discussed in the following, this tutorial uses fold 1 for training a Swin UNETR model on the BraTS 21 challenge.\n",
"\n",
"### Tumor Characteristics\n",
"\n",
@@ -114,7 +114,7 @@
" \"TrainingData/BraTS2021_01146/BraTS2021_01146_flair.nii.gz\"\n",
" \n",
"\n",
"- Download the json file from this [link](https://drive.google.com/file/d/1i-BXYe-wZ8R9Vp3GXoajGyqaJ65Jybg1/view?usp=sharing) and placed in the same folder as the dataset.\n"
"- Download the json file from this [link](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/brats21_folds.json) and placed in the same folder as the dataset.\n"
]
},
{
2 changes: 1 addition & 1 deletion 3d_segmentation/swin_unetr_btcv_segmentation_3d.ipynb
@@ -331,7 +331,7 @@
"outputs": [],
"source": [
"# uncomment this command to download the JSON file directly\n",
"# wget -O data/dataset_0.json 'https://drive.google.com/uc?export=download&id=1qcGh41p-rI3H_sQ0JwOAhNiQSXriQqGi'"
"# wget -O data/dataset_0.json 'https://developer.download.nvidia.com/assets/Clara/monai/tutorials/swin_unetr_btcv_dataset_0.json'"
]
},
{
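For environments without wget, an equivalent sketch using MONAI's `download_url`; the target path comes from the commented command above, and the `data/` folder is assumed to exist already.

```python
# Sketch: fetch the same JSON datalist as the wget line above, via MONAI.
from monai.apps import download_url

url = "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/swin_unetr_btcv_dataset_0.json"
download_url(url, "data/dataset_0.json")  # assumes data/ already exists
```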
@@ -71,7 +71,7 @@
"## Load useful data\n",
"\n",
"As described in `readme.md`, we manually labeled 1126 frames in order to build the detection model.\n",
"Please download the manually labeled bounding boxes from [google drive](https://drive.google.com/file/d/1iO4bXTGdhRLIoxIKS6P_nNAgI_1Fp_Vg/view?usp=sharing), the uncompressed folder `labels` is saved into `label_14_tools_yolo_640_blur/`."
"Please download the manually labeled bounding boxes from [google drive](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/1126_frame_labels.zip), the uncompressed folder `labels` is saved into `label_14_tools_yolo_640_blur/`."
]
},
{
2 changes: 1 addition & 1 deletion deployment/ray/mednist_classifier_ray.ipynb
@@ -122,7 +122,7 @@
"metadata": {},
"outputs": [],
"source": [
"resource = \"https://drive.google.com/uc?id=1zKRi5FrwEES_J-AUkM7iBJwc__jy6ct6\"\n",
"resource = \"https://developer.download.nvidia.com/assets/Clara/monai/tutorials/deployment/classifier.zip\"\n",
"dst = os.path.join(\"..\", \"bentoml\", \"classifier.zip\")\n",
"if not os.path.exists(dst):\n",
" download_url(resource, dst)"
2 changes: 1 addition & 1 deletion detection/README.md
@@ -46,7 +46,7 @@ Then run the following command and go directly to Sec. 3.2.
python3 luna16_prepare_env_files.py
```

-Alternatively, you can download the original data and resample them by yourself with the following steps. Users can either download 1) mhd/raw data from [LUNA16](https://luna16.grand-challenge.org/Home/) or its [copy](https://drive.google.com/drive/folders/1-enN4eNEnKmjltevKg3W2V-Aj0nriQWE?usp=share_link), or 2) DICOM data from [LIDC-IDRI](https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=1966254) with [NBIA Data Retriever](https://wiki.cancerimagingarchive.net/display/NBIA/Downloading+TCIA+Images).
+Alternatively, you can download the original data and resample them by yourself with the following steps. Users can either download 1) mhd/raw data from [LUNA16](https://luna16.grand-challenge.org/Home/), or 2) DICOM data from [LIDC-IDRI](https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=1966254) with [NBIA Data Retriever](https://wiki.cancerimagingarchive.net/display/NBIA/Downloading+TCIA+Images).

The raw CT images in LUNA16 have various voxel sizes. The first step is to resample them to the same voxel size, which is defined in the value of "spacing" in [./config/config_train_luna16_16g.json](./config/config_train_luna16_16g.json).

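To make the resampling step concrete, a rough sketch of resampling a single LUNA16 volume to the config's "spacing" value with MONAI transforms; the input path is a placeholder, and this is not the tutorial's own preprocessing script.

```python
# Rough sketch: resample one mhd/raw volume to the voxel spacing defined in
# the training config. The input path below is a placeholder.
import json
from monai.transforms import Compose, LoadImaged, SaveImaged, Spacingd

with open("config/config_train_luna16_16g.json") as f:
    spacing = json.load(f)["spacing"]

resample = Compose(
    [
        LoadImaged(keys="image", ensure_channel_first=True),
        Spacingd(keys="image", pixdim=spacing, mode="bilinear"),
        SaveImaged(keys="image", output_dir="./luna16_resampled", output_postfix="resampled"),
    ]
)
resample({"image": "path/to/luna16/case.mhd"})
```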
2 changes: 1 addition & 1 deletion federated_learning/breast_density_challenge/data/README.md
@@ -1,6 +1,6 @@
## Example breast density data

-Download example data from https://drive.google.com/file/d/1Fd9GLUIzbZrl4FrzI3Huzul__C8wwzyx/view?usp=sharing.
+Download example data from https://developer.download.nvidia.com/assets/Clara/monai/tutorials/fl/preprocessed.zip.
Extract here.

## Data source
@@ -118,7 +118,7 @@ $ fx envoy start --shard-name env_two --disable-tls --envoy-config-path envoy_co
```
[13:48:42] INFO 🧿 Starting the Envoy. envoy.py:53
Downloading...
-From: https://drive.google.com/uc?id=1QsnnkvZyJPcbRoV_ArW8SnE1OTuoVbKE
+From: https://developer.download.nvidia.com/assets/Clara/monai/tutorials/MedNIST.tar.gz
To: /tmp/tmpd60wcnn8/MedNIST.tar.gz
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 61.8M/61.8M [00:04<00:00, 13.8MB/s]
2022-07-22 13:48:48,735 - INFO - Downloaded: MedNIST.tar.gz
2 changes: 1 addition & 1 deletion modules/benchmark_global_mutual_information.ipynb
@@ -149,7 +149,7 @@
" os.makedirs(directory, exist_ok=True)\n",
"root_dir = tempfile.mkdtemp() if directory is None else directory\n",
"print(f\"root dir is: {root_dir}\")\n",
"file_url = \"https://drive.google.com/uc?id=17tsDLvG_GZm7a4fCVMCv-KyDx0hqq1ji\"\n",
"file_url = \"https://developer.download.nvidia.com/assets/Clara/monai/tutorials/Prostate_T2W_AX_1.nii\"\n",
"file_path = f\"{root_dir}/Prostate_T2W_AX_1.nii\"\n",
"download_url(file_url, file_path)"
]
2 changes: 1 addition & 1 deletion modules/engines/gan_training.py
@@ -14,7 +14,7 @@
Sample script using MONAI to train a GAN to synthesize images from a latent code.
## Get the dataset
-MedNIST.tar.gz link: https://drive.google.com/uc?id=1QsnnkvZyJPcbRoV_ArW8SnE1OTuoVbKE
+MedNIST.tar.gz link: https://developer.download.nvidia.com/assets/Clara/monai/tutorials/MedNIST.tar.gz
Extract tarball and set input_dir variable. GAN script trains using hand CT scan jpg images.
Dataset information available in MedNIST Tutorial
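A short sketch of the "Get the dataset" step described in that docstring, using the new mirror; the `./data` root and the `Hand` subfolder are assumptions inferred from the note that the script trains on hand CT jpg images.

```python
# Sketch: download and extract MedNIST, then set input_dir for the GAN
# script. The ./data root and Hand subfolder are assumed, not verbatim.
import os
from monai.apps import download_and_extract

url = "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/MedNIST.tar.gz"
root = "./data"
os.makedirs(root, exist_ok=True)
download_and_extract(url, os.path.join(root, "MedNIST.tar.gz"), root)
input_dir = os.path.join(root, "MedNIST", "Hand")  # hand CT scan jpgs
```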
2 changes: 1 addition & 1 deletion modules/public_datasets.ipynb
@@ -595,7 +595,7 @@
"outputs": [],
"source": [
"class IXIDataset(Randomizable, CacheDataset):\n",
" resource = \"http://biomedic.doc.ic.ac.uk/\" + \"brain-development/downloads/IXI/IXI-T1.tar\"\n",
" resource = \"https://developer.download.nvidia.com/assets/Clara/monai/tutorials/IXI-T1.tar\"\n",
" md5 = \"34901a0593b41dd19c1a1f746eac2d58\"\n",
"\n",
" def __init__(\n",
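Since `IXIDataset` pins an md5, a one-line sketch for checking an already-downloaded tarball against it (assumes the file sits in the working directory):

```python
# Sketch: verify a previously downloaded IXI-T1.tar against the md5 above.
from monai.apps import check_hash

assert check_hash("IXI-T1.tar", "34901a0593b41dd19c1a1f746eac2d58", hash_type="md5")
```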
2 changes: 1 addition & 1 deletion modules/resample_benchmark.ipynb
@@ -174,7 +174,7 @@
"text": [
"\n",
"Downloading...\n",
"From: https://drive.google.com/uc?id=17tsDLvG_GZm7a4fCVMCv-KyDx0hqq1ji\n",
"From: https://developer.download.nvidia.com/assets/Clara/monai/tutorials/Prostate_T2W_AX_1.nii\n",
"To: /tmp/tmp2euy74rf/mri.nii\n",
"100%|██████████| 12.1M/12.1M [00:00<00:00, 210MB/s]"
]
@@ -94,7 +94,7 @@
"\n",
" - If you are going to use full dataset of TotalSegmentator, please refer to the dataset link, download the data, create and preprocess the images following [this page](https://zenodo.org/record/6802614).\n",
" \n",
" - In this tutorial, we prepared a sample subset, resampled and ready to use. The subset is only for demonstration. Download [here](https://drive.google.com/file/d/1DtDmERVMjks1HooUhggOKAuDm0YIEunG/view?usp=sharing).\n",
" - In this tutorial, we prepared a sample subset, resampled and ready to use. The subset is only for demonstration. Download [here](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/totalSegmentator_mergedLabel_samples.zip).\n",
" \n",
" To use the bundle, users need to download the data and merge all annotated labels into one NIFTI file. Each file contains 0-104 values, each value represents one anatomy class.\n",
" \n",
@@ -16,6 +16,6 @@ completed, the dataset can be readily used for the tutorial.
1) Create a new folder named 'monai_data' for downloading the raw data and preprocessing.
2) Download the chest X-ray images in PNG format from this [link](https://openi.nlm.nih.gov/imgs/collections/NLMCXR_png.tgz). Copy the downloaded file (NLMCXR_png.tgz) to 'monai_data' directory and extract it to 'monai_data/dataset_orig/NLMCXR_png/'.
3) Download the reports in XML format from this [link](https://openi.nlm.nih.gov/imgs/collections/NLMCXR_reports.tgz). Copy the downloaded file (NLMCXR_reports.tgz) to 'monai_data' directory and extract it to 'monai_data/dataset_orig/NLMCXR_reports/'.
-4) Download the splits of train, validation and test datasets from this [link](https://drive.google.com/u/1/uc?id=1jvT0jVl9mgtWy4cS7LYbF43bQE4mrXAY&export=download). Copy the downloaded file (TransChex_openi.zip)
+4) Download the splits of train, validation and test datasets from this [link](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/TransChex_openi.zip). Copy the downloaded file (TransChex_openi.zip)
to 'monai_data' directory and extract it here.
5) Run 'preprocess_openi.py' to process the images and reports.
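Steps 1-4 of that list can be scripted; a hedged consolidation (step 5, `preprocess_openi.py`, still runs separately), with the URLs taken verbatim from the list above:

```python
# Sketch of steps 1-4: fetch and extract the images, reports, and dataset
# splits into the layout the tutorial expects.
import os
import tarfile
import zipfile
from urllib.request import urlretrieve

root = "monai_data"
orig_dir = os.path.join(root, "dataset_orig")
os.makedirs(orig_dir, exist_ok=True)

files = {
    "NLMCXR_png.tgz": "https://openi.nlm.nih.gov/imgs/collections/NLMCXR_png.tgz",
    "NLMCXR_reports.tgz": "https://openi.nlm.nih.gov/imgs/collections/NLMCXR_reports.tgz",
    "TransChex_openi.zip": "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/TransChex_openi.zip",
}
for name, url in files.items():
    dst = os.path.join(root, name)
    if not os.path.exists(dst):
        urlretrieve(url, dst)

with tarfile.open(os.path.join(root, "NLMCXR_png.tgz")) as tf:
    tf.extractall(os.path.join(orig_dir, "NLMCXR_png"))
with tarfile.open(os.path.join(root, "NLMCXR_reports.tgz")) as tf:
    tf.extractall(os.path.join(orig_dir, "NLMCXR_reports"))
with zipfile.ZipFile(os.path.join(root, "TransChex_openi.zip")) as zf:
    zf.extractall(root)
```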
2 changes: 1 addition & 1 deletion pathology/multiple_instance_learning/README.md
@@ -49,7 +49,7 @@ python ./panda_mil_train_evaluate_pytorch_gpu.py -h

Train in multi-gpu mode with AMP using all available gpus,
assuming the training images are in the `/PandaChallenge2020/train_images` folder,
-it will use the pre-defined 80/20 data split in [datalist_panda_0.json](https://drive.google.com/drive/u/0/folders/1CAHXDZqiIn5QUfg5A7XsK1BncRu6Ftbh)
+it will use the pre-defined 80/20 data split in [datalist_panda_0.json](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/datalist_panda_0.json)

```bash
python -u panda_mil_train_evaluate_pytorch_gpu.py \
@@ -530,7 +530,7 @@ def parse_args():

if args.dataset_json is None:
# download default json datalist
resource = "https://drive.google.com/uc?id=1L6PtKBlHHyUgTE4rVhRuOLTQKgD4tBRK"
resource = "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/datalist_panda_0.json"
dst = "./datalist_panda_0.json"
if not os.path.exists(dst):
gdown.download(resource, dst, quiet=False)
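A design note on this hunk: the script keeps using `gdown` after the URL change, which should still work since `gdown.download` accepts plain HTTPS URLs as well as Drive IDs. A standalone sketch:

```python
# Sketch: gdown handles direct HTTPS URLs, so the updated default datalist
# downloads through the script's existing code path unchanged.
import os

import gdown

resource = "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/datalist_panda_0.json"
dst = "./datalist_panda_0.json"
if not os.path.exists(dst):
    gdown.download(resource, dst, quiet=False)
```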
4 changes: 2 additions & 2 deletions pathology/tumor_detection/README.MD
@@ -18,11 +18,11 @@ The license for the pre-trained model used in examples is different than MONAI l

All the data used to train and validate this model is from the [Camelyon-16 Challenge](https://camelyon16.grand-challenge.org/). You can download all the images for the "CAMELYON16" data set from various sources listed [here](https://camelyon17.grand-challenge.org/Data/).

-Location information for training/validation patches (the location on the whole slide image where patches are extracted) is adopted from [NCRF/coords](https://github.com/baidu-research/NCRF/tree/master/coords). The reformatted coordinations and labels in CSV format for training (`training.csv`) can be found [here](https://drive.google.com/file/d/1httIjgji6U6rMIb0P8pE0F-hXFAuvQEf/view?usp=sharing) and for validation (`validation.csv`) can be found [here](https://drive.google.com/file/d/1tJulzl9m5LUm16IeFbOCoFnaSWoB6i5L/view?usp=sharing).
+Location information for training/validation patches (the location on the whole slide image where patches are extracted) is adopted from [NCRF/coords](https://github.com/baidu-research/NCRF/tree/master/coords). The reformatted coordinations and labels in CSV format for training (`training.csv`) can be found [here](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/pathology_train.csv) and for validation (`validation.csv`) can be found [here](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/pathology_validation.csv).

This pipeline expects the training/validation data (whole slide images) reside in `cfg["data_root"]/training/images`. By default `data_root` is pointing to the code folder `./`; however, you can easily modify it to point to a different directory by passing the following argument in the runtime: `--data-root /other/data/root/dir/`.

-> [`training_sub.csv`](https://drive.google.com/file/d/1rO8ZY-TrU9nrOsx-Udn1q5PmUYrLG3Mv/view?usp=sharing) and [`validation_sub.csv`](https://drive.google.com/file/d/130pqsrc2e9wiHIImL8w4fT_5NktEGel7/view?usp=sharing) is also provided to check the functionality of the pipeline using only two of the whole slide images: `tumor_001` (for training) and `tumor_101` (for validation). This dataset should not be used for the real training or any performance evaluation.
+> [`training_sub.csv`](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/pathology_train_sub.csv) and [`validation_sub.csv`](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/pathology_validation_sub.csv) is also provided to check the functionality of the pipeline using only two of the whole slide images: `tumor_001` (for training) and `tumor_101` (for validation). This dataset should not be used for the real training or any performance evaluation.
### Input and output formats

@@ -92,7 +92,7 @@
"source": [
"### Download data\n",
"\n",
"The pipeline that we are profiling `camelyon_train_evaluate_nvtx_profiling.py` required [Camelyon-16 Challenge](https://camelyon16.grand-challenge.org/) dataset. You can download all the images for \"CAMELYON16\" data set from sources listed [here](https://camelyon17.grand-challenge.org/Data/). Also you can find the coordinations and labels for training (`training.csv`) [here](https://drive.google.com/file/d/1httIjgji6U6rMIb0P8pE0F-hXFAuvQEf/view?usp=sharing) and for validation (`validation.csv`) [here](https://drive.google.com/file/d/1tJulzl9m5LUm16IeFbOCoFnaSWoB6i5L/view?usp=sharing).\n",
"The pipeline that we are profiling `camelyon_train_evaluate_nvtx_profiling.py` required [Camelyon-16 Challenge](https://camelyon16.grand-challenge.org/) dataset. You can download all the images for \"CAMELYON16\" data set from sources listed [here](https://camelyon17.grand-challenge.org/Data/). Also you can find the coordinations and labels for training (`training.csv`) [here](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/pathology_train.csv) and for validation (`validation.csv`) [here](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/pathology_validation.csv).\n",
"\n",
"However, for the demo of this notebook, we are downloading a very small subset of Camelyon dataset, which uses only one whole slide image `tumor_091.tif` .\n"
]
@@ -107,7 +107,7 @@
"output_type": "stream",
"text": [
"Downloading...\n",
"From: https://drive.google.com/uc?id=1uWS4CXKD-NP_6-SgiQbQfhFMzbs0UJIr\n",
"From: https://developer.download.nvidia.com/assets/Clara/monai/tutorials/tumor_091.annotation\n",
"To: /workspace/Code/tutorials/pathology/tumor_detection/ignite/training.csv\n",
"100%|██████████| 153k/153k [00:00<00:00, 1.75MB/s]\n",
"Downloading...\n",
@@ -130,7 +130,7 @@
],
"source": [
"# Download training.csv\n",
"dataset_url = \"https://drive.google.com/uc?id=1uWS4CXKD-NP_6-SgiQbQfhFMzbs0UJIr\"\n",
"dataset_url = \"https://developer.download.nvidia.com/assets/Clara/monai/tutorials/tumor_091.annotation\"\n",
"dataset_path = \"training.csv\"\n",
"gdown.download(dataset_url, dataset_path, quiet=False)\n",
"\n",
@@ -139,7 +139,7 @@
"image_dir = os.path.join(\"training\", \"images\", \"\")\n",
"if not os.path.exists(image_dir):\n",
" os.makedirs(image_dir)\n",
"image_url = \"https://drive.google.com/uc?id=1OxAeCMVqH9FGpIWpAXSEJe6cLinEGQtF\"\n",
"image_url = \"https://developer.download.nvidia.com/assets/Clara/monai/tutorials/tumor_091.tif\"\n",
"gdown.download(image_url, image_dir, quiet=False)"
]
},
4 changes: 2 additions & 2 deletions performance_profiling/pathology/profiling_train_base_nvtx.md
@@ -20,9 +20,9 @@ For training and validation steps, they are easier to track by setting NVTX anno

## Data Preparation

-The pipeline that we are profiling `train_evaluate_nvtx.py` requires the [Camelyon-16 Challenge](https://camelyon16.grand-challenge.org/) dataset. You can download all the images for the "CAMELYON16" data set from the sources listed [here](https://camelyon17.grand-challenge.org/Data/)](https://camelyon17.grand-challenge.org/Data/). Location information for training/validation patches (the location on the whole slide image where patches are extracted) is adopted from [NCRF/coords](https://github.com/baidu-research/NCRF/tree/master/coords). The reformatted coordinations and labels in CSV format for training (`training.csv`) can be found [here](https://drive.google.com/file/d/1httIjgji6U6rMIb0P8pE0F-hXFAuvQEf/view?usp=sharing) and for validation (`validation.csv`) can be found [here](https://drive.google.com/file/d/1tJulzl9m5LUm16IeFbOCoFnaSWoB6i5L/view?usp=sharing).
+The pipeline that we are profiling `train_evaluate_nvtx.py` requires the [Camelyon-16 Challenge](https://camelyon16.grand-challenge.org/) dataset. You can download all the images for the "CAMELYON16" data set from the sources listed [here](https://camelyon17.grand-challenge.org/Data/)](https://camelyon17.grand-challenge.org/Data/). Location information for training/validation patches (the location on the whole slide image where patches are extracted) is adopted from [NCRF/coords](https://github.com/baidu-research/NCRF/tree/master/coords). The reformatted coordinations and labels in CSV format for training (`training.csv`) can be found [here](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/pathology_train.csv) and for validation (`validation.csv`) can be found [here](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/pathology_validation.csv).

-> [`training_sub.csv`](https://drive.google.com/file/d/1rO8ZY-TrU9nrOsx-Udn1q5PmUYrLG3Mv/view?usp=sharing) and [`validation_sub.csv`](https://drive.google.com/file/d/130pqsrc2e9wiHIImL8w4fT_5NktEGel7/view?usp=sharing) is also provided to check the functionality of the pipeline using only two of the whole slide images: `tumor_001` (for training) and `tumor_101` (for validation). This dataset should not be used for the real training.
+> [`training_sub.csv`](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/pathology_train_sub.csv) and [`validation_sub.csv`](https://developer.download.nvidia.com/assets/Clara/monai/tutorials/pathology_validation_sub.csv) is also provided to check the functionality of the pipeline using only two of the whole slide images: `tumor_001` (for training) and `tumor_101` (for validation). This dataset should not be used for the real training.
## Run Nsight Profiling

