Update readme (#43)
Fixes # .

### Description

A few sentences describing the changes proposed in this pull request.

### Types of changes
<!--- Put an `x` in all the boxes that apply, and remove the not
applicable items -->
- [x] Non-breaking change (fix or new feature that would not break
existing functionality).
- [ ] Breaking change (fix or new feature that would cause existing
functionality to change).
- [ ] New tests added to cover the changes.
- [ ] In-line docstrings updated.

---------

Signed-off-by: heyufan1995 <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
heyufan1995 and pre-commit-ci[bot] authored Oct 2, 2024
1 parent aeff47e commit b98259d
Showing 2 changed files with 21 additions and 13 deletions.
26 changes: 21 additions & 5 deletions vista3d/README.md
@@ -78,8 +78,21 @@ Download the [model checkpoint](https://drive.google.com/file/d/1eLIxQwnxGsjggxi

### Inference
The [NIM Demo (VISTA3D NVIDIA Inference Microservices)](https://build.nvidia.com/nvidia/vista-3d) does not support medical data upload due to legal concerns.
We provide scripts for running inference locally. The automatic segmentation label definitions can be found at [label_dict](./data/jsons/label_dict.json). For the exact number of supported automatic segmentation classes and the reasoning behind it, please refer to [this issue](https://github.com/Project-MONAI/VISTA/issues/41).
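The label definition maps class names to integer indices, so decoding a predicted index back to a name is a dictionary inversion. A minimal sketch, using a few entries taken from the `label_dict.json` shown in this commit (in practice one would `json.load` the full file):

```python
import json

# A small subset of vista3d/data/jsons/label_dict.json (class name -> index).
# In practice, load the full file instead:
#   label_dict = json.load(open("data/jsons/label_dict.json"))
label_dict = json.loads('{"liver": 1, "spleen": 3, "pancreas": 4, "brain": 22}')

# Invert the mapping to decode model output indices back to class names.
index_to_name = {idx: name for name, idx in label_dict.items()}

print(index_to_name[3])  # -> spleen
```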

#### MONAI Bundle

For automatic segmentation and batch processing, we highly recommend using the MONAI model zoo. The [MONAI bundle](https://github.com/Project-MONAI/model-zoo/tree/dev/models/vista3d) wraps VISTA3D and provides a unified API for inference, and the [NIM Demo](https://build.nvidia.com/nvidia/vista-3d) deploys the bundle with an interactive front-end. Although the NIM Demo cannot be run locally, the bundle itself can. The following commands download the standalone VISTA3D bundle; the documentation inside the bundle explains finetuning and inference in detail.

```
pip install "monai[fire]"
python -m monai.bundle download "vista3d" --bundle_dir "bundles/"
```

#### Debugger

We provide the `infer.py` script and its lightweight front-end `debugger.py`. Users can directly launch a local interface for both automatic and interactive segmentation.

```
python -m scripts.debugger run
```
@@ -91,12 +104,11 @@ To segment everything, run
```
export CUDA_VISIBLE_DEVICES=0; python -m scripts.infer --config_file 'configs/infer.yaml' - infer_everything --image_file 'example-1.nii.gz'
```
The output path and other configs can be changed in the `configs/infer.yaml`.

```
NOTE: `infer.py` does not support `lung`, `kidney`, and `bone` class segmentation, while the MONAI bundle does. The MONAI bundle also uses better memory management and is less likely to run into OOM issues.
```
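The memory-management point can be illustrated abstractly: window-based inference (as MONAI's sliding-window scheme does) bounds peak memory by the patch size rather than the whole volume. A toy sketch of the patch iteration — illustrative only, not the bundle's actual implementation:

```python
def sliding_window_patches(shape, patch, stride):
    """Yield slice tuples covering a 3D volume patch by patch.

    Only one patch is materialized per step, so peak memory is bounded
    by the patch size -- the reason window-based inference avoids OOM
    on large volumes.
    """
    starts = []
    for dim, p, s in zip(shape, patch, stride):
        axis = list(range(0, max(dim - p, 0) + 1, s))
        if axis[-1] + p < dim:  # ensure the tail of the axis is covered
            axis.append(dim - p)
        starts.append(axis)
    for z in starts[0]:
        for y in starts[1]:
            for x in starts[2]:
                yield (slice(z, z + patch[0]),
                       slice(y, y + patch[1]),
                       slice(x, x + patch[2]))

# 8 non-overlapping 2x2x2 windows cover a 4x4x4 volume.
windows = list(sliding_window_patches((4, 4, 4), (2, 2, 2), (2, 2, 2)))
print(len(windows))  # -> 8
```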


@@ -134,6 +146,10 @@ For finetuning, users need to change `label_set` and `mapped_label_set` in the js
export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7;torchrun --nnodes=1 --nproc_per_node=8 -m scripts.train_finetune run --config_file "['configs/finetune/train_finetune_word.yaml']"
```
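The `label_set`/`mapped_label_set` pair mentioned above pairs a finetuning dataset's local class indices with VISTA3D's global indices. A minimal sketch of that remapping, with hypothetical indices — which list holds which side is defined by the finetuning JSON configs, so consult them for the actual convention:

```python
# Hypothetical example: a dataset whose local labels 1 and 2 should train
# against VISTA3D global classes 3 (spleen) and 4 (pancreas). The real
# values come from the configs under configs/finetune/.
label_set = [0, 3, 4]          # assumed: global VISTA3D indices
mapped_label_set = [0, 1, 2]   # assumed: corresponding local dataset indices

# Build a local -> global lookup and remap a toy flattened label volume.
local_to_global = dict(zip(mapped_label_set, label_set))
local_labels = [0, 1, 1, 2, 0, 2]
global_labels = [local_to_global[v] for v in local_labels]
print(global_labels)  # -> [0, 3, 3, 4, 0, 4]
```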

```
Note: the MONAI bundle also provides a unified API for finetuning, but the results in the table and the paper are from this research repository.
```

### NEW! [SAM2 Benchmark Tech Report](https://arxiv.org/abs/2408.11210)
We provide scripts to run the SAM2 evaluation. To support background removal, modify the SAM2 source code: add `z_slice` to `sam2_video_predictor.py`. This requires the SAM2 package [installation](https://github.com/facebookresearch/segment-anything-2).
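The `z_slice` modification is only described at a high level here; the underlying idea — restricting propagation to the z-range of slices that actually contain foreground, skipping background-only slices — can be sketched in plain Python (illustrative only, not SAM2's actual code):

```python
def z_slice_range(volume_mask):
    """Return the half-open (start, stop) z-range containing foreground.

    `volume_mask` is a list of 2D slices, each a list of rows of 0/1
    values. Restricting video-style propagation to this range is one way
    to skip background-only slices.
    """
    nonempty = [z for z, sl in enumerate(volume_mask)
                if any(any(row) for row in sl)]
    if not nonempty:
        return None
    return nonempty[0], nonempty[-1] + 1

mask = [[[0, 0], [0, 0]],   # z=0: background only
        [[0, 1], [0, 0]],   # z=1: foreground
        [[1, 1], [0, 0]],   # z=2: foreground
        [[0, 0], [0, 0]]]   # z=3: background only
print(z_slice_range(mask))  # -> (1, 3)
```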
8 changes: 0 additions & 8 deletions vista3d/data/jsons/label_dict.json
```diff
@@ -1,6 +1,5 @@
 {
 "liver": 1,
-"kidney": 2,
 "spleen": 3,
 "pancreas": 4,
 "right kidney": 5,
@@ -14,12 +13,8 @@
 "duodenum": 13,
 "left kidney": 14,
 "bladder": 15,
-"prostate or uterus (deprecated)": 16,
 "portal vein and splenic vein": 17,
-"rectum (deprecated)": 18,
 "small bowel": 19,
-"lung": 20,
-"bone": 21,
 "brain": 22,
 "lung tumor": 23,
 "pancreatic tumor": 24,
@@ -127,8 +122,5 @@
 "thyroid gland": 126,
 "vertebrae S1": 127,
 "bone lesion": 128,
-"kidney mass (deprecated)": 129,
-"liver tumor (deprecated)": 130,
-"vertebrae L6 (deprecated)": 131,
 "airway": 132
 }
```
