Update MAISI README (#1798)
### Description
Update MAISI README.

### Checks
<!--- Put an `x` in all the boxes that apply, and remove the not
applicable items -->
- [ ] Avoid including large-size files in the PR.
- [ ] Clean up long text outputs from code cells in the notebook.
- [ ] For security purposes, please check the contents and remove any sensitive info such as user names and private keys.
- [ ] Ensure (1) hyperlinks and markdown anchors are working, (2) relative paths are used for tutorial repo files, and (3) figures and graphs are placed in the `./figure` folder.
- [ ] Notebook runs automatically `./runner.sh -t <path to .ipynb file>`

---------

Signed-off-by: dongy <[email protected]>
Co-authored-by: dongy <[email protected]>
dongyang0122 and dongy authored Aug 26, 2024
1 parent 2ec36f2 commit 29c4334
Showing 1 changed file with 2 additions and 0 deletions.
2 changes: 2 additions & 0 deletions generation/maisi/README.md
@@ -66,6 +66,8 @@ The information for the inference input, like body region and anatomy to generat
- `"autoencoder_sliding_window_infer_size"`: in order to save GPU memory, we use sliding window inference when decoding latents to image when `"output_size"` is large. This is the patch size of the sliding window. Small value will reduce GPU memory but increase time cost. They need to be divisible by 16.
- `"autoencoder_sliding_window_infer_overlap"`: float between 0 and 1. Large value will reduce the stitching artifacts when stitching patches during sliding window inference, but increase time cost. If you do not observe seam lines in the generated image result, you can use a smaller value to save inference time.

To generate images with substantial dimensions, such as 512 &times; 512 &times; 512 or larger, on GPUs with 80GB of memory, it is advisable to set the `"num_splits"` parameter in [the auto-encoder configuration](./configs/config_maisi.json#L11-L37) to 16. This adjustment is crucial to avoid out-of-memory issues during inference.
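For orientation, a minimal sketch of that excerpt is shown below. Only `"num_splits": 16` comes from this README; the `"autoencoder_def"` wrapper and the omission of its other fields are assumptions about the layout of `config_maisi.json`:

```json
{
  "autoencoder_def": {
    "num_splits": 16
  }
}
```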

#### Execute Inference:
To run the inference script, execute:
```bash
