Remove installation section in ldm2d and ldm3d readme #1821

Merged · 5 commits · Sep 10, 2024
13 changes: 5 additions & 8 deletions generation/2d_ldm/README.md
@@ -26,12 +26,9 @@
python download_brats_data.py -e ./config/environment.json

Disclaimer: We are not the host of the data. Please make sure to read the requirements and usage policies of the data and give credit to the authors of the dataset!

-### 2. Installation
-Please refer to the [Installation of MONAI Generative Model](../README.md)
+### 2. Run the example

-### 3. Run the example
-
-#### [3.1 2D Autoencoder Training](./train_autoencoder.py)
+#### [2.1 2D Autoencoder Training](./train_autoencoder.py)

The network configuration files are located in [./config/config_train_32g.json](./config/config_train_32g.json) for 32G GPU and [./config/config_train_16g.json](./config/config_train_16g.json) for 16G GPU. You can modify the hyperparameters in these files to suit your requirements.
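The paragraph above suggests editing the JSON configuration files by hand. A small sketch of doing the same programmatically is below; the `"autoencoder_train"` section and its key names are illustrative guesses, not the exact schema of `config_train_32g.json`.

```python
import json

# Hypothetical excerpt of a training config; the real file may use
# different section and key names.
config = {
    "autoencoder_train": {
        "batch_size": 8,
        "patch_size": [256, 256],
        "lr": 1e-4,
    }
}

# Halve the batch size, e.g. to fit a smaller GPU, then write a new config.
config["autoencoder_train"]["batch_size"] //= 2

with open("config_custom.json", "w") as f:
    json.dump(config, f, indent=4)

# Read it back to confirm the change round-trips.
with open("config_custom.json") as f:
    loaded = json.load(f)

print(loaded["autoencoder_train"]["batch_size"])  # 4
```

Writing a derived config to a new file, rather than editing the shipped one in place, keeps the reference settings intact for comparison.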

@@ -74,7 +71,7 @@
An example reconstruction result is shown below:
<img src="./figs/recon.png" alt="Autoencoder reconstruction result">
</p>

-#### [3.2 2D Latent Diffusion Training](./train_diffusion.py)
+#### [2.2 2D Latent Diffusion Training](./train_diffusion.py)
The training script uses the batch size and patch size defined in the configuration files. If you have a different GPU memory size, you should adjust the `"batch_size"` and `"patch_size"` parameters in the `"diffusion_train"` to match your GPU. Note that the `"patch_size"` needs to be divisible by 16 and no larger than 256.
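The patch-size constraint stated above (every dimension divisible by 16 and no larger than 256) is easy to check before launching a long training run. A minimal helper sketch, not part of the example scripts:

```python
def check_patch_size(patch_size, max_size=256):
    """Return True if every spatial dimension is divisible by 16
    and no larger than max_size."""
    return all(p % 16 == 0 and p <= max_size for p in patch_size)

print(check_patch_size([256, 256]))  # True
print(check_patch_size([250, 256]))  # False: 250 is not divisible by 16
```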

To train with a single 32G GPU, please run:
@@ -97,7 +94,7 @@
torchrun \
<img src="./figs/val_diffusion.png" alt="latent diffusion validation curve" width="45%" >
</p>

-#### [3.3 Inference](./inference.py)
+#### [2.3 Inference](./inference.py)
To generate one image during inference, please run the following command:
```bash
python inference.py -c ./config/config_train_32g.json -e ./config/environment.json --num 1
```
@@ -115,7 +112,7 @@
An example output is shown below.
<img src="./figs/syn_3.jpeg" width="20%" >
</p>

-### 4. Questions and bugs
+### 3. Questions and bugs

- For questions relating to the use of MONAI, please use our [Discussions tab](https://github.com/Project-MONAI/MONAI/discussions) on the main repository of MONAI.
- For bugs relating to MONAI functionality, please create an issue on the [main repository](https://github.com/Project-MONAI/MONAI/issues).
13 changes: 5 additions & 8 deletions generation/3d_ldm/README.md
@@ -26,12 +26,9 @@
python download_brats_data.py -e ./config/environment.json

Disclaimer: We are not the host of the data. Please make sure to read the requirements and usage policies of the data and give credit to the authors of the dataset!

-### 2. Installation
-Please refer to the [Installation of MONAI Generative Model](../README.md)
+### 2. Run the example

-### 3. Run the example
-
-#### [3.1 3D Autoencoder Training](./train_autoencoder.py)
+#### [2.1 3D Autoencoder Training](./train_autoencoder.py)

The network configuration files are located in [./config/config_train_32g.json](./config/config_train_32g.json) for 32G GPU
and [./config/config_train_16g.json](./config/config_train_16g.json) for 16G GPU.
@@ -73,7 +70,7 @@
torchrun \

With eight DGX1V 32G GPUs, it took around 55 hours to train 1000 epochs.

-#### [3.2 3D Latent Diffusion Training](./train_diffusion.py)
+#### [2.2 3D Latent Diffusion Training](./train_diffusion.py)
The training script uses the batch size and patch size defined in the configuration files. If you have a different GPU memory size, you should adjust the `"batch_size"` and `"patch_size"` parameters in the `"diffusion_train"` to match your GPU. Note that the `"patch_size"` needs to be divisible by 16.
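Since every patch dimension must be divisible by 16, a 3D volume shape that violates the constraint has to be rounded up (e.g. by padding) before training. A small sketch of that rounding, not taken from the example scripts:

```python
def pad_to_multiple(shape, multiple=16):
    """Round each spatial dimension up to the nearest multiple,
    e.g. to obtain a valid 3D patch size."""
    return tuple(-(-s // multiple) * multiple for s in shape)

print(pad_to_multiple((96, 112, 80)))   # already multiples of 16, unchanged
print(pad_to_multiple((100, 120, 90)))  # rounds up to (112, 128, 96)
```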

To train with a single 32G GPU, please run:
@@ -96,7 +93,7 @@
torchrun \
<img src="./figs/val_diffusion.png" alt="latent diffusion validation curve" width="45%" >
</p>

-#### [3.3 Inference](./inference.py)
+#### [2.3 Inference](./inference.py)
To generate one image during inference, please run the following command:
```bash
python inference.py -c ./config/config_train_32g.json -e ./config/environment.json --num 1
```
@@ -112,7 +109,7 @@
An example output is shown below.
<img src="./figs/syn_cor.png" width="30%" >
</p>

-### 4. Questions and bugs
+### 3. Questions and bugs

- For questions relating to the use of MONAI, please use our [Discussions tab](https://github.com/Project-MONAI/MONAI/discussions) on the main repository of MONAI.
- For bugs relating to MONAI functionality, please create an issue on the [main repository](https://github.com/Project-MONAI/MONAI/issues).
2 changes: 1 addition & 1 deletion modules/developer_guide.ipynb
@@ -717,7 +717,7 @@
"id": "kvn_6mf9gZoA"
},
"source": [
-"The following commands will start a `SupervisedTrainer` instance. As an extension of Pytorch ignite's facilities, it combines all the elements mentioned before. Calling `trainer.run()` will train the network for two epochs and compute `MeadDice` metric based on the training data at the end of every epoch.\n",
+"The following commands will start a `SupervisedTrainer` instance. As an extension of Pytorch ignite's facilities, it combines all the elements mentioned before. Calling `trainer.run()` will train the network for two epochs and compute `MeanDice` metric based on the training data at the end of every epoch.\n",
"\n",
"The `key_train_metric` is used to track the progress of model quality improvement. Additional handlers could be set to do early stopping and learning rate scheduling.\n",
"\n",