You can watch our GTC presentation on YouTube:
Samples and training logs for the model generations can be found here.
This codebase contains an implementation of a deep diffusion model applied to cloud images. It was developed as part of a research project exploring the potential of diffusion models for image generation and forecasting.
- Clone this repository and run `pip install -e .`, or install the package directly with `pip install cloud_diffusion`.
- Set up your WandB account by signing up at wandb.ai.
- Set up your WandB API key by running `wandb login` and following the prompts.
To train the model, run `python train.py`. You can play with the parameters at the top of the file to change the model architecture, training parameters, etc.
You can also override the configuration parameters by passing them as command-line arguments, e.g.
> python train.py --epochs=10 --batch_size=32
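This override mechanism can be sketched with `argparse`: defaults defined in the script are exposed as command-line flags. The parameter names and default values below are illustrative assumptions, not the actual contents of `train.py`:

```python
import argparse

# Default training parameters. These names and values are assumptions --
# the real defaults live at the top of train.py.
DEFAULTS = dict(epochs=50, batch_size=16, lr=5e-4)

def parse_args(argv=None):
    """Build a parser whose flags override the defaults above."""
    parser = argparse.ArgumentParser(description="Train the cloud diffusion model")
    for name, value in DEFAULTS.items():
        # argparse accepts both `--epochs 10` and `--epochs=10`
        parser.add_argument(f"--{name}", type=type(value), default=value)
    return parser.parse_args(argv)

# Flags override defaults; unspecified parameters keep their default values.
config = vars(parse_args(["--epochs=10", "--batch_size=32"]))
```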
This training is based on a Transformer-based UNet (UViT); you can train the default model by running:
> python train_uvit.py
If you are only interested in using the trained models, you can run inference with:
> python inference.py --future_frames 10 --num_random_experiments 2
This will generate 10 future frames for 2 random experiments.
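The rollout implied by those flags can be sketched as an autoregressive loop: each new frame is sampled conditioned on the frames so far, and the whole process is repeated once per random experiment. The function names below are hypothetical, and a trivial noise step stands in for the real diffusion sampler:

```python
import random

def sample_next_frame(past_frames, rng):
    # Placeholder for the diffusion sampler: the real code would run the
    # reverse-diffusion process conditioned on the past frames.
    return [pixel + rng.gauss(0.0, 0.01) for pixel in past_frames[-1]]

def rollout(past_frames, future_frames, num_random_experiments, seed=0):
    """Autoregressively sample `future_frames` new frames, repeating the
    whole rollout `num_random_experiments` times with independent noise."""
    experiments = []
    for exp in range(num_random_experiments):
        rng = random.Random(seed + exp)  # a different seed per experiment
        frames = list(past_frames)
        for _ in range(future_frames):
            frames.append(sample_next_frame(frames, rng))
        experiments.append(frames[len(past_frames):])  # generated frames only
    return experiments

# e.g. two experiments of 10 frames each, from a single 4-pixel context frame
samples = rollout([[0.0, 0.0, 0.0, 0.0]], future_frames=10, num_random_experiments=2)
```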
This code is released under the MIT License.