# CoNFiLD

This is the codebase for the paper [P. Du, M. H. Parikh, X. Fan, X.-Y. Liu, J.-X. Wang. Conditional neural field latent diffusion model for generating spatiotemporal turbulence. *Nature Communications* 15, 10416 (2024)](https://doi.org/10.1038/s41467-024-54712-1).
## Environment Setup

- Create a conda environment named "CoNFiLD" and install the conda-managed packages
  ```bash
  conda env create -f env.yml
  ```
- Activate the conda environment
  ```bash
  conda activate CoNFiLD
  ```
- Install the pip-managed packages
  ```bash
  pip install -r requirements_pip.txt
  ```
- Create a `.env` file in the CoNFiLD directory and copy the following settings into it
  ```
  PYTHONPATH=./:UnconditionalDiffusionTraining_and_Generation:ConditionalNeuralField:$PYTHONPATH
  CUDA_VISIBLE_DEVICES=  # set your GPU number(s) here
  ```
- Export these settings by running the following bash command
  ```bash
  set -o allexport && source .env && set +o allexport
  ```
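Once exported, a quick check that the variables are visible to Python (purely illustrative):

```python
import os

# both variables come from the .env file created above
print(os.environ.get("PYTHONPATH"))
print(os.environ.get("CUDA_VISIBLE_DEVICES"))
```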
## Generating Samples

The trained model parameters associated with this code can be downloaded here.

### Unconditional Generation

- To generate unconditional samples, run the `UnconditionalDiffusionTraining_and_Generation/scripts/inference.py` script
  ```bash
  python UnconditionalDiffusionTraining_and_Generation/scripts/inference.py PATH/TO/YOUR/xxx.yaml
  ```
- To reproduce the paper's results, refer to the YAML files (particularly the inference-specific args) under `UnconditionalDiffusionTraining_and_Generation/training_recipes`
### Conditional Generation

- Here we provide the conditional generation script for the Case 4 random sensors case
- To create your own arbitrary conditioning, define your forward function in `ConditionalDiffusionGeneration/src/guided_diffusion/measurements.py` (a sketch is given at the end of this section)
- To understand the conditional generation process, follow the instructions in the Jupyter Notebook `ConditionalDiffusionGeneration/inference_scripts/Case4/random_sensor/inference_phy_random_sensor.ipynb`
- For running the Jupyter Notebook, you will need an `input` directory at the same path
- For Case 4 random sensors, a reference input directory has been constructed at `ConditionalDiffusionGeneration/inference_scripts/Case4/random_sensor/input`
- The file structure of the `input` directory should be as follows
  ```
  input
  |-- cnf_model           # Files for CNF decoding
  |   |-- coords.npy      # query coordinates
  |   |-- infos.npz       # geometry mask data
  |   |-- checkpoint.pt   # trainable parameters for the CNF part
  |   |-- normalizer.pt   # normalization parameters for CNF
  |-- data_scale          # Min-Max for denormalizing the latents
  |   |-- data_max.npy
  |   |-- data_min.npy
  |-- diff_model          # Files for diffusion model
  |   |-- ema_model.pt    # trainable parameters for the diffusion part
  |-- random_sensor       # sensor measurements
  |   |-- number of sensors
  |   |-- ...
  ```
- The data associated with this code can be downloaded here
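Since the guided-diffusion code builds on the DPS implementation (see Acknowledgements), a custom forward operator would plausibly be registered in `measurements.py` as below. This is a minimal sketch: it assumes the file follows the DPS convention of a `@register_operator(name=...)` decorator and operator classes exposing a `forward(data, **kwargs)` method; the operator name, constructor arguments, and sensor-extraction logic are all illustrative.

```python
import torch

# To be added inside ConditionalDiffusionGeneration/src/guided_diffusion/measurements.py;
# assumes the DPS-style register_operator decorator defined in that file.

@register_operator(name='my_random_sensor')  # hypothetical operator name
class MyRandomSensorOperator:
    """Illustrative forward model: full field -> sparse sensor readings."""

    def __init__(self, sensor_idx, device):
        # sensor_idx: flat indices of the sensor locations (illustrative argument)
        self.sensor_idx = torch.as_tensor(sensor_idx, device=device)
        self.device = device

    def forward(self, data, **kwargs):
        # data: (B, C, H, W) field decoded from the generated latents;
        # return the field values observed at the sensor locations
        return data.reshape(data.shape[0], -1)[:, self.sensor_idx]
```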
## Training CoNFiLD

### Conditional Neural Field

- Use `train.py` under the `ConditionalNeuralField/scripts` directory
  ```bash
  python ConditionalNeuralField/scripts/train.py PATH/TO/YOUR/xxx.yaml
  ```
- To reproduce the results from the paper, download and add the corresponding case data in the `ConditionalNeuralField/data` directory and use the corresponding `ConditionalNeuralField/training_recipes/case{1,2,3,4}.yml` recipe
- The `ConditionalNeuralField/data` directory should be populated as follows
  ```
  data            # all the input files for CNF
  |-- data.npy    # data to fit
  |-- coords.npy  # query coordinates
  ```
- After the CNF is trained:
  - Process the latents into square images, with the side of the square equal to the latent vector length (see the sketch after this list)
  - Add a channel dimension after the batch dimension; the final shape should be $(B, 1, H, W)$
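A minimal sketch of this post-processing, assuming the trained CNF latents are stored as one latent vector per time step; the file name, shapes, and use of non-overlapping windows are illustrative:

```python
import numpy as np

latents = np.load("latents.npy")        # illustrative path; shape (T, L): T time steps, latent length L
T, L = latents.shape

# group L consecutive time steps into one L x L square image (non-overlapping windows)
n_images = T // L
images = latents[: n_images * L].reshape(n_images, L, L)

# add a channel dimension after the batch dimension -> (B, 1, H, W)
images = images[:, None, :, :]
print(images.shape)                     # (n_images, 1, L, L)
```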
### Diffusion Model

- Use `train.py` under the `UnconditionalDiffusionTraining_and_Generation/scripts` directory
  ```bash
  python UnconditionalDiffusionTraining_and_Generation/scripts/train.py PATH/TO/YOUR/xxx.yaml
  ```
- To reproduce the results from the paper, download and add the corresponding case data in the `UnconditionalDiffusionTraining_and_Generation/data` directory
- Modify the `train_data_path` and `valid_data_path` in `UnconditionalDiffusionTraining_and_Generation/training_recipes/case{1,2,3,4}.yml`
- The `UnconditionalDiffusionTraining_and_Generation/data` directory should be populated as follows (a sketch of preparing these files is given below)
  ```
  data                    # all the input files for the diffusion model
  |-- train_data.npy      # training data
  |-- valid_data.npy      # validation data
  ```
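Continuing the reshaping sketch above, populating this directory might look like the following; the 90/10 split and input file name are illustrative:

```python
import numpy as np

images = np.load("latent_images.npy")   # (B, 1, H, W) square latent images from the CNF step (illustrative)
split = int(0.9 * len(images))          # illustrative 90/10 train/validation split

np.save("UnconditionalDiffusionTraining_and_Generation/data/train_data.npy", images[:split])
np.save("UnconditionalDiffusionTraining_and_Generation/data/valid_data.npy", images[split:])
```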
## Questions and Issues

If you have an issue running the code, please raise an issue on this repository.
## Citation

If you find our work useful and relevant to your research, please cite:

```bibtex
@article{du2024confild,
  title={CoNFiLD: Conditional Neural Field Latent Diffusion Model Generating Spatiotemporal Turbulence},
  author={Du, Pan and Parikh, Meet Hemant and Fan, Xiantao and Liu, Xin-Yang and Wang, Jian-Xun},
  journal={arXiv preprint arXiv:2403.05940},
  year={2024}
}
```
## Acknowledgements

The diffusion model used in this work is based on OpenAI's implementation. The DPS part is based on [Diffusion Posterior Sampling for General Noisy Inverse Problems](https://arxiv.org/abs/2209.14687).