We develop a system to computationally control the flash light in photographs taken with or without flash. We formulate flash photograph formation through image intrinsics and estimate the flash shading either through generation for no-flash photographs (top) or through decomposition, where we separate the flash from the ambient illumination, for flash photographs (bottom). Paper Link: https://arxiv.org/abs/2306.06089
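The intrinsic formulation above can be sketched as follows: assuming the usual intrinsic decomposition, where an image is the product of albedo and shading, a flash photograph is modeled as albedo times the sum of ambient and flash shading. This is a minimal illustrative sketch; the function and variable names are ours, not from the paper's code.

```python
import numpy as np

# Hypothetical intrinsic model of flash photograph formation:
#   flash_photo = albedo * (ambient_shading + flash_shading)
# All arrays are HxWx3 in linear RGB; names are illustrative only.

def compose_flash_photo(albedo, ambient_shading, flash_shading):
    """Render a flash photograph from its intrinsic components."""
    return albedo * (ambient_shading + flash_shading)

def recover_flash_shading(flash_photo, albedo, ambient_shading):
    """Recover the flash shading given the other intrinsic components."""
    eps = 1e-6  # avoid division by zero in dark albedo regions
    return flash_photo / np.maximum(albedo, eps) - ambient_shading

# Toy check that decomposition inverts composition.
albedo = np.full((2, 2, 3), 0.5)
ambient = np.full((2, 2, 3), 0.2)
flash = np.full((2, 2, 3), 0.3)

photo = compose_flash_photo(albedo, ambient, flash)
recovered = recover_flash_shading(photo, albedo, ambient)
print(np.allclose(recovered, flash))  # True
```

Under this model, "generation" amounts to predicting a plausible flash shading for a no-flash input, while "decomposition" separates an observed flash photograph into its ambient and flash shading components.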
We provide the implementation of our method for both the generation and decomposition of flash through intrinsics.
Our model can be trained with Python 3.9 or higher.
Download our model weights from here and place them in the checkpoints directory.
Install the following dependencies:
conda install pytorch torchvision pytorch-cuda=11.6 -c pytorch -c nvidia
conda install matplotlib
conda install scipy
conda install scikit-image
conda install -c conda-forge pymatreader
conda install -c conda-forge dominate
conda install -c conda-forge timm
pip install opencv-python
Navigate to dataset preparation instructions to download and prepare the training dataset.
For decomposition:
python train.py --dataroot DATASETDIR --name flashDecomposition --model intrinsic_flash_decomposition --normalize_flash 1 --normalize_ambient 1
For generation:
python train.py --dataroot DATASETDIR --name flashGeneration --model intrinsic_flash_generation --normalize_flash 1 --normalize_ambient 1
For decomposition:
python test.py --dataroot DATASETDIR --name flashDecomposition --model intrinsic_flash_decomposition --normalize_flash 1 --normalize_ambient 1 --eval
For generation:
python test.py --dataroot DATASETDIR --name flashGeneration --model intrinsic_flash_generation --normalize_flash 1 --normalize_ambient 1 --eval
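As a quick sanity check on the test outputs, a simple PSNR can be computed between a predicted and a ground-truth image. This is only an illustrative metric sketch, not the paper's evaluation protocol; the synthetic arrays below stand in for images you would load from the results directory.

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

# Toy example; replace these arrays with prediction/ground-truth images
# loaded from the test output directory for your chosen --name.
target = np.zeros((4, 4))
pred = target + 0.1  # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(pred, target), 2))  # 20.0
```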
This implementation is provided for academic use only.
Please cite our paper if you use this code or any of the models.
The training skeleton is adapted from the pytorch-CycleGAN-and-pix2pix repository.
The network architecture is adapted from the MiDaS repository.