3D GANs generate latent codes for entire 3D volumes rather than only 2D images. These models offer desirable features such as high-quality geometry and multi-view consistency, but, unlike their 2D counterparts, complex semantic image editing for 3D GANs has only been partially explored. To address this problem, we propose LatentSwap3D, a semantic editing approach based on latent space discovery that can be used with any off-the-shelf 3D or 2D GAN model and on any dataset. LatentSwap3D identifies the latent code dimensions corresponding to specific attributes via feature ranking with a random forest classifier. It then performs the edit by swapping the selected dimensions of the image being edited with those of an automatically selected reference image. Compared to other latent-space-control edit methods, which were mainly designed for 2D GANs, our method applied to 3D GANs provides remarkably consistent, disentangled semantic edits and outperforms others both qualitatively and quantitatively. We show results on seven 3D GANs (pi-GAN, GIRAFFE, StyleSDF, MVCGAN, EG3D, StyleNeRF, and VolumeGAN) and on five datasets (FFHQ, AFHQ, Cats, MetFaces, and CompCars).
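As a rough sketch of the rank-and-swap idea, the toy example below ranks latent dimensions with a random forest and swaps the top-K dimensions from a reference code. All names, shapes, and data here are hypothetical illustrations; the actual pipeline lives in `find.py` and `manipulate.py`.

```python
# Toy illustration of rank-and-swap latent editing (hypothetical data/shapes;
# not the repository's actual implementation).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic latent codes (N samples x D dims) with pseudo-labels for one attribute.
N, D = 500, 32
codes = rng.normal(size=(N, D))
# In this toy setup the attribute is driven by dims 5 and 12.
labels = (codes[:, 5] + 0.5 * codes[:, 12] > 0).astype(int)

# 1) Rank latent dimensions by their importance for the attribute.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(codes, labels)
ranking = np.argsort(clf.feature_importances_)[::-1]

# 2) Swap the top-K dimensions of the code being edited with those of a
#    reference code that exhibits the target attribute.
K = 2
top_k = ranking[:K]
source = codes[0].copy()            # latent code of the image being edited
reference = codes[labels == 1][0]   # a reference code with the attribute
edited = source.copy()
edited[top_k] = reference[top_k]
```

Because only the top-ranked dimensions change, the remaining dimensions of the source code are untouched, which is what keeps the edit disentangled.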
$ git clone --recurse-submodules -j8 [email protected]:enisimsar/latentswap3d.git
Install the dependencies listed in `env.yml`:
$ conda env create -f env.yml
$ conda activate latent-3d
For a quick demo, see DEMO.
- FFHQ Random Face Example: Edit In Colab
- FFHQ Real Face Example: Edit Real Face In Colab
The repository uses the Hydra framework to manage experiments. We provide seven main scripts:
- `gen.py`: Generates the dataset used to find the feature ranking.
- `predict.py`: Predicts the pseudo-labels of the generated dataset.
- `dci.py`: Calculates the DCI metrics for the candidate latent spaces.
- `find.py`: Finds the feature ranking for attributes.
- `tune.py`: Tunes the parameter K, the number of dimensions to be swapped.
- `manipulate.py`: Applies the semantic edits on random samples.
- `encode.py`: Encodes a real face image.
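The DCI step above scores candidate latent spaces by Disentanglement, Completeness, and Informativeness (Eastwood & Williams). A minimal sketch of the first two scores from an importance matrix is shown below; the function name and matrix are illustrative assumptions, and the repository's actual implementation is in `dci.py` (informativeness, which needs held-out prediction error, is omitted here).

```python
# Hypothetical sketch of DCI disentanglement/completeness scores
# (illustrative only; see dci.py for the repository's implementation).
import numpy as np

def dci_scores(importance):
    """importance: (D dims x A attributes) matrix of non-negative importances."""
    D, A = importance.shape
    R = importance + 1e-12  # avoid log(0)
    # Disentanglement: each dimension should matter for few attributes.
    P = R / R.sum(axis=1, keepdims=True)
    H_rows = -(P * np.log(P)).sum(axis=1) / np.log(A)
    rho = R.sum(axis=1) / R.sum()  # weight dims by overall importance
    disentanglement = float((rho * (1.0 - H_rows)).sum())
    # Completeness: each attribute should be captured by few dimensions.
    Q = R / R.sum(axis=0, keepdims=True)
    H_cols = -(Q * np.log(Q)).sum(axis=0) / np.log(D)
    completeness = float((1.0 - H_cols).mean())
    return disentanglement, completeness

# A one-to-one importance matrix is perfectly disentangled and complete.
d, c = dci_scores(np.eye(4))
```

A one-hot importance matrix yields scores near 1, while a uniform matrix (every dimension equally important for every attribute) yields scores near 0.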
Hydra will write experiment results under the `outputs` folder.
$ python gen.py hparams.batch_size=1 num_samples=10000 generator=mvcgan generator.class_name=FFHQ
$ OUTPUT_PATH=outputs/run/src.generators.MVCGANGenerator/FFHQ/2022-11-23
$ python predict.py hparams.batch_size=50 load_path=$OUTPUT_PATH generator=mvcgan generator.class_name=FFHQ
$ python dci.py load_path=$OUTPUT_PATH generator=mvcgan generator.class_name=FFHQ
$ python find.py load_path=$OUTPUT_PATH generator=mvcgan generator.class_name=FFHQ
$ python tune.py load_path=$OUTPUT_PATH generator=mvcgan generator.class_name=FFHQ
$ python manipulate.py load_path=$OUTPUT_PATH generator=mvcgan generator.class_name=FFHQ
If you use this code for your research, please cite our paper:
@InProceedings{Simsar_2023_ICCV,
author = {Simsar, Enis and Tonioni, Alessio and Ornek, Evin Pinar and Tombari, Federico},
title = {LatentSwap3D: Semantic Edits on 3D Image GANs},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {October},
year = {2023},
pages = {2899--2909}
}