TAFIM: Targeted Adversarial Attacks Against Facial Image Manipulations
Official PyTorch implementation of the ECCV 2022 paper
Targeted Adversarial Attacks Against Facial Image Manipulations
Shivangi Aneja, Lev Markhasin, Matthias Nießner
https://shivangi-aneja.github.io/projects/tafim
Abstract: Face manipulation methods can be misused to affect an individual’s privacy or to spread disinformation. To this end, we introduce a novel data-driven approach that produces image-specific perturbations which are embedded in the original images. The key idea is that these protected images prevent face manipulation by causing the manipulation model to produce a predefined manipulation target (a uniformly colored output image in our case) instead of the actual manipulation. In addition, we propose to leverage a differentiable compression approximation, making the generated perturbations robust to common image compression. To protect against multiple manipulation methods simultaneously, we further propose a novel attention-based fusion of manipulation-specific perturbations. Compared to traditional adversarial attacks that optimize noise patterns for each image individually, our generalized model only needs a single forward pass, thus running orders of magnitude faster and allowing for easy integration in image processing stacks, even on resource-constrained devices like smartphones.
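Conceptually, the method trains a feed-forward generator that outputs an image-specific perturbation; training pushes the manipulation model's output on the protected image toward the predefined uniform target. Below is a minimal sketch of that objective, assuming images in [-1, 1]; `PerturbationGenerator`, the perturbation bound, and the loss weights are illustrative stand-ins, not the repo's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationGenerator(nn.Module):
    """Illustrative stand-in: maps an image to a bounded, image-specific perturbation."""
    def __init__(self, channels=3, eps=0.05):  # eps is an assumed perturbation bound
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.eps * self.net(x)

def protection_loss(manipulation_model, generator, images):
    """Drive the manipulation output toward a uniform target while keeping noise subtle."""
    delta = generator(images)
    protected = (images + delta).clamp(-1, 1)    # images assumed in [-1, 1]
    manipulated = manipulation_model(protected)
    target = torch.ones_like(manipulated)        # predefined uniform (white) target
    attack_term = F.mse_loss(manipulated, target)
    visibility_term = delta.pow(2).mean()        # penalize visible perturbations
    return attack_term + 10.0 * visibility_term  # 10.0 is an illustrative weight
```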
- Linux
- NVIDIA GPU + CUDA cuDNN
- Python 3.X
- Dependencies:
It is recommended to install all dependencies using `pip` (e.g., `pip install -r requirements.txt`). The dependencies for setting up the environment are provided in `requirements.txt`.
Please download these models, as they will be required for experiments.
| Path | Description |
| --- | --- |
| pSp Encoder | pSp trained with the FFHQ dataset for StyleGAN inversion. |
| StyleClip | StyleClip trained with the FFHQ dataset for text-based manipulation (Afro, Angry, Beyonce, BobCut, BowlCut, Curly Hair, Mohawk, Purple Hair, Surprised, Taylor Swift, Trump, Zuckerberg). |
| SimSwap | SimSwap trained for face swapping. |
| SAM | SAM model trained for age transformation (used in supp. material). |
| StyleGAN-NADA | StyleGAN-NADA models (used in supp. material). |
The code is well-documented and should be easy to follow.
- Source Code: `git clone` this repo and install the Python dependencies from `requirements.txt`. The source code is implemented in PyTorch, so familiarity with PyTorch is expected.
- Dataset: We used the FFHQ dataset for our experiments, which is publicly available here. We divide the FFHQ dataset into a training set (5000 images), a validation set (1000 images), and a test set (1000 images). We additionally used the CelebA-HQ and VGGFace2-HQ datasets for some additional experiments; these can be downloaded from their respective websites. All images are resized to 256 × 256 during the transform step, as shown in the sketch below.
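As a concrete illustration of the split and resizing described above (a sketch only: the directory layout, file extension, and normalization values are assumptions, not confirmed from the repo):

```python
from pathlib import Path
from torchvision import transforms

# Assumed layout: a flat directory of FFHQ images; adjust path/extension as needed.
ffhq_dir = Path("data/ffhq")
files = sorted(ffhq_dir.glob("*.png"))

# Split sizes from the README: 5000 train / 1000 val / 1000 test.
train_files = files[:5000]
val_files = files[5000:6000]
test_files = files[6000:7000]

# All images are resized to 256 x 256; the normalization to [-1, 1] is a common
# choice for StyleGAN-based models, assumed here for illustration.
transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])
```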
- Manipulation Methods: We examine our method primarily on three popular manipulations: (1) style mixing using pSp, (2) face swapping using SimSwap, and (3) textual editing using StyleClip. This code heavily borrows from these repositories for the implementation of these manipulations, so please set up and install the dependencies for these methods from their original implementations. Scripts to check whether the image manipulation models work can be found in the `manipulation_tests/` directory. Make sure that these scripts run and that you are able to perform inference with these models.
- Path Configuration: Configure the following paths before training (an illustrative `paths_config.py` sketch follows this list).
  - Refer to `configs/paths_config.py` to define the necessary data paths and model paths for training and evaluation.
  - Refer to `configs/transforms_config.py` for the transforms defined for each dataset/experiment.
  - Refer to `configs/common_config.py` and change the `architecture_type` and `dataset_type` according to the experiment you wish to perform.
  - Finally, refer to `configs/data_configs.py` for the source/target data paths for the train and test sets as well as the transforms.
  - If you wish to experiment with your own dataset, simply make the necessary adjustments in `data_configs.py` to define your data paths and in `transforms_configs.py` to define your own data transforms.
  - Refer to `configs/attack_configs.py` and change `net_noise` to change the protection model architecture.
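For orientation, the path configuration might look roughly as follows; the keys and filenames here are illustrative, so check the actual files in `configs/` for the exact names:

```python
# configs/paths_config.py -- illustrative structure only; the actual keys and
# checkpoint names in the repo may differ. Point these at your local files.
model_paths = {
    "psp_encoder": "pretrained_models/psp_ffhq_encode.pt",
    "simswap": "pretrained_models/simswap.pth",
    "styleclip": "pretrained_models/styleclip_afro.pt",
}

dataset_paths = {
    "ffhq_train": "data/ffhq/train",
    "ffhq_test": "data/ffhq/test",
}
```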
- Training: The main training scripts for the different protection model configurations are available in the `trainer_scripts` directory. To train a protection model, execute the command for the corresponding manipulation method:
# For self-reconstruction/style-mixing task
python -m trainer_scripts.train_protection_model_pSp
# For face-swapping task
python -m trainer_scripts.train_protection_model_simswap
# For textual editing task
python -m trainer_scripts.train_protection_model_styleclip
# For protection against JPEG compression
python -m trainer_scripts.train_protection_model_pSp_jpeg
# For combining perturbations from multiple manipulation methods
python -m trainer_scripts.train_protection_model_all_attention
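The attention model (`train_protection_model_all_attention`) fuses the manipulation-specific perturbations into a single one via predicted attention weights. A minimal sketch of that fusion idea follows; the module structure, tensor shapes, and weighting scheme are assumptions, not the repo's exact implementation:

```python
import torch
import torch.nn as nn

class PerturbationFusion(nn.Module):
    """Fuse per-manipulation perturbations with predicted spatial attention weights.

    Hypothetical module: the repo's attention network may differ.
    """
    def __init__(self, num_methods=3, channels=3):
        super().__init__()
        # Predict one spatial attention map per manipulation method.
        self.attention = nn.Sequential(
            nn.Conv2d(num_methods * channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_methods, 3, padding=1),
        )

    def forward(self, perturbations):
        # perturbations: (B, M, C, H, W), one perturbation per manipulation model
        b, m, c, h, w = perturbations.shape
        weights = self.attention(perturbations.view(b, m * c, h, w))
        weights = torch.softmax(weights, dim=1).unsqueeze(2)  # (B, M, 1, H, W)
        return (weights * perturbations).sum(dim=1)           # fused (B, C, H, W)
```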
- Evaluation: Once training is complete, specify the path to the trained protection model and evaluate. For instance, to evaluate the self-reconstruction task for the pSp encoder, execute:
python -m testing_scripts.test_protection_model_pSp -p protection_model.pth
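Once a protection model is trained, protecting a new image is a single forward pass, as noted in the abstract. A self-contained sketch, with a placeholder network standing in for the architecture selected via `net_noise` in `configs/attack_configs.py`:

```python
import torch
import torch.nn as nn
from PIL import Image
from torchvision import transforms
from torchvision.utils import save_image

# Placeholder standing in for the trained protection network; in practice,
# instantiate the repo's actual architecture and load the trained weights:
#   model.load_state_dict(torch.load("protection_model.pth"))
model = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])
image = preprocess(Image.open("input.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    perturbation = 0.05 * model(image)  # bounded, image-specific noise (assumed scale)
    protected = (image + perturbation).clamp(0, 1)

save_image(protected, "protected.png")
```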
If you find our dataset or paper useful for your research, please include the following citation:
@InProceedings{aneja2022tafim,
author="Aneja, Shivangi and Markhasin, Lev and Nie{\ss}ner, Matthias",
title="TAFIM: Targeted Adversarial Attacks Against Facial Image Manipulations",
booktitle="Computer Vision -- ECCV 2022",
year="2022",
publisher="Springer Nature Switzerland",
address="Cham",
pages="58--75",
isbn="978-3-031-19781-9"
}
Contact Us
If you have questions regarding the dataset or code, please email us at [email protected]. We will get back to you as soon as possible.