Adversarial-Attacks-PyTorch


Torchattacks is a PyTorch library that provides adversarial attacks to generate adversarial examples. It offers a PyTorch-like interface and functions that make it easy for PyTorch users to implement adversarial attacks (README [KOR]).

Easy implementation

import torchattacks
atk = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=4)
adv_images = atk(images, labels)

Easy modification

from torchattacks.attack import Attack
class CustomAttack(Attack):
    def __init__(self, model):
        super().__init__("CustomAttack", model)

    def forward(self, images, labels=None):
        adv_images = images  # Replace with your custom attack logic
        return adv_images
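
As a concrete illustration, here is a minimal sketch of a custom one-step attack in the FGSM style. It only relies on the base class attributes (self.model, self.device) used in the examples in this README; the eps value is arbitrary.

import torch
import torch.nn as nn
from torchattacks.attack import Attack

class CustomFGSM(Attack):
    def __init__(self, model, eps=8/255):
        super().__init__("CustomFGSM", model)
        self.eps = eps

    def forward(self, images, labels):
        images = images.clone().detach().to(self.device)
        labels = labels.clone().detach().to(self.device)
        images.requires_grad = True

        # One signed-gradient ascent step on the cross-entropy loss.
        loss = nn.CrossEntropyLoss()(self.model(images), labels)
        grad = torch.autograd.grad(loss, images)[0]
        adv_images = images + self.eps * grad.sign()

        # Keep the result in the valid [0, 1] image range.
        return torch.clamp(adv_images, 0, 1).detach()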

Useful functions

atk.set_mode_targeted_least_likely(kth_min)  # Targeted attack
atk.set_return_type(type='int')  # Return values [0, 255]
atk = torchattacks.MultiAttack([atk1, ..., atk99])  # Combine attacks
atk.save(data_loader, save_path=None, verbose=True, return_verbose=False)  # Save adversarial images

Fast computation

Refer to Performance Comparison.

Table of Contents

  1. Requirements and Installation
  2. Getting Started
  3. Performance Comparison
  4. Citation
  5. Contribution
  6. Recommended Sites and Packages

Requirements and Installation

📋 Requirements

  • PyTorch version >=1.4.0
  • Python version >=3.6

🔨 Installation

pip install torchattacks

Getting Started

⚠️ Precautions

  • All images should be scaled to [0, 1] with transforms.ToTensor() before being used in attacks. To keep the attacks easy to use, reverse-normalization is not included in the attack process. To apply input normalization, please add a normalization layer to the model (see the sketch after this list). Please refer to code or nbviewer.
  • All models should return ONLY ONE vector of shape (N, C), where N is the number of inputs and C is the number of classes. Since most models in torchvision.models already return a single (N, C) vector, torchattacks supports only this form of output. Please check the shape of the model's output carefully. If the model returns multiple outputs, please refer to the demo.
  • Set torch.backends.cudnn.deterministic = True to get the same adversarial examples with a fixed random seed. Some operations on GPU float tensors are non-deterministic [discuss]. If you want to get the same results for the same inputs, please set torch.backends.cudnn.deterministic = True [ref].
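
As a minimal sketch, input normalization can be prepended to the model so that attacks operate directly on [0, 1] images. The mean/std values and base_model below are placeholders, not part of the library.

import torch
import torch.nn as nn

class Normalize(nn.Module):
    # Applies channel-wise normalization as the first layer of the model,
    # so attacks can work on raw [0, 1] images.
    def __init__(self, mean, std):
        super().__init__()
        self.register_buffer('mean', torch.tensor(mean).view(1, -1, 1, 1))
        self.register_buffer('std', torch.tensor(std).view(1, -1, 1, 1))

    def forward(self, x):
        return (x - self.mean) / self.std

# Placeholder statistics; use the values your model was trained with.
model = nn.Sequential(
    Normalize(mean=[0.4914, 0.4822, 0.4465], std=[0.2471, 0.2435, 0.2616]),
    base_model,  # your trained classifier (assumed to exist)
)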

🚀 Demos

Given a model, images, and labels, adversarial images can be generated as follows:

import torchattacks
atk = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=4)
adv_images = atk(images, labels)

Torchattacks supports the following functions:

Targeted mode

  • Random target label:
# random labels as target labels.
atk.set_mode_targeted_random(n_classes)
  • Least likely label:
# label with the k-th smallest probability used as target labels.
atk.set_mode_targeted_least_likely(kth_min)
  • By custom function:
# label from mapping function
atk.set_mode_targeted_by_function(target_map_function=lambda images, labels:(labels+1)%10)
  • Return to default:
atk.set_mode_default()
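
A minimal usage sketch: switch to a targeted mode, generate the adversarial examples, then restore the default untargeted mode.

atk.set_mode_targeted_least_likely(1)  # kth_min=1: target the least-likely class
adv_images = atk(images, labels)
atk.set_mode_default()                 # back to untargeted attacks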

Return type

  • Return adversarial images with integer value (0-255).
atk.set_return_type(type='int')
  • Return adversarial images with float value (0-1).
atk.set_return_type(type='float')
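
For example, integer outputs are convenient for saving, but should be rescaled to [0, 1] before being fed back into the model (see also the loading code below).

atk.set_return_type(type='int')
adv_images = atk(images, labels)           # values in [0, 255]
outputs = model(adv_images.float() / 255)  # rescale before re-evaluating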

Save adversarial images

# Save
atk.save(data_loader, save_path="./data/sample.pt", verbose=True)
  
# Load
import torch
from torch.utils.data import DataLoader, TensorDataset
adv_images, labels = torch.load("./data/sample.pt")
  
# If set_return_type was 'int',
# adv_data = TensorDataset(adv_images.float()/255, labels)
# else,
adv_data = TensorDataset(adv_images, labels)
adv_loader = DataLoader(adv_data, batch_size=128, shuffle=False)
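
A minimal sketch of measuring robust accuracy on the loaded adversarial examples; model and device are assumed to be defined elsewhere.

correct = total = 0
model.eval()
with torch.no_grad():
    for adv, y in adv_loader:
        preds = model(adv.to(device)).argmax(dim=1)
        correct += (preds == y.to(device)).sum().item()
        total += y.size(0)
print(f"Robust accuracy: {100 * correct / total:.2f}%")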

Training/Eval during attack

# For RNN-based models, gradients cannot be calculated in eval mode,
# so the model should be switched to training mode during the attack (see the sketch below).
# The call below keeps the whole model in eval-mode behavior.
atk.set_training_mode(model_training=False, batchnorm_training=False, dropout_training=False)
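
For an RNN-based model, a minimal sketch would enable training mode for gradient computation while keeping batch-norm and dropout layers in their evaluation behavior:

atk.set_training_mode(model_training=True, batchnorm_training=False, dropout_training=False)
adv_images = atk(images, labels)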

Make a set of attacks

  • Strong attacks
atk1 = torchattacks.FGSM(model, eps=8/255)
atk2 = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=40, random_start=True)
atk = torchattacks.MultiAttack([atk1, atk2])
  • Binary search for CW
atk1 = torchattacks.CW(model, c=0.1, steps=1000, lr=0.01)
atk2 = torchattacks.CW(model, c=1, steps=1000, lr=0.01)
atk = torchattacks.MultiAttack([atk1, atk2])
  • Random restarts
atk1 = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=40, random_start=True)
atk2 = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=40, random_start=True)
atk = torchattacks.MultiAttack([atk1, atk2])
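
A combined attack is used exactly like a single attack; roughly speaking, each later attack is only applied to the images that the earlier attacks failed to fool.

adv_images = atk(images, labels)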

Here are demos of torchattacks.

  • White Box Attack with ImageNet (code, nbviewer): Using torchattacks to make adversarial examples with the ImageNet dataset to fool ResNet-18.
  • Transfer Attack with CIFAR10 (code, nbviewer): This demo provides an example of a black-box attack with two different models. First, adversarial datasets are generated from a holdout model on CIFAR10 and saved as a torch dataset. Second, the adversarial datasets are used to attack a target model.
  • Adversarial Training with MNIST (code, nbviewer): This demo shows how to do adversarial training with this repository, using the MNIST dataset and a custom model. Adversarial training is performed with PGD, and FGSM is then applied to evaluate the model.

Torchattacks also supports collaboration with other attack packages.

FoolBox

https://github.com/bethgelab/foolbox

from torchattacks.attack import Attack
import foolbox as fb

# L2BrendelBethge
class L2BrendelBethge(Attack):
    def __init__(self, model):
        super(L2BrendelBethge, self).__init__("L2BrendelBethge", model)
        self.fmodel = fb.PyTorchModel(self.model, bounds=(0,1), device=self.device)
        self.init_attack = fb.attacks.DatasetAttack()
        self.adversary = fb.attacks.L2BrendelBethgeAttack(init_attack=self.init_attack)
        self._attack_mode = 'only_default'
        
    def forward(self, images, labels):
        images, labels = images.to(self.device), labels.to(self.device)
        
        # DatasetAttack
        batch_size = len(images)
        batches = [(images[:batch_size//2], labels[:batch_size//2]),
                   (images[batch_size//2:], labels[batch_size//2:])]
        self.init_attack.feed(model=self.fmodel, inputs=batches[0][0]) # feed 1st batch of inputs
        self.init_attack.feed(model=self.fmodel, inputs=batches[1][0]) # feed 2nd batch of inputs
        criterion = fb.Misclassification(labels)
        init_advs = self.init_attack.run(self.fmodel, images, criterion)
        
        # L2BrendelBethge
        adv_images = self.adversary.run(self.fmodel, images, labels, starting_points=init_advs)
        return adv_images

atk = L2BrendelBethge(model)
atk.save(data_loader=test_loader, save_path="_temp.pt", verbose=True)

Adversarial-Robustness-Toolbox (ART)

https://github.com/IBM/adversarial-robustness-toolbox

import torch
import torch.nn as nn
import torch.optim as optim

from torchattacks.attack import Attack

import art.attacks.evasion as evasion
from art.classifiers import PyTorchClassifier

# SaliencyMapMethod (or Jacobian based saliency map attack)
class JSMA(Attack):
    def __init__(self, model, theta=1/255, gamma=0.15, batch_size=128):
        super(JSMA, self).__init__("JSMA", model)
        self.classifier = PyTorchClassifier(
                            model=self.model, clip_values=(0, 1),
                            loss=nn.CrossEntropyLoss(),
                            optimizer=optim.Adam(self.model.parameters(), lr=0.01),
                            input_shape=(1, 28, 28), nb_classes=10)
        self.adversary = evasion.SaliencyMapMethod(classifier=self.classifier,
                                                   theta=theta, gamma=gamma,
                                                   batch_size=batch_size)
        self.target_map_function = lambda labels: (labels+1)%10
        self._attack_mode = 'only_default'
        
    def forward(self, images, labels):
        adv_images = self.adversary.generate(images, self.target_map_function(labels))
        return torch.tensor(adv_images).to(self.device)

atk = JSMA(model)
atk.save(data_loader=test_loader, save_path="_temp.pt", verbose=True)

🔥 List of implemented papers

The distance measure is shown in parentheses after each attack name.

  • FGSM (Linf): Explaining and harnessing adversarial examples (Goodfellow et al., 2014)
  • BIM (Linf): Adversarial Examples in the Physical World (Kurakin et al., 2016). Remark: Basic iterative method or Iterative-FGSM
  • CW (L2): Towards Evaluating the Robustness of Neural Networks (Carlini et al., 2016)
  • RFGSM (Linf): Ensemble Adversarial Training: Attacks and Defenses (Tramèr et al., 2017). Remark: Random initialization + FGSM
  • PGD (Linf): Towards Deep Learning Models Resistant to Adversarial Attacks (Madry et al., 2017). Remark: Projected Gradient Method
  • PGDL2 (L2): Towards Deep Learning Models Resistant to Adversarial Attacks (Madry et al., 2017). Remark: Projected Gradient Method
  • MIFGSM (Linf): Boosting Adversarial Attacks with Momentum (Dong et al., 2017). 😍 Contributors: zhuangzi926, huitailangyz
  • TPGD (Linf): Theoretically Principled Trade-off between Robustness and Accuracy (Zhang et al., 2019)
  • EOTPGD (Linf): Comment on "Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network" (Zimmermann, 2019). Remark: EOT+PGD
  • APGD (Linf, L2): Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks (Croce et al., 2020)
  • APGDT (Linf, L2): Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks (Croce et al., 2020). Remark: Targeted APGD
  • FAB (Linf, L2, L1): Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack (Croce et al., 2019)
  • Square (Linf, L2): Square Attack: a query-efficient black-box adversarial attack via random search (Andriushchenko et al., 2019)
  • AutoAttack (Linf, L2): Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks (Croce et al., 2020). Remark: APGD+APGDT+FAB+Square
  • DeepFool (L2): DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks (Moosavi-Dezfooli et al., 2016)
  • OnePixel (L0): One pixel attack for fooling deep neural networks (Su et al., 2019)
  • SparseFool (L0): SparseFool: a few pixels make a big difference (Modas et al., 2019)
  • DIFGSM (Linf): Improving Transferability of Adversarial Examples with Input Diversity (Xie et al., 2019). 😍 Contributor: taobai
  • TIFGSM (Linf): Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks (Dong et al., 2019). 😍 Contributor: taobai
  • Jitter (Linf): Exploring Misclassifications of Robust Neural Networks to Enhance Adversarial Attacks (Schwinn, Leo, et al., 2021)
  • Pixle (L0): Pixle: a fast and effective black-box attack based on rearranging pixels (Pomponi, Jary, et al., 2022)

Performance Comparison

For a fair comparison, Robustbench is used. As comparison packages, the most-cited and recently updated libraries were selected:

  • Foolbox: 242 citations and last update 2021.06.
  • ART: 96 citations and last update 2021.06.

The table below reports robust accuracy against each attack and the elapsed time on the first 50 images of CIFAR10. For L2 attacks, the average L2 distance between the adversarial images and the original images is also recorded. All experiments were done on a GeForce RTX 2080. For the latest version, please refer to here (code, nbviewer).

FGSM (Linf)
  • Torchattacks: Standard 34% (54ms), Wong2020Fast 48% (5ms), Rice2020Overfitting 62% (82ms)
  • Foolbox*: Standard 34% (15ms), Wong2020Fast 48% (8ms), Rice2020Overfitting 62% (30ms)
  • ART: Standard 34% (214ms), Wong2020Fast 48% (59ms), Rice2020Overfitting 62% (768ms)

PGD (Linf)
  • Torchattacks: Standard 0% (174ms), Wong2020Fast 44% (52ms), Rice2020Overfitting 58% (1348ms) 👑 Fastest
  • Foolbox*: Standard 0% (354ms), Wong2020Fast 44% (56ms), Rice2020Overfitting 58% (1856ms)
  • ART: Standard 0% (1384ms), Wong2020Fast 44% (437ms), Rice2020Overfitting 58% (4704ms)

CW† (L2)
  • Torchattacks: Standard 0% / 0.40 (2596ms), Wong2020Fast 14% / 0.61 (3795ms), Rice2020Overfitting 22% / 0.56 (43484ms) 👑 Highest Success Rate, 👑 Fastest
  • Foolbox*: Standard 0% / 0.40 (2668ms), Wong2020Fast 32% / 0.41 (3928ms), Rice2020Overfitting 34% / 0.43 (44418ms)
  • ART: Standard 0% / 0.59 (196738ms), Wong2020Fast 24% / 0.70 (66067ms), Rice2020Overfitting 26% / 0.65 (694972ms)

PGD (L2)
  • Torchattacks: Standard 0% / 0.41 (184ms), Wong2020Fast 68% / 0.5 (52ms), Rice2020Overfitting 70% / 0.5 (1377ms) 👑 Fastest
  • Foolbox*: Standard 0% / 0.41 (396ms), Wong2020Fast 68% / 0.5 (57ms), Rice2020Overfitting 70% / 0.5 (1968ms)
  • ART: Standard 0% / 0.40 (1364ms), Wong2020Fast 68% / 0.5 (429ms), Rice2020Overfitting 70% / 0.5 (4777ms)

* Note that Foolbox returns accuracy and adversarial images simultaneously, so the actual time for generating adversarial images may be shorter than recorded.

† Since the binary search for the constant c can be time-consuming, torchattacks supports MultiAttack for grid searching over c.

Citation

If you use this package, please cite the following BibTeX (SemanticScholar, GoogleScholar):

@article{kim2020torchattacks,
  title={Torchattacks: A pytorch repository for adversarial attacks},
  author={Kim, Hoki},
  journal={arXiv preprint arXiv:2010.01950},
  year={2020}
}

Contribution

All kinds of contributions are always welcome! 😊

If you are interested in adding a new attack to this repo or fixing some issues, please have a look at CONTRIBUTING.md.

Recommended Sites and Packages
