
# torchattack

πŸ›‘ A curated list of adversarial attacks in PyTorch, with a focus on transferable black-box attacks.

```shell
pip install torchattack
```

## Highlights

  • πŸ›‘οΈ A curated collection of adversarial attacks implemented in PyTorch.
  • πŸ” Focuses on gradient-based transferable black-box attacks.
  • πŸ“¦ Easily load pretrained models from torchvision or timm using AttackModel.
  • πŸ”„ Simple interface to initialize attacks with create_attack.
  • πŸ”§ Extensively typed for better code quality and safety.
  • πŸ“Š Tooling for fooling rate metrics and model evaluation in eval.
  • πŸ” Numerous attacks reimplemented for readability and efficiency (TGR, VDC, etc.).

## Documentation

torchattack's docs are available at [docs.swo.moe/torchattack](https://docs.swo.moe/torchattack).

## Usage

```python
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
```

Load a pretrained model to attack from either `torchvision` or `timm`.

```python
from torchattack import AttackModel

# Load a model with `AttackModel`
model = AttackModel.from_pretrained(model_name='resnet50', device=device)
# `AttackModel` automatically attaches the model's `transform` and `normalize` functions
transform, normalize = model.transform, model.normalize

# Additionally, to explicitly specify where to load the pretrained model from (timm or torchvision),
# prepend the model name with 'timm/' or 'tv/' respectively, or use the `from_timm` argument, e.g.
vit_b16 = AttackModel.from_pretrained(model_name='timm/vit_base_patch16_224', device=device)
inv_v3 = AttackModel.from_pretrained(model_name='tv/inception_v3', device=device)
pit_b = AttackModel.from_pretrained(model_name='pit_b_224', device=device, from_timm=True)
```
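
Keeping `transform` and `normalize` separate lets attacks craft perturbations directly in raw [0, 1] pixel space. A minimal sketch of a plain forward pass under that convention (the image path `example.png` is a hypothetical placeholder, and we assume the `AttackModel` wrapper forwards calls to the underlying model):

```python
from PIL import Image

# Hypothetical input image; any RGB image works here
image = Image.open('example.png').convert('RGB')

# `transform` resizes/crops and converts to a [0, 1] tensor without normalizing,
# so perturbations can later be applied in pixel space
x = transform(image).unsqueeze(0).to(device)

# Apply `normalize` only at the forward pass
logits = model(normalize(x))
print(logits.argmax(dim=1))
```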

Initialize an attack by importing its attack class.

```python
from torchattack import FGSM, MIFGSM

# Initialize an attack
attack = FGSM(model, normalize, device)

# Initialize an attack with extra params
attack = MIFGSM(model, normalize, device, eps=0.03, steps=10, decay=1.0)
```
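
Once initialized, the attack object is called directly on a batch of inputs and labels to produce adversarial examples. A minimal sketch with random placeholder data (the shapes and the `(x, y)` call signature are assumptions, not fixed by the snippet above):

```python
# Placeholder batch: 4 images in [0, 1] with random ImageNet-style labels
x = torch.rand(4, 3, 224, 224, device=device)
y = torch.randint(0, 1000, (4,), device=device)

# Run the attack to craft adversarial examples
x_adv = attack(x, y)

# The L-inf perturbation should stay within the eps budget (0.03 above)
print((x_adv - x).abs().max().item())
```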

Initialize an attack by its name with `create_attack()`.

```python
from torchattack import create_attack

# Initialize FGSM attack with create_attack
attack = create_attack('FGSM', model, normalize, device)

# Initialize PGD attack with specific eps with create_attack
attack = create_attack('PGD', model, normalize, device, eps=0.03)

# Initialize MI-FGSM attack with extra args with create_attack
attack_args = {'steps': 10, 'decay': 1.0}
attack = create_attack('MIFGSM', model, normalize, device, eps=0.03, **attack_args)
```

Check out `examples/` and `torchattack.eval.runner` for full examples.
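
For reference, the fooling rate tracked by the eval tooling is the fraction of originally correctly classified samples whose prediction flips under attack. A hand-rolled sketch of that computation (reusing `model`, `normalize`, `attack`, `x`, and `y` from the snippets above, rather than the actual `torchattack.eval` helpers):

```python
# Clean predictions (no gradients needed for evaluation)
with torch.no_grad():
    clean_pred = model(normalize(x)).argmax(dim=1)

# Adversarial predictions
x_adv = attack(x, y)
with torch.no_grad():
    adv_pred = model(normalize(x_adv)).argmax(dim=1)

# Among samples the model classified correctly, count how many flipped
correct = clean_pred == y
flipped = correct & (adv_pred != y)
fooling_rate = flipped.sum() / correct.sum().clamp(min=1)
print(f'Fooling rate: {fooling_rate.item():.2%}')
```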

## Attacks

| Name | Class Name | Publication | Paper (Open Access) |
| --- | --- | --- | --- |
| **Gradient-based attacks** | | | |
| FGSM | `FGSM` | ICLR 2015 | Explaining and Harnessing Adversarial Examples |
| PGD | `PGD` | ICLR 2018 | Towards Deep Learning Models Resistant to Adversarial Attacks |
| PGD (L2) | `PGDL2` | ICLR 2018 | Towards Deep Learning Models Resistant to Adversarial Attacks |
| MI-FGSM | `MIFGSM` | CVPR 2018 | Boosting Adversarial Attacks with Momentum |
| DI-FGSM | `DIFGSM` | CVPR 2019 | Improving Transferability of Adversarial Examples with Input Diversity |
| TI-FGSM | `TIFGSM` | CVPR 2019 | Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks |
| NI-FGSM | `NIFGSM` | ICLR 2020 | Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks |
| SI-NI-FGSM | `SINIFGSM` | ICLR 2020 | Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks |
| DR | `DR` | CVPR 2020 | Enhancing Cross-Task Black-Box Transferability of Adversarial Examples With Dispersion Reduction |
| VMI-FGSM | `VMIFGSM` | CVPR 2021 | Enhancing the Transferability of Adversarial Attacks through Variance Tuning |
| VNI-FGSM | `VNIFGSM` | CVPR 2021 | Enhancing the Transferability of Adversarial Attacks through Variance Tuning |
| Admix | `Admix` | ICCV 2021 | Admix: Enhancing the Transferability of Adversarial Attacks |
| FIA | `FIA` | ICCV 2021 | Feature Importance-aware Transferable Adversarial Attacks |
| PNA-PatchOut | `PNAPatchOut` | AAAI 2022 | Towards Transferable Adversarial Attacks on Vision Transformers |
| NAA | `NAA` | CVPR 2022 | Improving Adversarial Transferability via Neuron Attribution-Based Attacks |
| SSA | `SSA` | ECCV 2022 | Frequency Domain Model Augmentation for Adversarial Attack |
| TGR | `TGR` | CVPR 2023 | Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization |
| ILPD | `ILPD` | NeurIPS 2023 | Improving Adversarial Transferability via Intermediate-level Perturbation Decay |
| DeCoWA | `DeCoWA` | AAAI 2024 | Boosting Adversarial Transferability across Model Genus by Deformation-Constrained Warping |
| VDC | `VDC` | AAAI 2024 | Improving the Adversarial Transferability of Vision Transformers with Virtual Dense Connection |
| ATT | `ATT` | NeurIPS 2024 | Boosting the Transferability of Adversarial Attack on Vision Transformer with Adaptive Token Tuning |
| **Generative attacks** | | | |
| CDA | `CDA` | NeurIPS 2019 | Cross-Domain Transferability of Adversarial Perturbations |
| LTP | `LTP` | NeurIPS 2021 | Learning Transferable Adversarial Perturbations |
| BIA | `BIA` | ICLR 2022 | Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains |
| GAMA | `GAMA` | NeurIPS 2022 | GAMA: Generative Adversarial Multi-Object Scene Attacks |
| **Others** | | | |
| DeepFool | `DeepFool` | CVPR 2016 | DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks |
| GeoDA | `GeoDA` | CVPR 2020 | GeoDA: A Geometric Framework for Black-box Adversarial Attacks |
| SSP | `SSP` | CVPR 2020 | A Self-supervised Approach for Adversarial Robustness |

## Development

For instructions on installing dependencies, running tests, and building documentation, see Development - torchattack in the docs.

## License

MIT
