
AdvDrop

Code for "AdvDrop: Adversarial Attack to DNNs by Dropping Information(ICCV 2021)."

Humans can easily recognize visual objects with lost information: even when most details are gone and only contours remain, e.g., in cartoons. For Deep Neural Networks (DNNs), however, recognizing such abstract objects (visual items with lost information) is still a challenge. In this work, we investigate this issue from an adversarial viewpoint: does the performance of DNNs decrease even for images that lose only a little information? To this end, we propose a novel adversarial attack, named AdvDrop, which crafts adversarial examples by dropping existing information from images. Most previous adversarial attacks explicitly add extra disturbing information to clean images. In contrast, our work explores the adversarial robustness of DNN models from a new perspective, dropping imperceptible details to craft adversarial examples.
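The dropping step itself is easy to illustrate. Below is a minimal, heavily simplified sketch of the idea, not the paper's attack (which learns the quantization table with gradients; see `infod_sample.py`): quantize the DCT coefficients of 8x8 image blocks with a fixed step, reconstruct the image, and check whether a pretrained classifier's prediction changes. The file name `example.jpg`, the fixed step `q`, and the use of scipy/torchvision here are assumptions of this sketch, not part of the repository.

```python
# Simplified illustration of "information dropping" (NOT the optimized AdvDrop
# attack): quantize 8x8 DCT blocks with a fixed step q, then compare a
# pretrained classifier's predictions on the clean and information-dropped image.
import numpy as np
import torch
from PIL import Image
from scipy.fft import dctn, idctn
from torchvision import models, transforms

def drop_information(img: np.ndarray, q: float = 30.0) -> np.ndarray:
    """Quantize the DCT coefficients of each 8x8 block of an HxWxC uint8 image."""
    h, w, c = img.shape
    out = img.astype(np.float32).copy()  # edge pixels outside full blocks are kept as-is
    for ch in range(c):
        for y in range(0, h - h % 8, 8):
            for x in range(0, w - w % 8, 8):
                block = img[y:y+8, x:x+8, ch].astype(np.float32) - 128.0
                coeffs = dctn(block, norm="ortho")
                coeffs = np.round(coeffs / q) * q        # the information-dropping step
                out[y:y+8, x:x+8, ch] = idctn(coeffs, norm="ortho") + 128.0
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    model = models.resnet50(pretrained=True).eval()
    prep = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    # "example.jpg" is a placeholder path for this sketch.
    img = np.array(Image.open("example.jpg").convert("RGB").resize((224, 224)))
    dropped = drop_information(img, q=60.0)
    with torch.no_grad():
        p_clean = model(prep(Image.fromarray(img)).unsqueeze(0)).argmax(1).item()
        p_drop = model(prep(Image.fromarray(dropped)).unsqueeze(0)).argmax(1).item()
    print("clean prediction:", p_clean, " after dropping information:", p_drop)
```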

[Paper link](http://arxiv.org/abs/2108.09034)

Installation

We highly recommend using conda.

```
conda create -n advdrop_env python=3.7
source activate advdrop_env
```

After activating the virtual environment, install PyTorch, NumPy, and torchattacks with:

```
pip install --user [the package name]
```
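For example, assuming the usual PyPI names for the packages above: `pip install --user torch numpy torchattacks`.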

Download the dataset.

Usage

Quick Start

```
python infod_sample_orig.py
```

  • Parameters can be specified in infod_sample.py.
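To sanity-check the generated adversarial images, one can re-run a pretrained classifier over the saved outputs. The following is only an illustrative sketch: the folder name `adv_images/` and the `.png` extension are assumptions, not necessarily what the script actually writes.

```python
# Hypothetical sanity check: print a pretrained classifier's prediction for
# each saved adversarial image. Folder name and file extension are assumptions.
import glob
import torch
from PIL import Image
from torchvision import models, transforms

prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
model = models.resnet50(pretrained=True).eval()

with torch.no_grad():
    for path in sorted(glob.glob("adv_images/*.png")):
        x = prep(Image.open(path).convert("RGB")).unsqueeze(0)
        print(path, "->", model(x).argmax(1).item())
```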

An example

Adv Images

Acknowledgments

Citation

Changes that need to be made
