Adversarial attacks affect not only image classification but also object detection. In this project, we introduce both global perturbation and patch-based adversarial attacks to assess the robustness of object detection models. Our framework integrates seamlessly with the widely used mmdet library, providing an accessible platform for researchers and developers.
- **Integrated with mmdet**
  - Compatible with a wide range of models from mmdet. Assess their adversarial robustness using the provided config and weight files.
- **Global perturbation attack**
  - Employ FGSM, BIM, and PGD techniques to test adversarial robustness.
- **Patch-based attack**
  - Adversarial patches optimized via gradient descent.
  - Shared patch for objects of the same class.
  - Each object receives a central patch.
- **Visualization**
  - Adversarial images can be saved easily for comparison, analysis, and data augmentation.
- **Distributed training and testing**
  - PyTorch distributed data-parallel training and testing are supported for faster training and testing.
## Dataset

We evaluate the robustness of detection models on the COCO 2017 validation dataset. Please download the COCO 2017 dataset first. By default, the validation set (the `val2017` image folder) and its annotations are needed. If you want to use your own datasets, please convert them to COCO style with the corresponding `metainfo`.
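For reference, a COCO-style annotation file is a single JSON with `images`, `annotations`, and `categories` lists. The snippet below is only a minimal, hypothetical sketch of that structure (all values are placeholders, not part of this project):

```python
import json

# Minimal COCO-style annotation skeleton (placeholder values, for illustration only).
coco_style = {
    "images": [
        {"id": 1, "file_name": "000001.jpg", "width": 640, "height": 480},
    ],
    "annotations": [
        # bbox format is [x, y, width, height]; category_id must match an entry in "categories".
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [100, 120, 50, 80], "area": 4000, "iscrowd": 0},
    ],
    "categories": [
        {"id": 1, "name": "cls_1"},
    ],
}

with open("annotations/instances_val2017.json", "w") as f:
    json.dump(coco_style, f)
```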
## Detection Model

- Train object detectors using mmdet, or directly download the detector weight files and config files provided by mmdet. We recommend using the complete config file generated by mmdet itself.
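One way to obtain such a complete config is to load the downloaded config and dump the fully resolved version. This is a sketch assuming mmdet 3.x with mmengine; the file names are placeholders:

```python
from mmengine.config import Config

# Load a (possibly partial) mmdet config and dump the fully resolved version,
# including everything inherited from its _base_ files. File names are placeholders.
cfg = Config.fromfile('configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py')
cfg.dump('faster-rcnn_r50_fpn_1x_coco_full.py')
```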
## Modify detector config files
- Modify all the `data_root` attributes in detector config files to your correct path, for example, `data_root='/path_to_your_datasets/coco2017/'`. There are multiple `data_root` attributes in an mmdet-style config file. Please make sure that all `data_root` attributes are modified correctly.
- Modify the `ann_file` attribute in the `test_evaluator` attribute to your correct path, for example, `ann_file='/path_to_your_dataset/coco2017/annotations/instances_val2017.json'`.
- If you use your own datasets, the config files generated in the training process can usually be used directly. Specifically, please make sure that you have provided the `metainfo` attribute for the `dataset` attribute in `train_dataloader` and `test_dataloader` as follows:

```python
metainfo = {'classes': ['cls_1', 'cls_2', '...', 'cls_n']}
train_dataloader = dict(
    batch_size=4,
    num_workers=4,
    dataset=dict(
        data_root='/path_to_your_datasets/coco2017/',
        metainfo=metainfo,
        ann_file='annotations/instances_val2017.json',
        data_prefix=dict(img='images')))
test_dataloader = dict(
    batch_size=4,
    num_workers=4,
    dataset=dict(
        data_root='/path_to_your_datasets/coco2017/',
        metainfo=metainfo,
        ann_file='annotations/instances_val2017.json',
        data_prefix=dict(img='images')))
```
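For reference, the corresponding `test_evaluator` entry typically looks like the sketch below; the `type`, `metric`, and `format_only` values follow the usual mmdet `CocoMetric` convention and may differ in your config:

```python
test_evaluator = dict(
    type='CocoMetric',
    ann_file='/path_to_your_dataset/coco2017/annotations/instances_val2017.json',
    metric='bbox',
    format_only=False)
```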
## Global perturbation attack
- Modify the `detector` attribute in the `configs/global_demo.py` file according to your detector config file and weight file paths.
- Run the following command to start:

```
CUDA_VISIBLE_DEVICES=0 python run.py --cfg configs/global_demo.py
```

Besides, you can also overwrite the `detector` attribute in the console and start with the following command:

```
CUDA_VISIBLE_DEVICES=0 python run.py --cfg configs/demo.py --cfg-options detector.cfg_file=/path_to_your_detector_cfg_file detector.weight_file=/path_to_your_detector_weight_file
```
- For more attack configurations, please refer to `configs/global/base.py`. You can overwrite them in the `global_demo.py` file as you want. So far, FGSM, BIM, MIM, TIM, DI_FGSM, SI_NI_FGSM, VMI_FGSM, and PGD attack methods are supported for the global perturbation attack.
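As a rough illustration of what a global perturbation attack such as PGD does, here is a generic, self-contained sketch in plain PyTorch. It is not this project's implementation; `detector_loss` is a hypothetical stand-in for a callable that maps perturbed images to the detector's scalar loss:

```python
import torch

def pgd_attack(images, detector_loss, eps=2 / 255, alpha=0.5 / 255, steps=10):
    """Generic L-infinity PGD sketch: maximize the detection loss within an eps-ball."""
    adv = images.clone().detach()
    # Random start inside the eps-ball.
    adv = (adv + torch.empty_like(adv).uniform_(-eps, eps)).clamp(0, 1)

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = detector_loss(adv)  # scalar detection loss on the perturbed images
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                   # ascend the loss
            adv = images + (adv - images).clamp(-eps, eps)    # project back into the eps-ball
            adv = adv.clamp(0, 1)                             # keep a valid pixel range
        adv = adv.detach()
    return adv
```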
## Patch-based attack
- Modify the `detector` attribute in the `configs/patch_demo.py` file according to your detector config file and weight file paths.
- Run the following command to start:

```
CUDA_VISIBLE_DEVICES=0 python run.py --cfg configs/patch_demo.py
```
- For more attack configurations, please refer to `configs/patch/base.py`. You can overwrite them in the `patch_demo.py` file as you want.
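For intuition, the patch-based attack optimizes class-shared patches pasted at the center of each object box. The sketch below is a generic, self-contained illustration rather than this project's implementation; `detector_loss`, the box format, and the update objective are assumptions:

```python
import torch

def apply_patches(image, boxes, labels, patches):
    """Paste a class-shared square patch at the center of every object box.

    image: (3, H, W) tensor in [0, 1]; boxes: (N, 4) [x1, y1, x2, y2];
    labels: (N,) class ids; patches: dict {class_id: (3, P, P) learnable tensor}.
    """
    patched = image.clone()
    for box, label in zip(boxes.long().tolist(), labels.tolist()):
        patch = patches[label].clamp(0, 1)
        p = patch.shape[-1]
        cx, cy = (box[0] + box[2]) // 2, (box[1] + box[3]) // 2
        x1, y1 = cx - p // 2, cy - p // 2
        # Gradient flows back into the patch; image-boundary handling omitted for brevity.
        patched[:, y1:y1 + p, x1:x1 + p] = patch
    return patched

def optimize_patches(image, boxes, labels, detector_loss,
                     patch_size=64, lr=0.01, steps=200):
    # One learnable patch per class, shared by all objects of that class,
    # optimized by gradient descent to maximize the detection loss.
    patches = {c: torch.rand(3, patch_size, patch_size, requires_grad=True)
               for c in set(labels.tolist())}
    optimizer = torch.optim.Adam(patches.values(), lr=lr)
    for _ in range(steps):
        adv = apply_patches(image, boxes, labels, patches)
        loss = -detector_loss(adv)  # minimizing the negative loss ascends the detection loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return patches
```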
## Distributed training and testing
PyTorch DistributedDataParallel (DDP) is supported for faster training and testing. To start DDP training or testing, please refer to `run_dist.sh` for details.
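The exact launch flags used by this project live in `run_dist.sh`. Purely as an illustration, a typical single-node DDP launch with `torchrun` looks like the following; this assumes `run.py` picks up the standard torchrun environment variables:

```
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --nproc_per_node=4 run.py --cfg configs/global_demo.py
```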
## Evaluating non-mmdet detectors
If you want to evaluate non-mmdet detectors, you may try the following steps:

- Convert your dataset to COCO style.
- Generate an mmdet-style config file containing a `test_dataloader`, a `train_dataloader` (if needed), and a `test_evaluator`.
- Modify your detection model code. Specifically, you are required to add a `data_preprocessor`, a loss function, and a predict function. See `ares.attack.detection.custom.detector.CustomDetector` for details.
- Replace `detector = MODELS.build(detector_cfg.model)` in the `run.py` file with your detector initialization code.
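Roughly, the detector needs to expose an interface like the skeleton below. This is a schematic sketch under our own assumptions about method names and signatures; the authoritative reference is `ares.attack.detection.custom.detector.CustomDetector`:

```python
import torch.nn as nn

class MyCustomDetector(nn.Module):
    """Schematic wrapper for a non-mmdet detector (method and attribute names are assumptions)."""

    def __init__(self, model, data_preprocessor):
        super().__init__()
        self.model = model                          # your existing detector
        self.data_preprocessor = data_preprocessor  # mmdet-style normalization / batching

    def loss(self, batch_inputs, batch_data_samples):
        # Return a loss that is differentiable w.r.t. the input images,
        # so the attack can compute gradients through the detector.
        return self.model.compute_loss(batch_inputs, batch_data_samples)  # placeholder call

    def predict(self, batch_inputs, batch_data_samples):
        # Return predictions (boxes, scores, labels) for evaluation and visualization.
        return self.model.infer(batch_inputs, batch_data_samples)  # placeholder call
```

You would then construct this wrapper in `run.py` in place of `MODELS.build(detector_cfg.model)`.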
## Evaluation of some object detection models
Attack settings: global perturbation attack using PGD with eps=2 under the L$\infty$ norm.

| Detector | Config | Weight | IoU | Area | MaxDets | AP (clean) | AP (attacked) |
|---|---|---|---|---|---|---|---|
| Faster R-CNN | config | weight | 0.50:0.95 | all | 100 | 0.422 | 0.041 |
| YOLO v3 | config | weight | 0.50:0.95 | all | 100 | 0.337 | 0.062 |
| SSD | config | weight | 0.50:0.95 | all | 100 | 0.295 | 0.039 |
| RetinaNet | config | weight | 0.50:0.95 | all | 100 | 0.365 | 0.027 |
| CenterNet | config | weight | 0.50:0.95 | all | 100 | 0.401 | 0.070 |
| FCOS | config | weight | 0.50:0.95 | all | 100 | 0.422 | 0.045 |
| DETR | config | weight | 0.50:0.95 | all | 100 | 0.397 | 0.074 |
| Deformable DETR | config | weight | 0.50:0.95 | all | 100 | 0.469 | 0.067 |
| DINO | config | weight | 0.50:0.95 | all | 100 | 0.570 | 0.086 |
| YOLOX | config | weight | 0.50:0.95 | all | 100 | 0.491 | 0.098 |
## Visualizations

Detector: FCOS. Adversarial image: PGD attack with eps=5/255 under the L$\infty$ setting. The figure compares GT bboxes on the clean image, predicted bboxes on the clean image, and predicted bboxes on the adversarial image.
Many thanks to these excellent open-source projects: