Attention-based Domain Adaptation for Single Stage Detectors

PyTorch implementation of Attention-based Domain Adaptation for Single Stage Detectors.

Getting Started

This repository is based on the https://github.com/lufficc/SSD implementation of SSD. Please follow that repository for installing the requirements and for the train/test procedure. Our code was run with the following versions:

  1. PyTorch == 1.6

  2. Python >= 3.6

Dataset

For this work, we follow the same dataset setup as EveryPixelMatters.

Modify the path_catalogs file so that it points to your dataset locations.
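
For illustration, an entry in that file might look like the sketch below; the variable names, dataset keys, and directory layout here are assumptions, so follow the format actually used in path_catalogs.

    # Illustrative sketch only -- the real path_catalogs layout may differ.
    DATA_DIR = "/path/to/datasets"

    DATASETS = {
        "cityscapes_train": {
            "data_dir": "cityscapes/leftImg8bit/train",
            "ann_file": "cityscapes/annotations/train.json",
        },
        "foggy_cityscapes_val": {
            "data_dir": "foggy_cityscapes/leftImg8bit_foggy/val",
            "ann_file": "foggy_cityscapes/annotations/val.json",
        },
    }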

Training

We train on a single NVIDIA V100 GPU.

python train.py --config-file configs/<adaptation_task>.yaml
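
As a usage example, point the flag at one of the YAML files shipped under configs/; the file name below is hypothetical.

    # Hypothetical config name -- use an actual file from configs/
    python train.py --config-file configs/cityscapes_to_foggy.yaml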

Attention Module

Our attention head follows DETR's implementation and is used in the domain classifier. The same architectural design is followed in the YOLO implementation.
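
As a rough sketch (not the repository's actual code), the domain classifier can be thought of as a multi-head self-attention block over the backbone feature map followed by a per-location domain prediction; the class and argument names below are illustrative assumptions.

    # Illustrative sketch of an attention-based domain classifier.
    # Names and defaults are assumptions, not the repository's API.
    import torch.nn as nn

    class AttentionDomainClassifier(nn.Module):
        def __init__(self, channels=256, num_heads=8):
            super().__init__()
            # Multi-head self-attention, as in DETR's encoder layer.
            self.attn = nn.MultiheadAttention(channels, num_heads)
            self.norm = nn.LayerNorm(channels)
            # Per-location source/target domain logit.
            self.domain_head = nn.Linear(channels, 1)

        def forward(self, feat):                    # feat: (B, C, H, W)
            b, c, h, w = feat.shape
            x = feat.flatten(2).permute(2, 0, 1)    # (H*W, B, C) token sequence
            attn_out, _ = self.attn(x, x, x)        # self-attention over locations
            x = self.norm(x + attn_out)             # residual + layer norm
            return self.domain_head(x)              # (H*W, B, 1) domain logits

In this sketch, attending over all spatial locations lets each domain prediction use context from the whole feature map rather than treating every location independently; in adversarial training such a classifier is typically driven through a gradient reversal layer, which is not shown here.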

Citation

@article{vidit2022attention,
  title={Attention-based domain adaptation for single-stage detectors},
  author={Vidit, Vidit and Salzmann, Mathieu},
  journal={Machine Vision and Applications},
  volume={33},
  number={5},
  pages={1--14},
  year={2022},
  publisher={Springer}
}
