Adversarial regularization is a powerful technique for mitigating learned biases. This repo contains code for an ongoing project applying adversarial regularization to VQA.
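Concretely, the idea is to train the VQA model jointly with an adversary (for example, a question-only answer classifier) and to reverse the adversary's gradients where they flow back into the shared question encoder, so the encoder is discouraged from keeping the shortcut features the adversary exploits. The snippet below is a minimal, generic PyTorch sketch of such a gradient reversal layer; it is illustrative only and is not the implementation used in this repo.

```python
import torch


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales and negates gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the gradient so the shared encoder is pushed away from
        # representations that make the adversary's job easy.
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)
```

In training, the adversary's input would be wrapped as `grad_reverse(question_features, lambd)`, where `lambd` trades off the adversarial penalty against the main VQA loss.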
This repo is forked from Pythia. Pythia is a modular framework for Visual Question Answering research, which formed the basis for the winning entry to the VQA Challenge 2018 from Facebook AI Research (FAIR)’s A-STAR team. See their paper for more details.
Because this repo is a fork of Pythia, you can use the commands provided in the Pythia README to train and evaluate the model. It may also be helpful to consult the version of the Pythia README from the time this code was forked.
The VQA-CP dataset is available at https://www.cc.gatech.edu/~aagrawal307/vqa-cp/. You can use this script to download VQA-CP v2. The make_trainval_split.py script generates the train / valid / test split used in our paper.
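As a rough illustration of what such a split involves, the sketch below randomly carves a validation set out of the VQA-CP v2 training questions. The input file name, output names, and 90/10 ratio here are placeholder assumptions; make_trainval_split.py remains the authoritative way to reproduce the split from the paper.

```python
import json
import random

# Illustration only: the file names and the 90/10 ratio below are assumptions,
# not necessarily what make_trainval_split.py does.
random.seed(0)

with open("vqacp_v2_train_questions.json") as f:
    questions = json.load(f)  # VQA-CP v2 question files are (to our knowledge) flat JSON lists

random.shuffle(questions)
cut = int(0.9 * len(questions))
train_questions, valid_questions = questions[:cut], questions[cut:]

with open("train_questions.json", "w") as f:
    json.dump(train_questions, f)
with open("valid_questions.json", "w") as f:
    json.dump(valid_questions, f)
```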
Instructions for preprocessing can be found at https://github.com/gabegrand/adversarial-vqa/blob/master/data_prep/data_preprocess.md.