PyTorch training code and pretrained models for DACL (Deep Attentive Center Loss). We propose an attention network that adaptively selects a subset of significant feature elements for enhanced facial expression discrimination. The attention network estimates attention weights across all feature dimensions; these weights drive a sparse formulation of center loss that selectively enforces intra-class compactness and inter-class separation on the relevant information in the embedding space.
DACL is highly customizable and can be adapted to other problems in computer vision. For more details, see Facial Expression Recognition in the Wild via Deep Attentive Center Loss by Amir Hossein Farzaneh and Xiaojun Qi (WACV 2021).
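The core idea, as described above, is a center loss whose per-dimension distances are re-weighted by attention values estimated from the features. The module below is only an illustrative sketch of that formulation, not the authors' implementation: the two-layer attention head, the sigmoid activation, the hidden size, and all names are assumptions.

```python
import torch
import torch.nn as nn

class AttentiveCenterLoss(nn.Module):
    """Illustrative sketch of an attention-weighted (sparse) center loss.

    An attention head maps each feature vector to per-dimension weights in
    [0, 1]; the squared distance to the class center is then weighted
    element-wise by those values, so only the dimensions deemed relevant
    are pulled toward the center.
    """

    def __init__(self, num_classes, feat_dim, hidden_dim=128):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        # Hypothetical attention estimator; the paper's network may differ.
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, feat_dim),
            nn.Sigmoid(),  # per-dimension weights in [0, 1]
        )

    def forward(self, features, labels):
        # features: (B, feat_dim), labels: (B,) with class indices
        weights = self.attention(features)        # (B, feat_dim)
        centers = self.centers[labels]            # (B, feat_dim)
        sq_dist = (features - centers).pow(2)     # per-dimension squared distance
        # Attention-weighted intra-class variation, averaged over the batch.
        return 0.5 * (weights * sq_dist).sum(dim=1).mean()
```

The sigmoid keeps every weight in [0, 1], so feature dimensions judged irrelevant for a given sample contribute little to the intra-class pull, which is what makes the formulation selective.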
We provide the trained DACL model and other baseline models with softmax loss and center loss in the future.
| method | backbone | AffectNet acc. | RAF-DB acc. | url | logs | size |
|---|---|---|---|---|---|---|
| DACL | resnet18 | 65.20 % | 87.78 % | soon | NA | |
| center loss | resnet18 | 64.09 % | 87.06 % | soon | NA | |
| softmax loss | resnet18 | 63.86 % | 86.54 % | soon | NA | |
- Install the required dependencies:
  - torch >= 1.5
  - torchvision >= 0.6
  - scikit-learn
  - tqdm
- Download the MS-Celeb pretrained model for weight initialization: Google Drive (a minimal loading sketch follows this list)
- Clone this repository:
git clone https://github.com/amirhfarzaneh/dacl
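For reference, the downloaded checkpoint would typically be used to initialize the ResNet-18 backbone. The snippet below is a minimal sketch under the assumption that the file is a plain PyTorch state_dict; the filename `msceleb_resnet18.pth` and the key handling are hypothetical and may not match the provided file.

```python
import torch
from torchvision.models import resnet18

# 7 expression classes, matching the 00-06 class subdirectories below.
model = resnet18(num_classes=7)

# Hypothetical filename for the downloaded MS-Celeb checkpoint.
state = torch.load('msceleb_resnet18.pth', map_location='cpu')
state = state.get('state_dict', state)  # unwrap if saved as a checkpoint dict

# Drop the final FC weights, which belong to the face-recognition task
# and do not match the FER classifier's shape.
state = {k: v for k, v in state.items() if not k.startswith('fc.')}

missing, unexpected = model.load_state_dict(state, strict=False)
print('missing keys:', missing)        # expected: the FC layer we dropped
print('unexpected keys:', unexpected)  # ideally empty
```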
path/to/fer/dataset/
train/ # directory containing training images
00/ # subdirectory containing images from class 0 (neutral)
01/ # subdirectory containing images from class 1 (happy)
02/ # subdirectory containing images from class 2 (sad)
...
06/ # subdirectory containing images from class 6 (disgust)
valid/ # directory containing validation images
00/ # subdirectory containing images from class 0 (neutral)
01/ # subdirectory containing images from class 1 (happy)
02/ # subdirectory containing images from class 2 (sad)
...
06/ # subdirectory containing images from class 6 (disgust)
- Modify the dataset `root_dir` in `workspace.py` at line 14 (a dataset-loading sketch follows)
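Because the dataset follows the class-per-subdirectory layout above, it can be loaded with `torchvision.datasets.ImageFolder`. The snippet below is only a sketch of that idea, not the repository's actual data pipeline; the transforms, batch size, and `root_dir` value are placeholders.

```python
import os
import torch
from torchvision import datasets, transforms

root_dir = 'path/to/fer/dataset'  # must match the layout shown above

# Placeholder preprocessing; the repository's transforms may differ.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder(os.path.join(root_dir, 'train'), transform=transform)
valid_set = datasets.ImageFolder(os.path.join(root_dir, 'valid'), transform=transform)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)
valid_loader = torch.utils.data.DataLoader(valid_set, batch_size=64, shuffle=False, num_workers=4)

# ImageFolder assigns labels from the sorted subdirectory names:
# '00' -> 0 (neutral), '01' -> 1 (happy), ..., '06' -> 6 (disgust).
print(train_set.classes)
```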
To train DACL initialized with the MS-Celeb weights on a single GPU for 10 epochs, run:
python main.py --arch=resnet18 --lr=[LR] --wd=[WD] --bs=[BATCH-SIZE] --epochs=10 --alpha=[ALPHA] --lamb=[LAMBDA]
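For context, `--lamb` presumably balances the attentive center loss against the softmax (cross-entropy) term, and `--alpha` is the learning rate used to update the centers, following the usual center-loss convention; both interpretations are assumptions. The sketch below shows how such a joint objective is commonly assembled for one training step, reusing the `AttentiveCenterLoss` sketch from above; all hyperparameter values are placeholders, not recommended settings.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

lamb, alpha = 0.01, 0.5  # placeholders standing in for --lamb and --alpha

backbone = resnet18(num_classes=512)   # final FC reused as a 512-d embedding (illustration only)
classifier = nn.Linear(512, 7)
dacl_loss = AttentiveCenterLoss(num_classes=7, feat_dim=512)  # sketch class defined earlier

params = list(backbone.parameters()) + list(classifier.parameters())
optimizer = torch.optim.SGD(params, lr=0.01, momentum=0.9, weight_decay=1e-4)
# Centers (and the attention head, in this sketch) get their own optimizer with lr = alpha.
center_optimizer = torch.optim.SGD(dacl_loss.parameters(), lr=alpha)

criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)   # dummy batch
labels = torch.randint(0, 7, (8,))

features = backbone(images)
logits = classifier(features)
loss = criterion(logits, labels) + lamb * dacl_loss(features, labels)

optimizer.zero_grad()
center_optimizer.zero_grad()
loss.backward()
optimizer.step()
center_optimizer.step()
```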
If you use this code in your project or research, please cite using the following bibtex:
@InProceedings{Farzaneh_2021_WACV,
author = {Farzaneh, Amir Hossein and Qi, Xiaojun},
title = {Facial Expression Recognition in the Wild via Deep Attentive Center Loss},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
month = {January},
year = {2021},
pages = {2402-2411}
}