For the face modality, affect is detected from facial expressions in both static and dynamic sources, i.e., images and videos.
CK+ comprises posed facial expressions obtained from 123 adults under laboratory conditions. A total of 593 image sequences, ranging from 10 to 60 frames in length, were collected, of which a subset of 327 sequences is labelled with 7 discrete affect states.
Sadness, Surprise, Happiness, Fear, Anger, Contempt and Disgust.
Paper | Year | Metric | Code Link |
---|---|---|---|
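
Since CK+ is distributed as per-subject image sequences with separate emotion label files, a short indexing sketch can make the structure concrete. The directory names (`cohn-kanade-images/`, `Emotion/`), file patterns, and label-to-name mapping below are assumptions based on the standard distribution; verify them against your copy of the dataset.

```python
from pathlib import Path

# Assumed CK+ layout: cohn-kanade-images/Sxxx/yyy/*.png frame sequences and
# Emotion/Sxxx/yyy/*_emotion.txt files holding a single numeric label (1-7).
CKPLUS_LABELS = {1: "anger", 2: "contempt", 3: "disgust", 4: "fear",
                 5: "happiness", 6: "sadness", 7: "surprise"}

def load_ckplus_index(root):
    """Return (frame_paths, label_name) pairs for the labelled subset of CK+."""
    root = Path(root)
    samples = []
    for label_file in (root / "Emotion").glob("S*/*/*_emotion.txt"):
        label_id = int(float(label_file.read_text().strip()))
        subject, sequence = label_file.parent.parent.name, label_file.parent.name
        frames = sorted((root / "cohn-kanade-images" / subject / sequence).glob("*.png"))
        if frames:
            samples.append((frames, CKPLUS_LABELS[label_id]))
    return samples

# samples = load_ckplus_index("/path/to/CK+")  # ~327 labelled sequences expected
```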
AffectNet comprises in-the-wild face images collected from the web. A total of ~440,000 images were labelled with 7 discrete affect states and 2 continuous affect ratings.
Discrete - Sad, Surprise, Happy, Fear, Angry, Contempt, Disgust.
Continuous - Valence and Arousal.
Paper | Year | Metric | Code Link |
---|---|---|---|
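
Because each AffectNet image carries both a discrete expression label and continuous valence/arousal ratings, a small pandas sketch illustrates how the annotations might be read. The CSV name (`training.csv`) and column names (`subDirectory_filePath`, `expression`, `valence`, `arousal`) are assumptions based on the annotation files shipped with the dataset; check them against your release.

```python
import pandas as pd

def load_affectnet_annotations(csv_path):
    """Read the per-image annotations: discrete expression plus valence/arousal."""
    df = pd.read_csv(csv_path)
    # Column names are assumed from the dataset's annotation CSVs.
    return df[["subDirectory_filePath", "expression", "valence", "arousal"]]

# ann = load_affectnet_annotations("/path/to/AffectNet/training.csv")
# print(ann["expression"].value_counts())        # distribution of discrete labels
# print(ann[["valence", "arousal"]].describe())  # continuous ratings (roughly in [-1, 1])
```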
FER-2013 was collected via a keyword-based Google image search and comprises 35,887 images labelled with 7 discrete affect states.
Angry, Disgust, Fear, Happy, Sad, Surprise, Neutral.
Paper | Year | Metric | Code Link |
---|---|---|---|
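
FER-2013 is usually distributed as a single CSV in which each row stores a 48x48 grayscale face as a space-separated pixel string, so a short decoding sketch clarifies the format. The file name (`fer2013.csv`), column names (`emotion`, `pixels`, `Usage`), and label order below are assumptions based on the common Kaggle release.

```python
import numpy as np
import pandas as pd

# Assumed FER-2013 label order for the integer codes in the `emotion` column.
FER2013_LABELS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

def load_fer2013(csv_path, usage="Training"):
    """Return (images, labels) for one split; images are uint8 arrays of shape (N, 48, 48)."""
    df = pd.read_csv(csv_path)
    df = df[df["Usage"] == usage]  # assumed splits: Training / PublicTest / PrivateTest
    # Decode each space-separated pixel string into a 48x48 grayscale image.
    images = np.stack([np.asarray(p.split(), dtype=np.uint8).reshape(48, 48)
                       for p in df["pixels"]])
    labels = df["emotion"].to_numpy()
    return images, labels

# X_train, y_train = load_fer2013("fer2013.csv", usage="Training")
# print(X_train.shape, len(FER2013_LABELS))
```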