Affective Behaviour Analysis Using Pretrained Model with Facial Prior

Published: 2022 · Last Modified: 15 Nov 2023 · ECCV Workshops (6) 2022
Abstract: Affective behavior analysis has attracted researchers’ attention due to its broad applications. However, obtaining accurate annotations for massive face images is labor-intensive. Thus, we propose to exploit prior facial information via a Masked Auto-Encoder (MAE) pretrained on unlabeled face images. Furthermore, we combine an MAE-pretrained Vision Transformer (ViT) and an AffectNet-pretrained CNN to perform multi-task emotion recognition. We observe that expression and action unit (AU) scores are pure and intact features for valence-arousal (VA) regression. Accordingly, we use the AffectNet-pretrained CNN to extract expression scores, which are concatenated with the expression and AU scores from the ViT to form the final VA features. Moreover, we also propose a co-training framework with two parallel MAE-pretrained ViTs for the expression recognition task. To keep the two views independent, we randomly mask most patches during training. JS divergence is then minimized to make the predictions of the two views as consistent as possible. The results on ABAW4 show that our methods are effective: our team placed 2nd in the multi-task learning (MTL) challenge and 4th in the learning from synthetic data (LSD) challenge. Code is available at https://github.com/JackYFL/EMMA_CoTEX_ABAW4.
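The co-training consistency objective mentioned in the abstract can be illustrated with a minimal sketch of the Jensen-Shannon (JS) divergence between the two views' predicted expression distributions. The function below is an assumption-laden illustration using plain Python, not the authors' released implementation:

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions.

    In the co-training setup described above, p and q would be the
    softmax expression predictions of the two parallel ViT views;
    minimizing this value pushes the views toward agreement.
    (Illustrative sketch, not from the EMMA_CoTEX_ABAW4 code.)
    """
    # Mixture distribution M = (P + Q) / 2
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]

    def kl(a, b):
        # KL(a || b), skipping zero-probability terms of a
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    # JS(P, Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

JS divergence is symmetric and bounded (it equals 0 for identical distributions and ln 2 for disjoint ones), which makes it a stable choice for a mutual-consistency loss compared with plain KL divergence.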