Large-Scale Facial Expression Recognition Using Dual-Domain Affect Fusion for Noisy Labels

Published: 01 Jan 2023 · Last Modified: 15 May 2025 · CVPR Workshops 2023 · License: CC BY-SA 4.0
Abstract: Building models for human facial expression recognition (FER) is made difficult by subjective, ambiguous and noisy annotations. This is especially true when assigning a single emotion class label to facial expressions for large in-the-wild FER datasets. Human facial expressions often contain a mixture of different mental states, which exacerbates the problem of single labels when used to categorize emotions. Dimensional models of affect – such as those using valence and arousal – provide significant advantages over categorical models in terms of representing human emotional states but have remained relatively under-explored. In this paper, we propose an approach for dual-domain affect fusion which investigates the relationships between discrete emotion classes and their continuous representations. In order to address the underlying uncertainty of the labels, we formulate a set of mixed labels via a dual-domain label fusion module to exploit these intrinsic relationships. Finally, we show the benefits of the proposed approach using AffectNet, Aff-Wild, and MorphSet, in the presence of natural and synthetic noise.
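The abstract describes forming "mixed labels" by fusing a discrete emotion class with its continuous valence-arousal representation. The paper's actual fusion module is not specified here, so the following is only a minimal illustrative sketch of the general idea: each discrete class is anchored at a (valence, arousal) prototype, and a one-hot label is blended with a softmax over distances from a sample's valence-arousal point to those prototypes. The prototype coordinates, the blending weight `alpha`, the `temperature`, and the function name `fused_soft_label` are all illustrative assumptions, not the authors' method.

```python
import numpy as np

# Illustrative (valence, arousal) prototypes for a few discrete emotion
# classes. Coordinates are rough, literature-style values chosen for
# demonstration only -- they are NOT taken from the paper.
PROTOTYPES = {
    "happy":   ( 0.8,  0.5),
    "sad":     (-0.7, -0.4),
    "angry":   (-0.6,  0.7),
    "neutral": ( 0.0,  0.0),
}
CLASSES = list(PROTOTYPES)


def fused_soft_label(hard_label, va, alpha=0.5, temperature=0.5):
    """Blend a one-hot categorical label with a distance-based soft label.

    `va` is the sample's (valence, arousal) point. A softmax over negative
    Euclidean distances to the class prototypes gives a continuous-domain
    soft label, which is mixed with the one-hot label using weight `alpha`.
    All hyperparameters here are illustrative assumptions.
    """
    one_hot = np.array([1.0 if c == hard_label else 0.0 for c in CLASSES])
    protos = np.array([PROTOTYPES[c] for c in CLASSES])
    dist = np.linalg.norm(protos - np.asarray(va), axis=1)
    sim = np.exp(-dist / temperature)
    sim /= sim.sum()                      # normalize to a distribution
    return alpha * one_hot + (1.0 - alpha) * sim


# Example: a sample annotated "happy" whose measured valence-arousal point
# lies close to the "happy" prototype yields a soft label that still peaks
# at "happy" but assigns non-zero mass to nearby classes.
mixed = fused_soft_label("happy", va=(0.7, 0.4))
```

Because the soft label retains probability mass on neighboring classes, a model trained against it is penalized less harshly when an ambiguous or mislabeled sample is involved, which is the intuition behind using such fused targets under label noise.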