Deep Learning for Facial Action Unit Detection Under Large Head Poses

ECCV Workshops (3), 2016
Abstract: Facial expression communicates emotion, intention, and physical state, and regulates interpersonal behavior. Automated face analysis (AFA) for the detection, synthesis, and understanding of facial expression is a vital focus of basic research, with applications in behavioral science, mental and physical health and treatment, marketing, and human-robot interaction, among other domains. In previous work, facial action unit (AU) detection degrades seriously when head orientation exceeds $15^{\circ}$ to $20^{\circ}$. To achieve reliable AU detection over a wider range of head poses, we used 3D information to augment video data and a deep learning approach to feature selection and AU detection. Source videos were from the BP4D database (n = 41) and the FERA test set of BP4D-extended (n = 20). Both consist of naturally occurring facial expressions in response to a variety of emotion inductions. In the augmented video, pose ranged between $-18^{\circ}$ and $90^{\circ}$ for yaw and between $-54^{\circ}$ and $54^{\circ}$ for pitch. Results for action unit detection exceeded the state of the art, with as much as a 10% increase in $F_1$ measures.
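For context on the reported gains: AU detection is commonly scored per action unit with the frame-level $F_1$ measure, the harmonic mean of precision and recall over binary occurrence labels. The Python snippet below is a minimal sketch of that computation under these standard definitions; the label arrays and the AU they stand for are hypothetical and not taken from the paper.

    import numpy as np

    def f1_per_au(y_true, y_pred):
        """Frame-level F1 for one action unit, given binary occurrence labels and predictions."""
        tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
        fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
        fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

    # Illustrative usage with made-up frame labels for a single AU (hypothetical data):
    y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
    y_pred = np.array([0, 1, 0, 0, 1, 1, 1, 1])
    print(f1_per_au(y_true, y_pred))  # 0.8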