Learning deep representation for action unit detection with auxiliary facial attributes

Published: 01 Jan 2022, Last Modified: 05 Nov 2023. Int. J. Mach. Learn. Cybern. 2022.
Abstract: Action unit (AU) occurrence detection refers to recognizing the presence or absence of AUs, and it is a challenging task due to rigid and non-rigid facial motion, subtle facial changes, and short duration. Recently, most studies have focused on automatic AU detection through local representation learning or by exploiting correlations among AUs to enhance recognition performance, and significant progress has been made. However, the relationships between AUs and other facial attributes are ignored. This study formulates AU occurrence detection as multi-task learning instead of traditional single-task learning, so that AU detection benefits from the analysis of auxiliary facial attributes. The main contributions of this study are: (1) A multi-task facial analysis system (MTFAS) is constructed that integrates several facial attributes (facial landmarks, head pose, gender, and emotion). (2) Due to the diversity of the tasks, features from lower and higher layers are combined to avoid information loss. (3) Online difficult-sample selection and a weighted loss function are applied to weaken the impact of imbalanced data. Experiments are conducted on the widely used BP4D and DISFA databases, and the proposed MTFAS method is compared with the state of the art. MTFAS obtains an average F1 score of 0.622 and a recognition accuracy of 0.787 on BP4D. On the DISFA dataset, MTFAS achieves an average F1 score of 0.600 and a recognition accuracy of 0.909.