Abstract: In this paper, we propose a novel learning method for multi-task facial emotion recognition that performs multiple facial emotion recognition tasks simultaneously. Datasets in which every target (e.g., fundamental emotion, valence-arousal, and action unit intensity) is fully annotated are difficult to collect, so a technique is needed for constructing a multi-task facial emotion recognition model from multiple datasets, each annotated with only a single target. To this end, previous studies have introduced a method called multi-task self-training, in which multi-task learning is performed by supplementing the missing targets with pseudo-targets generated by single-task models. However, these pseudo-targets are often not precise enough for effective multi-task learning. To address this problem, our proposed method, called born-again multi-task self-training, refines the pseudo-targets through iterative born-again steps of the multi-task model; that is, the pseudo-targets are regenerated using the pre-trained multi-task model. In addition, to strengthen the born-again steps, the proposed method temporarily creates single-task models by fine-tuning the pre-trained multi-task model. These temporary single-task models regenerate more precise pseudo-targets. In experiments on three facial emotion recognition tasks, we demonstrate that the proposed method outperforms conventional multi-task self-training.
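The training loop described above can be sketched in simplified form. The snippet below is a hedged illustration, not the paper's implementation: it uses toy one-dimensional linear regressors in place of neural networks, and all function names (`fit_single_task`, `fill_pseudo_targets`, `born_again_self_training`) and the blending weight in the fine-tuning step are assumptions made for illustration.

```python
# Toy sketch of born-again multi-task self-training.
# Each dataset is annotated for exactly one task; missing targets are
# filled with pseudo-targets that are refined over born-again steps.

TASKS = ["emotion", "valence_arousal", "au_intensity"]

def fit_single_task(data):
    """Least-squares slope for y = w * x on one task's labeled pairs."""
    num = sum(x * y for x, y in data)
    den = sum(x * x for x, _ in data) or 1.0
    return num / den

def fill_pseudo_targets(datasets, models):
    """Keep each sample's real label for its own task and fill the
    other tasks with pseudo-targets predicted by `models`."""
    merged = []
    for task, data in datasets.items():
        for x, y in data:
            targets = {t: (y if t == task else models[t] * x) for t in TASKS}
            merged.append((x, targets))
    return merged

def fit_multi_task(merged):
    """Fit one slope per task on the pseudo-completed dataset."""
    return {t: fit_single_task([(x, tg[t]) for x, tg in merged])
            for t in TASKS}

def born_again_self_training(datasets, steps=3):
    # Step 0: single-task models trained on their own labeled data only,
    # as in conventional multi-task self-training.
    models = {t: fit_single_task(d) for t, d in datasets.items()}
    multi = None
    for _ in range(steps):
        merged = fill_pseudo_targets(datasets, models)
        multi = fit_multi_task(merged)
        # Born-again step: temporary single-task models obtained by
        # "fine-tuning" the multi-task model on each task's real labels
        # (modeled here as a simple blend; the weight is illustrative).
        models = {t: 0.5 * fit_single_task(datasets[t]) + 0.5 * multi[t]
                  for t in TASKS}
    return multi
```

With noise-free toy data whose true per-task slopes are known, the loop recovers those slopes; with noisy pseudo-targets, the repeated regeerate-and-retrain steps are what the method relies on to improve precision.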