Interpretable Multimodal Emotion Recognition using Facial Features and Physiological Signals

Published: 01 Jun 2023 · Last Modified: 10 Jun 2023 · DAI 2023 Oral Presentation
Abstract: This paper demonstrates the importance and feasibility of fusing multimodal information for emotion recognition. It introduces a multimodal framework for emotion understanding that fuses visual facial features with rPPG signals extracted from the input videos. An interpretability technique based on permutation feature importance analysis is also implemented to compute the contributions of the rPPG and visual modalities toward classifying a given input video into a particular emotion class. Experiments on the IEMOCAP dataset demonstrate that emotion classification performance improves when the complementary information from multiple modalities is combined.
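The permutation feature importance idea mentioned in the abstract can be sketched as follows: train a classifier on the concatenated features of both modalities, then shuffle one modality's feature columns across test samples and measure the drop in accuracy. A larger drop indicates a larger contribution from that modality. This is a minimal illustration with synthetic stand-in data (the feature dimensions, noise levels, and classifier choice are all assumptions, not the paper's actual setup):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 64-dim "visual" features and 16-dim "rPPG" features
# for a 4-class emotion problem. The visual modality is made more informative
# (lower noise) purely for illustration.
n = 1000
y = rng.integers(0, 4, n)
visual = y[:, None] + rng.normal(0.0, 1.0, (n, 64))
rppg = y[:, None] + rng.normal(0.0, 3.0, (n, 16))
X = np.hstack([visual, rppg])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
base = clf.score(X_te, y_te)

def modality_importance(cols, repeats=10):
    """Mean accuracy drop when the given feature columns are shuffled."""
    drops = []
    for _ in range(repeats):
        Xp = X_te.copy()
        Xp[:, cols] = rng.permutation(Xp[:, cols], axis=0)  # shuffle rows of this modality
        drops.append(base - clf.score(Xp, y_te))
    return float(np.mean(drops))

vis_imp = modality_importance(np.arange(0, 64))
rppg_imp = modality_importance(np.arange(64, 80))
print(f"baseline acc={base:.3f}, visual drop={vis_imp:.3f}, rPPG drop={rppg_imp:.3f}")
```

On this synthetic data the visual columns cause the larger accuracy drop when permuted, i.e. they contribute more to the prediction, which is the kind of per-modality attribution the abstract describes.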
