Abstract: Multimodal emotion recognition offers great potential for the comprehensive assessment of human emotions by leveraging multiple modalities that often exhibit complementary relationships. In video-based emotion recognition, audio and visual modalities have emerged as prominent contact-free channels and are widely explored in the existing literature. Current approaches typically employ cross-modal attention between the audio and visual modalities, assuming a constant state of complementarity. This assumption does not always hold, however: non-complementary relationships can also arise, undermining cross-modal feature integration and thereby degrading the quality of the audio-visual feature representations. To tackle this problem, we introduce a novel Incongruity-Aware Cross-Attention (IACA) model that harnesses strong complementary relationships while efficiently handling non-complementary scenarios. Specifically, our approach incorporates a two-stage gating mechanism that adaptively selects semantic features, thereby effectively capturing inter-modal associations. The proposed model can also mitigate the adverse effects of severely corrupted or missing modalities. We evaluate the model through extensive experiments on the challenging RECOLA and Aff-Wild2 datasets. The results underscore the efficacy of our approach, which outperforms state-of-the-art methods by capturing inter-modal relationships and minimizing the influence of missing or heavily corrupted modalities. Furthermore, we show that the proposed model is compatible with various cross-modal attention variants, consistently improving performance on both datasets.
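To make the idea of gated cross-modal fusion concrete, below is a minimal sketch in PyTorch of a single gated cross-attention block that attends audio queries to visual keys/values and then gates between the cross-attended output and the original unimodal features. The class and parameter names (GatedCrossAttention, d_model, n_heads) and the single sigmoid gate are illustrative assumptions; the paper's actual two-stage gating design is not reproduced here.

```python
# Hypothetical sketch of gated audio-visual cross-attention; not the paper's exact IACA architecture.
import torch
import torch.nn as nn


class GatedCrossAttention(nn.Module):
    """Cross-attend audio to visual features, then gate between the
    cross-attended output and the unimodal audio features so that
    non-complementary (incongruent) visual input can be down-weighted."""

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Gate computed from the concatenation of unimodal and cross-attended features.
        self.gate = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.Sigmoid())

    def forward(self, audio: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        # audio, visual: (batch, seq_len, d_model)
        attended, _ = self.cross_attn(query=audio, key=visual, value=visual)
        g = self.gate(torch.cat([audio, attended], dim=-1))
        # g -> 1 keeps the cross-attended (complementary) features;
        # g -> 0 falls back to the unimodal audio features (e.g., when the
        # visual stream is incongruent, heavily corrupted, or missing).
        return g * attended + (1.0 - g) * audio


if __name__ == "__main__":
    a = torch.randn(2, 50, 256)   # audio feature sequence
    v = torch.randn(2, 50, 256)   # visual feature sequence
    fused = GatedCrossAttention()(a, v)
    print(fused.shape)            # torch.Size([2, 50, 256])
```

A symmetric block with visual queries and audio keys/values would typically be used in parallel, and the same gating idea applies to other cross-modal attention variants.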