Ada2I: Enhancing Modality Balance for Multimodal Conversational Emotion Recognition

Published: 20 Jul 2024 · Last Modified: 21 Jul 2024 · MM 2024 Poster · CC BY 4.0
Abstract: Multimodal Emotion Recognition in Conversations (ERC) is a representative multimodal learning task that exploits several data modalities concurrently. Prior work on multimodal ERC struggles to address modality imbalance and to optimize learning across modalities. To tackle these problems, we present Ada2I, a novel framework consisting of two inseparable modules, Adaptive Feature Weighting (AFW) and Adaptive Modality Weighting (AMW), which balance the modalities at the feature level and the modality level, respectively, by leveraging both inter- and intra-modal interactions. In addition, we introduce a refined disparity ratio as part of our training optimization strategy: a simple yet effective measure of how unevenly the model learns when handling multiple modalities simultaneously. Experiments on three benchmark datasets, IEMOCAP, MELD, and CMU-MOSEI, show that Ada2I achieves state-of-the-art performance against strong baselines, particularly in addressing modality imbalance.
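Since the page does not spell out the AFW/AMW formulations, the sketch below illustrates one plausible reading of adaptive modality weighting combined with a disparity-ratio penalty in PyTorch. All names (AdaptiveModalityWeighting, disparity_ratio), feature dimensions, and the exact ratio definition are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (assumed, not the authors' code): input-dependent modality
    # weights for fusion, plus a disparity-ratio penalty that discourages one
    # modality from dominating the fused representation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AdaptiveModalityWeighting(nn.Module):
        """Fuses per-modality utterance features with learned, input-dependent weights."""
        def __init__(self, dims, num_classes, hidden=128):
            super().__init__()
            # Project text/audio/visual features into a shared space.
            self.proj = nn.ModuleList([nn.Linear(d, hidden) for d in dims])
            # One scalar score per modality; a softmax over scores gives fusion weights.
            self.score = nn.ModuleList([nn.Linear(hidden, 1) for _ in dims])
            self.head = nn.Linear(hidden, num_classes)

        def forward(self, feats):
            hs = [torch.tanh(p(x)) for p, x in zip(self.proj, feats)]
            scores = torch.cat([s(h) for s, h in zip(self.score, hs)], dim=-1)
            w = F.softmax(scores, dim=-1)                      # (batch, n_modalities)
            fused = sum(w[:, i:i + 1] * h for i, h in enumerate(hs))
            return self.head(fused), w

    def disparity_ratio(weights, eps=1e-8):
        """One plausible reading of a 'disparity ratio': mean max/min fusion weight.
        A perfectly balanced model yields a ratio of ~1; larger values mean one
        modality dominates the fusion."""
        return (weights.max(dim=-1).values / (weights.min(dim=-1).values + eps)).mean()

    # Usage: penalize imbalance alongside the task loss (dims are placeholders).
    model = AdaptiveModalityWeighting(dims=[768, 100, 512], num_classes=6)
    text, audio, visual = torch.randn(4, 768), torch.randn(4, 100), torch.randn(4, 512)
    logits, w = model([text, audio, visual])
    loss = F.cross_entropy(logits, torch.randint(0, 6, (4,))) \
           + 0.1 * (disparity_ratio(w) - 1.0)
    loss.backward()

Under these assumptions, the penalty term pushes the fusion weights toward uniformity, which is one simple way a training optimization strategy could use such a ratio to rebalance learning across modalities.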
Primary Subject Area: [Content] Multimodal Fusion
Secondary Subject Area: [Engagement] Emotional and Social Signals
Relevance To Conference: This work contributes to multimedia and multimodal processing by introducing a framework that addresses modality imbalance in Multimodal Emotion Recognition in Conversations (ERC). By integrating intra-modal representations with inter-modal balancing mechanisms, our approach strengthens multimodal processing. The Adaptive Feature Weighting (AFW) and Adaptive Modality Weighting (AMW) modules optimize the representations and learning weights across modalities, improving the model's ability to recognize emotions from diverse sources such as text, audio, and visual cues. Furthermore, the refined disparity ratio in our training optimization strategy enables a comprehensive evaluation of the model's learning dynamics, facilitating the understanding and comparison of multimodal approaches. Extensive experiments on the IEMOCAP, MELD, and CMU-MOSEI benchmarks demonstrate state-of-the-art performance, particularly in addressing modality imbalance. Overall, this work offers practical solutions to the challenges that modality imbalance poses for ERC tasks.
Supplementary Material: zip
Submission Number: 5299