Decoding Emotional Silences: Reliable Multimodal Sentiment Analysis with Bipolar Uncertainty

Published: 2025 · Last Modified: 21 Jan 2026 · ICME 2025 · CC BY-SA 4.0
Abstract: Multimodal sentiment analysis is critical in many real-world applications, such as smart cities, healthcare, and human-computer interaction, where sentiment is conveyed through multiple modalities including text, audio, and video. However, existing methods still struggle to mitigate the impact of random modality loss, particularly in preserving the reliability of learned emotional patterns. To address these limitations, we introduce UniMSA, a novel framework for multimodal sentiment analysis with missing modalities. UniMSA tackles missing data through bipolar emotional uncertainty learning: it integrates both positive and negative emotional uncertainty estimates to recover emotional patterns in randomly missing modalities, thereby improving the reliability of sentiment predictions. Extensive experiments on large-scale multimodal sentiment datasets demonstrate the effectiveness of UniMSA compared with state-of-the-art methods.
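The abstract does not specify UniMSA's architecture, so the sketch below is purely illustrative: it shows one plausible shape of the two ideas the abstract names, random modality loss (dropping one or more of text/audio/video at random) and a "bipolar" fusion that combines positive and negative evidence into a sentiment score with an attached uncertainty. All function names and the subjective-logic-style fusion rule are assumptions, not the paper's actual method.

```python
# Illustrative sketch only -- not UniMSA's published implementation.
import numpy as np

rng = np.random.default_rng(0)

def random_modality_mask(n_modalities=3, p_missing=0.3, rng=rng):
    """Simulate random modality loss over (text, audio, video),
    always keeping at least one modality present."""
    mask = rng.random(n_modalities) > p_missing
    if not mask.any():
        mask[rng.integers(n_modalities)] = True
    return mask

def bipolar_fuse(pos_evidence, neg_evidence, mask):
    """Toy 'bipolar' fusion: average positive and negative evidence over
    the modalities that survived the mask. Uncertainty shrinks as total
    evidence grows (a subjective-logic-style heuristic, assumed here)."""
    pos = pos_evidence[mask].mean()
    neg = neg_evidence[mask].mean()
    uncertainty = 2.0 / (pos + neg + 2.0)        # in (0, 1]
    sentiment = (pos - neg) / (pos + neg + 2.0)  # signed score in (-1, 1)
    return sentiment, uncertainty

# Stand-in per-modality evidence; a real model would predict these.
mask = random_modality_mask()
pos = rng.uniform(0, 5, size=3)
neg = rng.uniform(0, 5, size=3)
score, u = bipolar_fuse(pos, neg, mask)
print(f"present modalities: {mask}, sentiment={score:+.2f}, uncertainty={u:.2f}")
```

The key property the sketch tries to convey is that predictions made from fewer or weaker surviving modalities carry less total evidence and therefore a larger uncertainty, which is the sense in which uncertainty estimation can make sentiment analysis under modality loss more reliable.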