CREL: Causal Retrieval Evidential Learning for Robust Multimodal Sentiment Analysis with Missing Modalities

ACL ARR 2026 January Submission 2313 Authors

02 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Multimodal Robust Learning, Multimodal Sentiment Analysis, Missing Modalities
Abstract: Multimodal Sentiment Analysis (MSA) in real-world deployment often encounters missing modalities caused by sensor failures, privacy masking, or transmission loss. Most existing methods recover missing semantics through continuous completion or robustness reweighting within a closed-world correlation model; under one-to-many affective mappings this tends to produce mean-regression artifacts, and when evidence is scarce it yields overconfident softmax point estimates. To address these issues, we propose Causal Retrieval Evidential Learning (CREL), which reframes modality completion as evidence reasoning. CREL employs Retrieval-Augmented Interaction (RAI) to query discrete semantic evidence from a prototype-based Multimodal Knowledge Base (MKB), purifies the retrieved cues against spurious correlations via Causal De-confounding Adjustment (CDA), and explicitly models prediction confidence through Evidential Uncertainty Calibration (EUC), which is grounded in subjective logic and reflects evidence sufficiency. Extensive experiments on benchmark datasets show that CREL achieves state-of-the-art performance under missing modalities, and ablation studies further confirm the contribution of each component.
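The abstract's Evidential Uncertainty Calibration is said to be grounded in subjective logic, where per-class evidence parameterizes a Dirichlet distribution and the vacuity term quantifies evidence insufficiency. The paper's exact evidence head is not given here, so the sketch below only illustrates the standard subjective-logic mapping (belief b_k = e_k / S, uncertainty u = K / S with Dirichlet strength S); the `evidence` input is an assumed placeholder for whatever the network produces.

```python
import numpy as np

def subjective_logic_opinion(evidence):
    """Map non-negative per-class evidence to a subjective-logic opinion.

    Standard formulation: alpha_k = e_k + 1, S = sum(alpha),
    belief b_k = e_k / S, vacuity u = K / S, so sum(b) + u = 1.
    (Illustrative only; CREL's actual evidence head is not specified here.)
    """
    e = np.asarray(evidence, dtype=float)
    K = e.size
    alpha = e + 1.0          # Dirichlet concentration parameters
    S = alpha.sum()          # Dirichlet strength (total evidence + K)
    b = e / S                # belief mass assigned to each class
    u = K / S                # vacuity: high when total evidence is scarce
    p = alpha / S            # expected class probabilities E[Dirichlet]
    return b, u, p

# Abundant evidence -> low uncertainty; no evidence -> u = 1 (maximal vacuity).
b_rich, u_rich, _ = subjective_logic_opinion([9.0, 1.0, 0.0])
b_none, u_none, _ = subjective_logic_opinion([0.0, 0.0, 0.0])
```

Note how the uncertainty is an explicit output rather than a post-hoc statistic of the softmax: with zero evidence the opinion is fully vacuous (u = 1), which is the behavior the abstract contrasts with overconfident point estimates.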
Paper Type: Long
Research Area: Multimodality and Language Grounding to Vision, Robotics and Beyond
Research Area Keywords: Multimodal Robust Learning, Multimodal Sentiment Analysis, Missing Modalities
Languages Studied: English
Submission Number: 2313