Dynamic Tuning and Multi-Task Learning Based Model for Multimodal Sentiment Analysis

ACL ARR 2024 June Submission795 Authors

13 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Multimodal sentiment analysis aims to uncover human affective states by integrating data from multiple sensory sources. However, previous studies have focused on optimizing the model architecture while neglecting the impact of the objective function on model performance. To address this, we introduce a new framework, DMMSA, which integrates unimodal and multimodal sentiment analysis tasks to exploit the intrinsic correlation of sentiment signals and enhance the model's understanding of complex sentiments. DMMSA further reduces task complexity by incorporating coarse-grained sentiment analysis, and embeds a contrastive learning mechanism within each modality to strengthen the ability to distinguish between similar and dissimilar features. We conducted experiments on CH-SIMS, MOSI, and MOSEI. With the model structure unchanged and only the optimization objective replaced, DMMSA outperformed the baseline methods on both classification and regression tasks.
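To make the described objective concrete, the sketch below shows one plausible way such a multi-task loss could be assembled: a multimodal regression term, unimodal auxiliary terms, a coarse-grained polarity classification term, and an intra-modal contrastive term. This is not the authors' implementation; the function names, loss weights, label usage, and the supervised-contrastive formulation are all illustrative assumptions.

```python
# Hedged sketch of a DMMSA-style multi-task objective (assumed, not the paper's code).
import torch
import torch.nn.functional as F

def intra_modal_contrastive(features, polarity, temperature=0.1):
    """Assumed supervised contrastive loss within one modality: samples sharing
    a coarse polarity label are pulled together, all others pushed apart."""
    z = F.normalize(features, dim=-1)
    sim = z @ z.t() / temperature                        # (B, B) pairwise similarities
    mask = polarity.unsqueeze(0).eq(polarity.unsqueeze(1)).float()
    mask.fill_diagonal_(0)                               # exclude self-pairs as positives
    logits = sim - torch.eye(len(z), device=z.device) * 1e9  # mask self-similarity
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    denom = mask.sum(1).clamp(min=1)
    return -(mask * log_prob).sum(1).div(denom).mean()

def dmmsa_style_loss(preds, labels, feats, weights=(1.0, 0.5, 0.5, 0.1)):
    """preds: multimodal and per-modality regression outputs plus coarse polarity
    logits; labels: continuous sentiment score and coarse polarity class;
    feats: per-modality features of shape (B, D). Weights are placeholders."""
    w_multi, w_uni, w_coarse, w_con = weights
    loss = w_multi * F.l1_loss(preds["multi"], labels["score"])
    for m in ("text", "audio", "vision"):                # unimodal auxiliary tasks
        loss = loss + w_uni * F.l1_loss(preds[m], labels["score"])
    loss = loss + w_coarse * F.cross_entropy(preds["coarse"], labels["polarity"])
    for m in ("text", "audio", "vision"):                # intra-modal contrastive term
        loss = loss + w_con * intra_modal_contrastive(feats[m], labels["polarity"])
    return loss

# Toy usage with random tensors, batch size 8, feature dim 32
B, D = 8, 32
preds = {k: torch.randn(B) for k in ("multi", "text", "audio", "vision")}
preds["coarse"] = torch.randn(B, 3)
labels = {"score": torch.randn(B), "polarity": torch.randint(0, 3, (B,))}
feats = {m: torch.randn(B, D) for m in ("text", "audio", "vision")}
print(dmmsa_style_loss(preds, labels, feats))
```

Reusing the multimodal score as the unimodal target is an assumption made here for self-containment; datasets such as CH-SIMS provide separate unimodal labels that could be substituted directly.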
Paper Type: Long
Research Area: Sentiment Analysis, Stylistic Analysis, and Argument Mining
Research Area Keywords: applications; sentiment analysis;
Contribution Types: NLP engineering experiment, Theory
Languages Studied: English, Chinese
Submission Number: 795