Abstract: Multimodal sentiment analysis (MSA) leverages heterogeneous data sources to interpret the complex nature of human sentiments. Despite significant progress in multimodal architecture design, the field lacks comprehensive regularization methods. This paper introduces $\mathcal{P}$owMix, a versatile embedding space regularizer that builds upon the strengths of unimodal mixing-based regularization approaches and introduces novel algorithmic components that are specifically tailored to multimodal tasks. $\mathcal{P}$owMix is integrated before the fusion stage of multimodal architectures and facilitates intra-modal mixing, such as mixing text with text, to act as a regularizer. $\mathcal{P}$owMix consists of five components: 1) a varying number of generated mixed examples, 2) mixing factor reweighting, 3) anisotropic mixing, 4) dynamic mixing, and 5) cross-modal label mixing. Extensive experimentation across benchmark MSA datasets and a broad spectrum of architectural designs demonstrates the efficacy of $\mathcal{P}$owMix, as evidenced by consistent performance improvements over baselines and existing mixing methods. An in-depth ablation study highlights the critical contribution of each $\mathcal{P}$owMix component and how they synergistically enhance performance. Furthermore, algorithmic analysis demonstrates how $\mathcal{P}$owMix behaves in different scenarios, particularly comparing early versus late fusion architectures. Notably, $\mathcal{P}$owMix enhances overall performance without sacrificing model robustness or magnifying text dominance. It also retains its strong performance in situations of limited data. Our findings position $\mathcal{P}$owMix as a promising versatile regularization strategy for MSA.
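To make the core idea of intra-modal, mixup-style regularization in embedding space concrete, here is a minimal, hypothetical sketch: it mixes pairs of same-modality embeddings (e.g., text with text) and their labels with Beta-sampled factors before fusion. The function name, signature, and default values are illustrative assumptions and do not reproduce the full $\mathcal{P}$owMix algorithm (e.g., it omits mixing factor reweighting, anisotropic and dynamic mixing, and cross-modal label mixing).

```python
import numpy as np

def intra_modal_mix(embeddings, labels, num_mixed=4, alpha=0.4, rng=None):
    """Illustrative mixup-style intra-modal mixing in embedding space.

    Generates `num_mixed` convex combinations of randomly chosen pairs of
    same-modality embeddings and their labels, with mixing factors drawn
    from Beta(alpha, alpha). (Hypothetical helper, not the authors' code.)
    """
    rng = rng or np.random.default_rng()
    n = embeddings.shape[0]
    mixed_x, mixed_y = [], []
    for _ in range(num_mixed):
        lam = rng.beta(alpha, alpha)        # mixing factor in [0, 1]
        i, j = rng.integers(0, n, size=2)   # random pair within the modality
        mixed_x.append(lam * embeddings[i] + (1 - lam) * embeddings[j])
        mixed_y.append(lam * labels[i] + (1 - lam) * labels[j])
    return np.stack(mixed_x), np.array(mixed_y)

# Toy usage: 8 text embeddings of dimension 16 with scalar sentiment labels
x = np.random.randn(8, 16)
y = np.random.uniform(-3, 3, size=8)
mx, my = intra_modal_mix(x, y, num_mixed=4)
print(mx.shape, my.shape)  # (4, 16) (4,)
```

In a full pipeline, such mixed examples would be appended to the batch of per-modality embeddings before the fusion module, so the regularization acts within each modality rather than across modalities.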