Improving Multimodal fusion via Mutual Dependency Maximisation

17 Mar 2021 (modified: 31 Aug 2021) | OpenReview Anonymous Preprint Blind Submission | Readers: Everyone
Keywords: NLP, multimodal learning, sentiment analysis, mutual information
Abstract: Multimodal sentiment analysis is a trending area of research, and multimodal fusion is one of its most active topics. Acknowledging that humans communicate through a variety of channels (i.e., visual, acoustic, linguistic), multimodal systems aim at integrating these different unimodal representations into one synthetic representation. So far, a considerable effort has been made on developing complex architectures allowing the fusion of these modalities. However, such systems are mainly trained by minimising simple losses such as $L_1$ or cross-entropy. In this work, we investigate unexplored penalties and propose a set of new objectives that measure the dependency between modalities. We demonstrate that our new penalties lead to a consistent improvement (up to $4.3$ on accuracy) across a large variety of state-of-the-art models on two well-known sentiment analysis datasets: \texttt{CMU-MOSI} and \texttt{CMU-MOSEI}. Our method not only achieves a new SOTA on both datasets but also produces representations that are more robust to modality drops. Finally, a by-product of our methods includes a statistical network which can be used to interpret the high-dimensional representations learnt by the model.
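
The abstract does not spell out the exact objectives, but a common way to "measure the dependency between modalities" with a statistical network is a MINE-style Donsker-Varadhan lower bound on mutual information, maximised alongside the task loss. The sketch below is a hypothetical PyTorch illustration under that assumption, not the paper's actual method; `StatisticsNetwork`, `mutual_dependency_lower_bound`, the embedding sizes, and the weight `0.1` are all made up for illustration.

```python
# Hypothetical sketch: a mutual-dependency penalty between two modality
# embeddings via a MINE-style Donsker-Varadhan lower bound. The paper's
# actual objectives are not given in the abstract; this is an assumption.
import math
import torch
import torch.nn as nn


class StatisticsNetwork(nn.Module):
    """Scores pairs of modality embeddings; true pairs should score higher."""

    def __init__(self, dim_a: int, dim_b: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_a + dim_b, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([a, b], dim=-1)).squeeze(-1)


def mutual_dependency_lower_bound(stat_net, a, b):
    """Donsker-Varadhan estimate: E_joint[T] - log E_marginals[exp(T)]."""
    joint = stat_net(a, b).mean()
    # Shuffle one modality across the batch to approximate the product of marginals.
    b_shuffled = b[torch.randperm(b.size(0))]
    marginal = torch.logsumexp(stat_net(a, b_shuffled), dim=0) - math.log(b.size(0))
    return joint - marginal


if __name__ == "__main__":
    # Toy text/audio embeddings; in practice these come from the fusion model.
    text_emb, audio_emb = torch.randn(32, 64), torch.randn(32, 32)
    stat_net = StatisticsNetwork(64, 32)
    task_loss = torch.tensor(0.0)  # placeholder for the usual L1 / cross-entropy loss
    mi_estimate = mutual_dependency_lower_bound(stat_net, text_emb, audio_emb)
    total_loss = task_loss - 0.1 * mi_estimate  # 0.1 is an assumed penalty weight
    total_loss.backward()
```

Because the statistics network learns to score how well modality embeddings go together, it can also be probed after training, which is one plausible reading of the abstract's claim that it helps interpret the learnt representations.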