Multimodal dual perception fusion framework for multimodal affective analysis

Published: 01 Jan 2025, Last Modified: 14 May 2025. Information Fusion, 2025. License: CC BY-SA 4.0.
Abstract: Highlights
• We propose a unified multimodal framework that streamlines multimodal classification.
• Image description knowledge is generated to enrich the multimodal semantic space.
• A contrastive learning objective is designed to capture the underlying semantics and cross-modal interaction.
• A multimodal dual perception module is developed to model congruity and incongruity between modalities.
• Experimental results and analyses demonstrate the superiority of our model.
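The highlights do not specify the exact form of the contrastive objective, so the following is only a minimal sketch of a common choice for aligning two modalities: a symmetric InfoNCE-style loss over paired image and text embeddings. All names (`info_nce_loss`, `temperature`) and the batch-diagonal pairing assumption are illustrative, not taken from the paper.

```python
import numpy as np

def info_nce_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE contrastive loss for a batch of matched
    (image, text) embedding pairs; row i of each array is a pair."""
    # L2-normalize so dot products become cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N) similarity matrix
    n = len(img)                        # matched pairs lie on the diagonal

    def xent_diag(l):
        # cross-entropy with the diagonal as the positive class
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # average the image-to-text and text-to-image directions
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
b = rng.normal(size=(4, 8))
loss_aligned = info_nce_loss(a, a)  # perfectly matched pairs -> small loss
loss_random = info_nce_loss(a, b)   # unrelated pairs -> larger loss
```

Minimizing such a loss pulls each image embedding toward its paired text embedding and pushes it away from the other texts in the batch, which is one standard way to realize the "underlying semantics and interaction" alignment the highlights describe.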
