Abstract: The rise of social media and the exponential growth of multimodal communication necessitate advanced techniques for Multimodal Information Extraction (MIE). However, existing methodologies primarily rely on direct Image-Text interactions, a paradigm that often faces significant challenges due to semantic and modality gaps between images and text. In this paper, we introduce a new paradigm of Image-Context-Text interaction, in which large multimodal models (LMMs) are used to generate descriptive textual context that bridges these gaps. In line with this paradigm, we propose a novel Shapley Value-based Contrastive Alignment (Shap-CA) method, which aligns both context-text and context-image pairs. Shap-CA first applies the Shapley value concept from cooperative game theory to assess the individual contribution of each element in the set of contexts, texts, and images toward the total semantic and modality overlaps. Following this quantitative evaluation, a contrastive learning strategy is employed to enhance the interactive contribution within context-text/image pairs while minimizing the influence across these pairs. Furthermore, we design an adaptive fusion module for selective cross-modal fusion. Extensive experiments across four MIE datasets demonstrate that our method significantly outperforms existing state-of-the-art methods. Code will be released upon acceptance.
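As a rough illustration of the alignment objective described above, the sketch below estimates Shapley-style contributions of batch elements and plugs them into an InfoNCE-style contrastive loss. It assumes a summed-cosine-similarity characteristic function and exact enumeration over permutations (feasible only for tiny batches); all function names (`coalition_value`, `shapley_contributions`, `shap_contrastive_loss`) are hypothetical and not taken from the paper.

```python
import itertools
import math
import torch
import torch.nn.functional as F

def coalition_value(ctx_subset, txt):
    # Value of a coalition of context embeddings for one text embedding:
    # summed cosine similarity (an assumed characteristic function).
    return F.cosine_similarity(ctx_subset, txt.unsqueeze(0), dim=-1).sum()

def shapley_contributions(ctx, txt):
    # Exact Shapley value of each context embedding toward the overlap with
    # one text embedding. Exponential in n; a Monte-Carlo estimate would be
    # needed for realistic batch sizes.
    n = ctx.size(0)
    phi = torch.zeros(n, device=ctx.device)
    for perm in itertools.permutations(range(n)):
        prev, members = torch.tensor(0.0, device=ctx.device), []
        for i in perm:
            members.append(i)
            cur = coalition_value(ctx[members], txt)
            phi[i] += cur - prev  # marginal contribution of element i
            prev = cur
    return phi / math.factorial(n)

def shap_contrastive_loss(ctx, txt):
    # Treat the contribution of context j to text i as a logit, pulling the
    # matched pair (i == j) together and pushing mismatched pairs apart.
    logits = torch.stack([shapley_contributions(ctx, t) for t in txt])  # (n, n)
    labels = torch.arange(ctx.size(0), device=ctx.device)
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings (batch of 4, dimension 256):
ctx, txt = torch.randn(4, 256), torch.randn(4, 256)
loss = shap_contrastive_loss(ctx, txt)
```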
Primary Subject Area: [Content] Multimodal Fusion
Secondary Subject Area: [Content] Vision and Language
Relevance To Conference: This paper presents an innovative approach to Multimodal Information Extraction (MIE) by pioneering the Image-Context-Text interaction paradigm. Instead of directly aligning images and text, which often results in inconsistencies because semantic and modality gaps coexist, this paradigm leverages large multimodal models to generate descriptive textual context that serves as an intermediary bridging the semantic and modality gaps between images and text. This approach is inherently multimodal, as it involves the fusion and understanding of both textual and visual data.
Furthermore, we introduce a novel method, Shapley Value-based Contrastive Alignment (Shap-CA), which aligns both context-text and context-image pairs. Shap-CA determines the individual contribution of each element in the set of contexts, texts, and images toward the total semantic and modality overlaps, and then uses a contrastive learning strategy to maximize the contributions of relevant pairs while minimizing those of irrelevant ones. This method not only enhances the understanding of multimodal content but also optimizes the representation learning process.
Lastly, we design an adaptive fusion module for selective cross-modal fusion, which further emphasizes the multimodal nature of our work. The proposed method significantly outperforms existing state-of-the-art methods on four MIE datasets, demonstrating its effectiveness and potential for enhancing MIE.
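To make the fusion step concrete, below is a minimal, self-contained sketch of one plausible gated cross-modal fusion design, assuming cross-attention from text to image features followed by a learned sigmoid gate; the class name, hyperparameters, and overall layout are illustrative assumptions, not the paper's actual module.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    # Illustrative gated cross-modal fusion: a learned gate decides how much
    # attended visual information to mix into each text token.
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, text_feats, image_feats):
        # text_feats: (B, Lt, D), image_feats: (B, Lv, D)
        attended, _ = self.cross_attn(text_feats, image_feats, image_feats)
        g = self.gate(torch.cat([text_feats, attended], dim=-1))
        return text_feats + g * attended  # selectively inject visual signal
```

In such a design, the gate can suppress visual evidence when the generated textual context already covers it, which is one way "selective" cross-modal fusion could be realized.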
Supplementary Material: zip
Submission Number: 3607