Graph4MM: Weaving Multimodal Learning with Structural Information

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · License: CC BY 4.0
Abstract: Real-world multimodal data usually exhibit complex structural relationships beyond traditional one-to-one mappings like image-caption pairs. Entities across modalities interact in intricate ways, with images and text forming diverse interconnections through contextual dependencies and co-references. Graphs provide powerful structural information for modeling intra-modal and inter-modal relationships. However, prior work fails to distinguish multi-hop neighbors and treats the graph as a standalone modality, which fragments the overall understanding. This limitation presents two key challenges in multimodal learning: (1) integrating structural information from multi-hop neighbors into foundation models, and (2) fusing modality-specific information in a principled manner. To address these challenges, we revisit the role of graphs in multimodal learning in the era of foundation models and propose Graph4MM, a graph-based multimodal learning framework. Specifically, we introduce Hop-Diffused Attention, which integrates multi-hop structural information into self-attention through causal masking and hop diffusion. Furthermore, we design MM-QFormer, a multi-mapping querying transformer for cross-modal fusion. Through theoretical and empirical analysis, we show that leveraging structure to integrate both intra- and inter-modal interactions improves multimodal understanding beyond treating the graph as a standalone modality. Experiments on both generative and discriminative tasks show that Graph4MM outperforms larger VLMs, LLMs, and multimodal graph baselines, achieving a 6.93% average improvement.
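To make the idea of injecting multi-hop structure into self-attention concrete, below is a minimal sketch of one plausible realization: compute hop distances on the multimodal graph and turn them into an additive attention bias that decays with hop count, applied alongside a causal mask. This is an illustration under stated assumptions, not the paper's released implementation; the exponential decay form, the parameter alpha, and the helper names hop_distances and hop_diffused_bias are assumptions introduced here.

```python
# Illustrative sketch only: the hop-decay form (alpha ** hop) and all names below are
# assumptions for exposition, not the authors' Hop-Diffused Attention code.
import torch


def hop_distances(adj: torch.Tensor, max_hops: int) -> torch.Tensor:
    """Shortest-path hop count between nodes, capped at max_hops (inf if unreachable)."""
    adj_f = adj.float()
    n = adj_f.size(0)
    dist = torch.full((n, n), float("inf"))
    dist[torch.eye(n, dtype=torch.bool)] = 0.0
    reached = torch.eye(n, dtype=torch.bool)   # pairs already assigned a distance
    frontier = adj_f.bool()                    # pairs connected by a walk of length k
    for k in range(1, max_hops + 1):
        newly = frontier & ~reached            # first time this pair becomes reachable
        dist[newly] = float(k)
        reached |= frontier
        frontier = (frontier.float() @ adj_f).bool()
    return dist


def hop_diffused_bias(adj: torch.Tensor, alpha: float = 0.5, max_hops: int = 3) -> torch.Tensor:
    """Additive attention bias: log(alpha ** hop) for reachable pairs, -inf otherwise."""
    dist = hop_distances(adj, max_hops)
    weight = torch.where(dist.isfinite(), alpha ** dist, torch.zeros_like(dist))
    bias = torch.where(weight > 0, weight.log(), torch.full_like(weight, float("-inf")))
    # Added to raw attention scores before softmax, together with the usual causal mask
    # when the backbone is an autoregressive LLM.
    return bias
```

As a usage note, for a token sequence aligned with graph nodes one would add this bias (expanded to token granularity) to the pre-softmax attention logits, so that direct neighbors attend strongly, multi-hop neighbors attend with diffused weight, and disconnected pairs are masked out.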
Lay Summary: We often teach AI to understand images or text separately, or to match them in simple one-to-one ways, like pairing a product photo with its caption. But real-world content — like webpages, research papers, or shopping platforms — is much more complex. Images and texts are connected in many-to-many ways, across pages and sections. We built a new system called Graph4MM that uses graphs to help AI understand these complex structures. Each piece of content (an image, a paragraph, or a caption) becomes a node, and we connect them based on their relationships. Then, we introduce a technique called Hop-Diffused Attention that teaches the AI to reason across not just direct links, but also multi-step connections. This helps AI better understand and generate information from rich, structured content. On tasks like summarizing web content or classifying products without prior labels, Graph4MM outperforms even much larger models. Our work shows that bringing structure into foundation models can make them perform better.
Primary Area: Deep Learning->Graph Neural Networks
Keywords: Multi-modal Learning, Large Language Models, Graph Neural Networks
Submission Number: 8515