Multi-Domain Graph Foundation Models: Robust Knowledge Transfer via Topology Alignment

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We propose MDGFM, a foundation model that robustly transfers knowledge across diverse graph domains by aligning their topologies and adapting via prompting.
Abstract: Recent advances in computer vision (CV) and natural language processing (NLP) have inspired researchers to develop general-purpose graph foundation models through pre-training across diverse domains. However, a fundamental challenge arises from the substantial differences in graph topologies across domains. Moreover, real-world graphs are often sparse and prone to noisy connections and adversarial attacks. To address these issues, we propose the Multi-Domain Graph Foundation Model (MDGFM), a unified framework that aligns and leverages cross-domain topological information to facilitate robust knowledge transfer. MDGFM bridges domains by adaptively balancing features and topology while refining each input graph to remove noise and align topological structures. To further enhance knowledge transfer, we introduce an efficient prompt-tuning approach. By aligning topologies, MDGFM not only improves multi-domain pre-training but also enables robust knowledge transfer to unseen domains. Theoretical analyses provide guarantees of MDGFM's effectiveness and domain-generalization capability. Extensive experiments on both homophilic and heterophilic graph datasets validate the robustness and efficacy of our method.
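To make the topology-refinement idea concrete, here is a minimal PyTorch sketch of one plausible reading of "adaptively balancing features and topology": a learnable weight blends the observed adjacency with a feature-similarity graph, and per-node top-k sparsification drops weak, likely-noisy edges. The `TopologyRefiner` class, its `balance_logit` parameter, and the top-k rule are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

class TopologyRefiner(torch.nn.Module):
    """Hypothetical sketch: blend the observed adjacency with a
    feature-similarity graph via a learnable balance weight, then
    sparsify (top-k per node) to drop likely-noisy edges."""
    def __init__(self, k: int = 10):
        super().__init__()
        self.k = k
        # Learnable logit controlling the feature-vs-topology balance.
        self.balance_logit = torch.nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between node features, keeping positive scores.
        x_norm = F.normalize(x, dim=-1)
        feat_graph = torch.relu(x_norm @ x_norm.t())
        lam = torch.sigmoid(self.balance_logit)  # lam in (0, 1)
        blended = lam * feat_graph + (1.0 - lam) * adj
        # Keep only the k strongest neighbors per node (sparsification).
        k = min(self.k, blended.size(-1))
        topk_vals, topk_idx = blended.topk(k, dim=-1)
        refined = torch.zeros_like(blended).scatter_(-1, topk_idx, topk_vals)
        # Symmetrize and row-normalize the refined graph.
        refined = 0.5 * (refined + refined.t())
        return refined / refined.sum(dim=-1, keepdim=True).clamp(min=1e-12)

# Usage on a toy graph.
x = torch.randn(5, 16)  # node features
adj = torch.eye(5)      # observed (here trivial) adjacency
refined_adj = TopologyRefiner(k=3)(x, adj)
```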
Lay Summary: Graphs are powerful tools for representing complex systems such as social networks or scientific data. But when graphs come from very different sources (say, Facebook friendships versus academic papers), it becomes hard for machines to understand them jointly and transfer knowledge between them. Our research introduces **MDGFM** to help computers learn from multiple kinds of graphs and apply that knowledge to unfamiliar domains. Instead of relying on fixed structures, MDGFM learns to refine and align each graph's layout, removing noise and highlighting meaningful patterns. It also uses a prompting strategy to adapt what it has learned to new types of graphs (see the sketch below). This makes it more flexible and accurate on new or messy data. We test MDGFM on various graph types and find that it outperforms current methods, even when it sees very little new data or is attacked with fake connections.
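The prompting strategy can be pictured as lightweight adaptation of a frozen pre-trained encoder: only a small prompt is trained for the new domain, while all pre-trained weights stay fixed. Below is a hypothetical PyTorch sketch; `GraphPromptTuner`, `ToyEncoder`, and the additive feature-space prompt are illustrative assumptions rather than MDGFM's actual design.

```python
import torch

class GraphPromptTuner(torch.nn.Module):
    """Hypothetical sketch: adapt a frozen, pre-trained graph encoder to a
    new domain by learning only a small additive prompt on node features."""
    def __init__(self, encoder: torch.nn.Module, feat_dim: int):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():  # freeze pre-trained weights
            p.requires_grad = False
        # The only trainable parameters: a prompt shared across nodes.
        self.prompt = torch.nn.Parameter(torch.zeros(feat_dim))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        return self.encoder(x + self.prompt, adj)

# Toy frozen encoder: one graph-convolution-like layer.
class ToyEncoder(torch.nn.Module):
    def __init__(self, feat_dim: int, hid_dim: int):
        super().__init__()
        self.lin = torch.nn.Linear(feat_dim, hid_dim)

    def forward(self, x, adj):
        return torch.relu(adj @ self.lin(x))

tuner = GraphPromptTuner(ToyEncoder(16, 32), feat_dim=16)
opt = torch.optim.Adam([tuner.prompt], lr=1e-2)  # only the prompt is updated
out = tuner(torch.randn(5, 16), torch.eye(5))    # gradients reach the prompt only
```

Because only the prompt vector is optimized, adapting to a new domain is cheap and works even with very few labeled examples, which matches the few-shot setting described above.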
Primary Area: Deep Learning->Graph Neural Networks
Keywords: Graph Foundation Model; Multi-Domain Pre-training; Graph Transfer Learning; Graph Domain Generalization
Submission Number: 10695