AutoGFM: Automated Graph Foundation Model with Adaptive Architecture Customization

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 Oral · CC BY 4.0
TL;DR: We explore graph neural architecture search for GNN-based GFMs, aiming to automatically search for customized GNN architectures tailored to data from different domains and tasks.
Abstract: Graph foundation models (GFMs) aim to share graph knowledge across diverse domains and tasks to boost graph machine learning. However, existing GFMs rely on hand-designed, fixed graph neural network (GNN) architectures and thus fail to utilize the optimal architecture *w.r.t.* each specific domain and task, inevitably leading to suboptimal performance. In this paper, we explore graph neural architecture search (GNAS) for GFMs for the first time, which suffers from the problem of *architecture inconsistency*, i.e., the optimal architectures for different tasks and domains vary. We tackle this problem by discovering an invariant graph-architecture relationship across domains and tasks, which poses three challenges: i) how to capture invariant and variant patterns; ii) how to customize architectures to adapt to diverse domains and tasks; iii) how to mitigate the data domination phenomenon during the architecture search process. To address these challenges, we propose **Auto**mated **G**raph **F**oundation **M**odel with Adaptive Architecture Customization (**AutoGFM**), together with a theoretical analysis demonstrating the limitations of existing GNAS. Specifically, we first propose a disentangled contrastive graph encoder to learn invariant and variant patterns. Then, we design an invariant-guided architecture customization strategy to customize architectures for data from diverse domains and tasks. Finally, we propose a curriculum architecture customization mechanism to mitigate the phenomenon of particular data dominating the search process. Extensive experiments demonstrate that **AutoGFM** outperforms baselines, achieving state-of-the-art performance.
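To make the abstract's core idea concrete, below is a minimal, self-contained sketch (not the authors' code) of what invariant-guided architecture customization could look like: an encoder splits a graph representation into invariant and variant parts, and the invariant part predicts per-graph mixture weights over candidate GNN operations. The class names, the candidate-operation set, and the dense-adjacency message passing are illustrative assumptions rather than the paper's actual implementation.

```python
# Illustrative sketch of invariant-guided architecture customization (assumed design,
# not the AutoGFM implementation). Requires only PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CandidateOp(nn.Module):
    """One candidate message-passing operation over a dense adjacency matrix."""

    def __init__(self, dim: int, mode: str):
        super().__init__()
        self.mode = mode  # 'mean', 'sum', or 'skip' (assumed candidate-op set)
        self.lin = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        if self.mode == "mean":
            deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
            h = adj @ x / deg          # mean aggregation over neighbors
        elif self.mode == "sum":
            h = adj @ x                # sum aggregation
        else:
            h = x                      # skip connection, no aggregation
        return F.relu(self.lin(h))


class DisentangledEncoder(nn.Module):
    """Splits a pooled graph embedding into invariant and variant parts."""

    def __init__(self, dim: int):
        super().__init__()
        self.inv_head = nn.Linear(dim, dim)
        self.var_head = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor):
        g = (adj @ x).mean(dim=0)      # crude one-hop pooling to a graph-level vector
        return self.inv_head(g), self.var_head(g)


class InvariantGuidedCell(nn.Module):
    """Mixes candidate ops with weights predicted from the invariant embedding."""

    def __init__(self, dim: int, modes=("mean", "sum", "skip")):
        super().__init__()
        self.ops = nn.ModuleList(CandidateOp(dim, m) for m in modes)
        self.router = nn.Linear(dim, len(modes))  # invariant embedding -> op logits

    def forward(self, x: torch.Tensor, adj: torch.Tensor, inv: torch.Tensor) -> torch.Tensor:
        alpha = F.softmax(self.router(inv), dim=-1)           # per-graph architecture weights
        outs = torch.stack([op(x, adj) for op in self.ops])   # (num_ops, N, dim)
        return torch.einsum("o,ond->nd", alpha, outs)         # weighted mixture of candidate ops


if __name__ == "__main__":
    torch.manual_seed(0)
    n, d = 6, 16
    x = torch.randn(n, d)                   # toy node features
    adj = (torch.rand(n, n) > 0.5).float()  # toy dense adjacency
    enc, cell = DisentangledEncoder(d), InvariantGuidedCell(d)
    inv, var = enc(x, adj)
    out = cell(x, adj, inv)
    print(out.shape)                        # torch.Size([6, 16])
```

In this sketch, graphs from different domains yield different invariant embeddings and therefore different operation mixtures, which is the intuition behind customizing the architecture per domain and task; the paper's disentangled contrastive objective and curriculum mechanism are not reproduced here.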
Lay Summary: We want to make it easier for a graph neural network (GNN) to learn from many different kinds of graph data, such as social networks, molecules, or recommendation systems, without having to start from scratch each time. However, current methods often rely on fixed GNN architectures that don’t perform equally well across different tasks and data types. To address this, we propose a method called AutoGFM, which automatically adapts the GNN architecture based on the specific graph and task. Our method learns shared knowledge across different domains and tasks through a parameter-sharing model, while dynamically combining modular components to suit the unique needs of each individual case. This approach enables researchers to more easily use a single GNN across a wide variety of graph data, making graph learning more flexible, efficient, and accessible.
Primary Area: Deep Learning->Graph Neural Networks
Keywords: Graph Foundation Model, Text Attributed Graph, Graph Neural Architecture Search, Graph Neural Networks
Submission Number: 579