Keywords: graph foundation models, multi-domain graph pre-training, graph transfer learning, graph prompt fine-tuning
TL;DR: We propose GRAVER, a novel generative-graph-vocabulary framework for robust GFM fine-tuning that tackles fine-tuning instability via generative augmentations with transferable patterns.
Abstract: Inspired by the remarkable success of foundation models in language and vision, Graph Foundation Models (GFMs) hold significant promise for broad applicability across diverse graph tasks and domains. However, existing GFMs struggle with unstable few-shot fine-tuning, where both performance and adaptation efficiency fluctuate significantly due to randomness in support-sample selection and structural discrepancies between the pre-trained and target graphs. The major challenge is how to fine-tune GFMs robustly and efficiently so as to enable trustworthy knowledge transfer across domains and tasks. In this paper, we propose **GRAVER**, a novel **G**enerative g**RA**ph **V**ocabulari**E**s for **R**obust GFM fine-tuning framework that tackles the aforementioned instability via generative augmentations. Specifically, to identify transferable units, we extract key class-specific subgraph patterns via ego-graph disentanglement and validate their transferability both theoretically and empirically. To enable effective pre-training across diverse domains, we leverage a universal task template based on ego-graph similarity and construct graph vocabularies via graphon-based generative experts. To facilitate robust and efficient prompt fine-tuning, we grave the support samples with in-context vocabularies, where a lightweight MoE-CoE network attentively routes knowledge from source domains. Extensive experiments on downstream few-shot node and graph classification tasks demonstrate the superiority of GRAVER in **effectiveness**, **robustness**, and **efficiency** over **15** state-of-the-art baselines.
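To make the graphon-based generative step concrete, below is a minimal sketch (not the authors' released code) of the idea the abstract describes: estimating a step-function graphon from a class's ego-graphs and sampling synthetic support graphs from it. The function names, the degree-sorting alignment, and the block-averaging estimator are all illustrative assumptions, not GRAVER's actual implementation.

```python
# Minimal sketch of graphon estimation and sampling for class-specific
# graph augmentation. Assumes small undirected ego-graphs given as square
# numpy adjacency matrices, with at least k nodes each.
import numpy as np

def estimate_step_graphon(adjs, k=8):
    """Estimate a k x k step-function graphon from a list of adjacency matrices."""
    n = min(a.shape[0] for a in adjs)           # common size across graphs
    acc = np.zeros((n, n))
    for a in adjs:
        order = np.argsort(-a.sum(axis=1))      # sort nodes by descending degree
        acc += a[np.ix_(order[:n], order[:n])]  # align graphs of different sizes
    acc /= len(adjs)
    # Block-average the aligned mean adjacency into a k x k step function.
    edges = np.linspace(0, n, k + 1).astype(int)
    W = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            W[i, j] = acc[edges[i]:edges[i + 1], edges[j]:edges[j + 1]].mean()
    return W

def sample_from_graphon(W, n_nodes, rng=None):
    """Sample an n-node undirected graph from a step-function graphon."""
    if rng is None:
        rng = np.random.default_rng()
    k = W.shape[0]
    u = rng.integers(0, k, size=n_nodes)        # latent block of each node
    P = W[np.ix_(u, u)]                         # pairwise edge probabilities
    A = (rng.random((n_nodes, n_nodes)) < P).astype(int)
    A = np.triu(A, 1)                           # keep upper triangle, no self-loops
    return A + A.T                              # symmetrize
```

Under these assumptions, synthetic graphs sampled from a per-class graphon could be mixed into the few-shot support set, reducing the variance that comes from drawing only a handful of real support samples.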
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 979