GFMate: Empowering Graph Foundation Models with Pre-training-agnostic Test-time Prompt Tuning

08 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Prompt Tuning, Graph Foundation Models, Test Time Prompt Tuning
Abstract: Graph prompt tuning has shown great potential in graph learning by introducing trainable prompts that enhance model performance in conventional single-domain scenarios. Recent research has extended graph prompt methods to Graph Foundation Models (GFMs), aiming to improve their cross-domain generalisability from source domains to an unseen target domain by tuning auxiliary prompts on few-shot samples. Despite this progress, most existing GFM prompt methods embed domain-specific information from the source domains into prompts, which either serve as input to GFMs or are encoded during GFM pre-training. This entanglement of prompts with specific source domains and a particular GFM pre-training strategy restricts their generalisability to target domains and to different GFMs. Furthermore, existing methods rely solely on few-shot data for prompt tuning, neglecting the rich information in unlabelled target-domain test data. Motivated by these insights, this paper aims to empower GFMs with a pre-training-agnostic test-time graph prompt tuning framework, named GFMate. GFMate introduces a centroid prompt and a layer prompt applied on target domains after pre-training, avoiding entanglement with the source domains and the model's pre-training. In addition, a test-time complementary learning objective is devised to exploit both labelled and unlabelled target-domain data for effective test-time prompt tuning. Extensive experiments on 12 benchmark datasets across diverse domains demonstrate the superior performance and efficiency of GFMate, with improvements of up to 30.63%. Code will be released upon acceptance.
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 2939