DAGPrompT: Pushing the Limits of Graph Prompting with a Distribution-aware Graph Prompt Tuning Approach
The "pre-training then fine-tuning" paradigm has advanced Graph Neural Networks (GNNs) by enabling the capture of general knowledge without task-specific labels. However, a significant objective gap between pre-training and downstream tasks limits their effectiveness. Recent graph prompting methods aim to bridge this gap by task reformulations and learnable prompts. Yet, they struggle with complex graphs like heterophily graphs—freezing the GNN encoder may diminish prompting effectiveness, and simple prompts fail to capture diverse hop-level distributions. This paper identifies two key challenges in adapting graph prompting methods for complex graphs: (i) adapting the model to new distributions in downstream tasks to mitigate pre-training and fine-tuning discrepancies from heterophily and (ii) customizing prompts for hop-specific node requirements. To overcome these challenges, we propose Distribution-aware Graph Prompt Tuning (DAGPrompT), which integrates a GLoRA module for optimizing the GNN encoder’s projection matrix and message-passing schema through low-rank adaptation. DAGPrompT also incorporates hop-specific prompts accounting for varying graph structures and distributions among hops. Evaluations on 10 datasets and 14 baselines demonstrate that DAGPrompT improves accuracy by up to 7.55% in node and graph classification tasks, setting a new state-of-the-art while preserving efficiency. We provide our code and data via AnonymousGithub.