DAGPrompT: Pushing the Limits of Graph Prompting with a Distribution-aware Graph Prompt Tuning Approach

Published: 29 Jan 2025, Last Modified: 29 Jan 2025, WWW 2025 Poster, CC BY 4.0
Track: Graph algorithms and modeling for the Web
Keywords: graph neural networks, graph prompting, few-shot learning
TL;DR: We propose DAGPrompT, which pushes the limits of graph prompting methods to graphs with complex distributions.
Abstract:

The "pre-training then fine-tuning" paradigm has advanced Graph Neural Networks (GNNs) by enabling the capture of general knowledge without task-specific labels. However, a significant objective gap between pre-training and downstream tasks limits their effectiveness. Recent graph prompting methods aim to bridge this gap by task reformulations and learnable prompts. Yet, they struggle with complex graphs like heterophily graphs—freezing the GNN encoder may diminish prompting effectiveness, and simple prompts fail to capture diverse hop-level distributions. This paper identifies two key challenges in adapting graph prompting methods for complex graphs: (i) adapting the model to new distributions in downstream tasks to mitigate pre-training and fine-tuning discrepancies from heterophily and (ii) customizing prompts for hop-specific node requirements. To overcome these challenges, we propose Distribution-aware Graph Prompt Tuning (DAGPrompT), which integrates a GLoRA module for optimizing the GNN encoder’s projection matrix and message-passing schema through low-rank adaptation. DAGPrompT also incorporates hop-specific prompts accounting for varying graph structures and distributions among hops. Evaluations on 10 datasets and 14 baselines demonstrate that DAGPrompT improves accuracy by up to 7.55% in node and graph classification tasks, setting a new state-of-the-art while preserving efficiency. We provide our code and data via AnonymousGithub.

Submission Number: 782
