Does Graph Prompt Work? A Data Operation Perspective with Theoretical Analysis

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We offer theoretical proof of why, and to what extent, graph prompting works.
Abstract: In recent years, graph prompting has emerged as a promising research direction: additional tokens or subgraphs are learned and appended to original graphs, requiring no retraining of the pre-trained graph model across various applications. This novel paradigm, shifting from the traditional "pre-training and fine-tuning" to "pre-training and prompting," has shown significant empirical success in simulating graph data operations, with applications ranging from recommendation systems to biological networks and graph transfer learning. However, despite its potential, the theoretical underpinnings of graph prompting remain underexplored, raising critical questions about its fundamental effectiveness. The absence of rigorous proof of why and to what extent it works hangs like a "dark cloud" over deeper research in the graph prompting area. To fill this gap, this paper introduces a theoretical framework that rigorously analyzes graph prompting from a data operation perspective. Our contributions are threefold: **First**, we provide a formal guarantee theorem demonstrating graph prompts' capacity to approximate graph transformation operators, effectively linking upstream pre-training and downstream tasks. **Second**, we derive upper bounds on the error of these data operations for a single graph and extend the discussion to batches of graphs, which are common in graph model training. **Third**, we analyze the distribution of data operation errors, extending our theoretical findings from linear graph models (e.g., GCN) to non-linear graph models (e.g., GAT). Extensive experiments support our theoretical results and confirm the practical implications of these guarantees.
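To make the "prompting as a data operation" idea concrete, below is a minimal PyTorch sketch of the general setup the abstract describes: a learnable prompt token is appended as an extra node to the input graph, and only the prompt is trained while the pre-trained graph model stays frozen. This is an illustrative sketch, not the paper's implementation; the names `FrozenGCN` and `GraphPrompt` and the toy loss are assumptions for demonstration.

```python
import torch
import torch.nn as nn

class FrozenGCN(nn.Module):
    """Toy one-layer GCN standing in for a pre-trained graph model (hypothetical)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, adj, x):
        # Symmetrically normalized adjacency, then a linear feature map.
        deg = adj.sum(dim=1).clamp(min=1.0)
        d_inv_sqrt = deg.pow(-0.5)
        adj_norm = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)
        return adj_norm @ self.lin(x)

class GraphPrompt(nn.Module):
    """Learnable prompt token inserted as an extra node connected to all nodes."""
    def __init__(self, feat_dim):
        super().__init__()
        self.token = nn.Parameter(torch.empty(1, feat_dim))
        nn.init.xavier_uniform_(self.token)

    def forward(self, adj, x):
        n = x.size(0)
        # Append the prompt token as a new node's feature vector.
        x_p = torch.cat([x, self.token], dim=0)
        # Extend the adjacency: connect the prompt node to every original node.
        adj_p = torch.zeros(n + 1, n + 1)
        adj_p[:n, :n] = adj
        adj_p[n, :n] = 1.0
        adj_p[:n, n] = 1.0
        adj_p[n, n] = 1.0  # self-loop on the prompt node
        return adj_p, x_p

# The pre-trained model is frozen; only the prompt parameters receive gradients.
gnn = FrozenGCN(in_dim=16, out_dim=8)
for p in gnn.parameters():
    p.requires_grad_(False)

prompt = GraphPrompt(feat_dim=16)
optimizer = torch.optim.Adam(prompt.parameters(), lr=1e-2)

# A random symmetric toy graph with 10 nodes.
adj = (torch.rand(10, 10) > 0.7).float()
adj = ((adj + adj.T) > 0).float()
x = torch.randn(10, 16)

adj_p, x_p = prompt(adj, x)
emb = gnn(adj_p, x_p)[:10]   # embeddings of the original nodes
loss = emb.pow(2).mean()     # placeholder for a downstream task loss
loss.backward()
optimizer.step()
```

In this view, training the prompt amounts to learning a transformation of the input graph (extra node, extra edges, extra features) so that the frozen model's outputs suit the downstream task, which is exactly the graph transformation operator the paper's guarantee theorem concerns.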
Lay Summary: In recent years, "Graph Prompting" has gained attention as a way to adapt machine learning models to new tasks by making simple changes to graph data, without altering the model itself. However, its effectiveness has not been clearly understood or supported by theory. In our paper, we explore graph prompting from a "data operation" perspective and provide a theoretical explanation of why it works. We also validate our findings through extensive experiments, showing that graph prompting can be a powerful tool for real-world applications.
Link To Code: https://github.com/qunzhongwang/dgpw
Primary Area: Deep Learning->Graph Neural Networks
Keywords: graph prompting, graph neural networks
Submission Number: 7066