Graph structure prompt learning: A novel methodology to improve performance of graph neural networks

Published: 2025 · Last Modified: 22 Jan 2026 · Appl. Intell. 2025 · CC BY-SA 4.0
Abstract: Graph neural networks (GNNs) are widely used to model graph data. However, most existing GNNs are trained in a task-driven manner focused on proximity learning, which fails to fully capture the intrinsic nature of the graph structure and yields suboptimal node and graph representations. To address this issue, we present a novel, generalized training method called graph structure prompt learning (GPL), inspired by prompt mechanisms in natural language processing. GPL employs task-independent graph structure losses that encourage GNNs to learn intrinsic graph characteristics while simultaneously solving downstream tasks, thus producing higher-quality node and graph representations. Extensive experiments on eleven real-world datasets and twenty GNNs demonstrate that GNNs trained with GPL significantly outperform their baselines on node classification, graph classification, and edge prediction tasks, with improvements of up to 10.28%, 16.5%, and 24.15%, respectively. By enabling GNNs to capture the inherent structural prompts of graphs, GPL also mitigates the over-smoothing issue and achieves new state-of-the-art performance, paving the way for new research directions in GNNs with potential applications across various domains.
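The core idea of the abstract — combining a downstream task loss with a task-independent graph structure loss — can be sketched as follows. This is a minimal illustration, not the paper's actual method: the specific structure loss here (edge reconstruction from node-embedding similarity), the mixing weight `lam`, and the function name `gpl_style_loss` are all assumptions for demonstration.

```python
import numpy as np

def gpl_style_loss(embeddings, logits, labels, adj, lam=0.5):
    """Hypothetical joint objective: task loss + lam * structure loss.

    embeddings: (n, d) node embeddings from a GNN
    logits:     (n, c) class scores for the downstream node-classification task
    labels:     (n,)   integer class labels
    adj:        (n, n) binary adjacency matrix
    """
    # Downstream task loss: softmax cross-entropy over node labels.
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    task_loss = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

    # Task-independent structure loss: predict edges from embedding similarity
    # (sigmoid of dot products), scored against the true adjacency matrix.
    scores = 1.0 / (1.0 + np.exp(-(embeddings @ embeddings.T)))
    struct_loss = -np.mean(
        adj * np.log(scores + 1e-12) + (1 - adj) * np.log(1 - scores + 1e-12)
    )

    return task_loss + lam * struct_loss
```

Because the structure term depends only on the graph itself (not on the labels), it acts as a built-in "prompt" that pushes the GNN toward representations preserving graph topology, regardless of which downstream task is being solved.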