Abstract: Developing an effective graph pretraining framework is a hot topic in current graph-related research, but its applications are still at a nascent stage. One key reason is that models based on spatial graph convolutions struggle to handle complex edge patterns and intricate graph signals effectively. Unlike spatial methods, spectral graph neural networks can learn filters automatically according to edge patterns and show great potential in transferring knowledge between graphs. Inspired by this, we propose a graph pretraining framework based on spectral methods to achieve positive transfer. To do this, we first use graph-specific mapping and classification layers to transform features for dimension alignment. Then, a novel spectral graph transferable model is proposed to capture the edge patterns of source graphs while learning the underlying transferable knowledge. By transferring the pretrained model to downstream tasks, our method achieves better performance on target graphs than its backbone method. Extensive experiments on six real-world datasets verify the effectiveness of our framework. Visualizations of the learned filters show that our method extracts distinct edge patterns from various graphs. Ablation experiments and hyper-parameter analysis are also conducted to evaluate our model.
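The abstract names two ingredients: graph-specific mapping/classification layers for dimension alignment, and a shared spectral model that learns transferable filters. Below is a minimal, hypothetical sketch of how such a setup could be wired together, using a learnable polynomial filter over the normalized Laplacian as the shared backbone; all class names, the polynomial order, and the hidden size are assumptions for illustration, not the authors' actual architecture.

```python
# Hypothetical sketch (not the paper's implementation):
# (1) per-graph linear mappings align heterogeneous feature dimensions,
# (2) a shared learnable polynomial spectral filter g(L) = sum_k theta_k L^k
#     captures edge patterns and carries the transferable knowledge,
# (3) per-graph classification heads handle differing label spaces.
import torch
import torch.nn as nn


def normalized_laplacian(adj: torch.Tensor) -> torch.Tensor:
    """L = I - D^{-1/2} A D^{-1/2} for a dense adjacency matrix."""
    deg = adj.sum(dim=1)
    d_inv_sqrt = torch.where(deg > 0, deg.pow(-0.5), torch.zeros_like(deg))
    norm_adj = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)
    return torch.eye(adj.size(0)) - norm_adj


class PolynomialSpectralFilter(nn.Module):
    """Shared filter: out = sum_k theta_k L^k x, followed by a linear layer."""

    def __init__(self, hidden_dim: int, order: int = 3):
        super().__init__()
        self.theta = nn.Parameter(torch.randn(order + 1) * 0.1)  # filter coefficients
        self.lin = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, lap: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        out, term = self.theta[0] * x, x
        for k in range(1, self.theta.numel()):
            term = lap @ term               # L^k x, computed iteratively
            out = out + self.theta[k] * term
        return torch.relu(self.lin(out))


class SpectralTransferModel(nn.Module):
    """Graph-specific input/output layers around a shared spectral backbone."""

    def __init__(self, in_dims: dict, num_classes: dict, hidden_dim: int = 64):
        super().__init__()
        self.map_in = nn.ModuleDict({g: nn.Linear(d, hidden_dim) for g, d in in_dims.items()})
        self.backbone = PolynomialSpectralFilter(hidden_dim)
        self.clf = nn.ModuleDict({g: nn.Linear(hidden_dim, c) for g, c in num_classes.items()})

    def forward(self, graph_name: str, adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        lap = normalized_laplacian(adj)
        h = self.map_in[graph_name](x)      # graph-specific dimension alignment
        h = self.backbone(lap, h)           # shared, transferable spectral filter
        return self.clf[graph_name](h)      # graph-specific classification head


# Toy usage: two source graphs with different feature dimensions and label counts.
model = SpectralTransferModel(in_dims={"g1": 16, "g2": 32}, num_classes={"g1": 3, "g2": 7})
adj = (torch.rand(10, 10) > 0.7).float()
adj = ((adj + adj.t()) > 0).float()
logits = model("g1", adj, torch.randn(10, 16))
print(logits.shape)  # torch.Size([10, 3])
```

Under this reading, pretraining would fit the shared backbone across source graphs while the per-graph layers absorb dataset-specific dimensions, and the backbone alone would then be transferred to the target graph.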
External IDs: dblp:conf/pakdd/JinZLLZYLP25