Pre-training Graph Neural Networks via Weighted Meta Learning

Published: 01 Jan 2024, Last Modified: 02 Aug 2025 · IJCNN 2024 · CC BY-SA 4.0
Abstract: Recent research has demonstrated that pre-training Graph Neural Networks (GNNs) via meta learning can enhance their ability to learn representations from unlabeled data. The main idea is to learn transferable priors that work across a distribution of tasks. However, existing methods often rely on a uniform task sampling strategy and ignore the relations between the sampled tasks and the original graphs they are drawn from, which may lead the model to learn redundant information during pre-training. In this work, we propose Meta Graph Neural Network (MGNN), a graph pre-training method via weighted meta learning. MGNN aims to learn transferable experience from diverse distributions without the side effects of redundant information. First, we break up a task in traditional meta learning into several sub-tasks to construct the minimum evaluation units. Then, to fully utilize the graph information, we use a two-stage optimization with contrastive loss functions to learn experience priors at the node and graph levels during meta-training. Third, to quantify redundant information, we design an evaluation module that calculates mutual information from a graph view. Finally, we propose a new optimization objective for the meta-testing process, which reduces the model's attention to redundant information and thus alleviates its negative impact. To validate the effectiveness of the proposed method, extensive experiments are conducted on the Cora, Citeseer, and Pubmed datasets with several GNN architectures. Experimental results show that the proposed method outperforms existing GNN pre-training algorithms.
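To make the weighted meta-update idea concrete, the following is a minimal sketch, not the authors' implementation: it assumes a Reptile-style first-order meta-update in place of the paper's two-stage optimization, a hypothetical (1 - redundancy) weighting rule standing in for the mutual-information-based evaluation module, and a linear `Encoder` as a placeholder for any GNN. Each sub-task's contribution to the meta-update is scaled down in proportion to its redundancy score, mirroring the abstract's goal of reducing the model's attention to redundant information.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Stand-in for a GNN encoder; any nn.Module slots in here."""
    def __init__(self, dim_in=16, dim_hidden=8):
        super().__init__()
        self.lin = nn.Linear(dim_in, dim_hidden)

    def forward(self, x):
        return self.lin(x)

def contrastive_loss(z1, z2, temperature=0.5):
    """InfoNCE-style loss between two views of the same nodes (illustrative)."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0))  # matching rows are positive pairs
    return F.cross_entropy(logits, labels)

@torch.no_grad()
def interpolate(meta, adapted, step):
    """Move meta-parameters toward task-adapted parameters (Reptile-style)."""
    for p_meta, p_task in zip(meta.parameters(), adapted.parameters()):
        p_meta += step * (p_task - p_meta)

def weighted_meta_step(meta_model, sub_tasks, inner_lr=0.01, outer_lr=0.1):
    """One outer step. Each sub-task is (view1, view2, redundancy in [0, 1]);
    its meta-update is scaled by (1 - redundancy), a hypothetical weighting
    rule standing in for the paper's mutual-information-based one."""
    weights = torch.tensor([1.0 - r for _, _, r in sub_tasks])
    weights = weights / weights.sum()  # normalize across the task batch
    for (x1, x2, _), w in zip(sub_tasks, weights):
        adapted = copy.deepcopy(meta_model)
        opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
        for _ in range(3):  # a few inner adaptation steps per sub-task
            opt.zero_grad()
            loss = contrastive_loss(adapted(x1), adapted(x2))
            loss.backward()
            opt.step()
        interpolate(meta_model, adapted, outer_lr * w.item())

# Usage: three sub-tasks with increasing (hypothetical) redundancy scores;
# the third contributes least to the meta-update.
model = Encoder()
tasks = [(torch.randn(32, 16), torch.randn(32, 16), r) for r in (0.1, 0.5, 0.9)]
weighted_meta_step(model, tasks)
```

The down-weighting here happens in the outer loop only; where exactly the redundancy score enters (inner loss, outer step, or both) is a design choice this sketch does not settle.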