CSGNN: Improving Graph Neural Networks with Contrastive Semi-supervised Learning

Published: 01 Jan 2022, Last Modified: 14 May 2025 · DASFAA (1) 2022 · License: CC BY-SA 4.0
Abstract: Graph Neural Networks (GNNs) are a rising family of graph analysis models that encode node features into low-dimensional representation vectors by aggregating local neighborhood information. Nevertheless, the performance of GNNs is limited because they are trained only on the labeled data. Hence, effectively incorporating the large number of unlabeled nodes into training can improve GNN performance. To address this issue, we propose a Contrastive Semi-supervised learning based GNN (CSGNN) that improves the GNN with extra supervision derived from contrastive learning. First, CSGNN uses a multi-loss contrastive objective to learn node representations by maximizing the agreement among nodes, edges, and labels across different views. Then, a semi-supervised fine-tuner learns from a few labeled examples while making the best use of the unlabeled nodes. Finally, we introduce knowledge distillation based on label reliability, which further distills the node labels predicted by contrastive learning into the GNN. Experiments show that CSGNN effectively improves the classification performance of GNNs and outperforms other state-of-the-art methods in accuracy on a variety of real-world datasets.
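To make the two extra training signals described in the abstract concrete, below is a minimal, hypothetical sketch: an InfoNCE-style contrastive loss that maximizes agreement between node embeddings from two graph views, and a distillation loss that transfers the contrastive model's pseudo-labels into the GNN, down-weighted by a per-node reliability score. All names (`nt_xent`, `reliability_weighted_distillation`, `tau`, using teacher confidence as the reliability score) are illustrative assumptions, not the paper's actual formulation or API.

```python
# Illustrative sketch only; not the authors' implementation.
import torch
import torch.nn.functional as F


def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoNCE-style loss: node i in view 1 should agree with node i in view 2."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                       # pairwise cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)


def reliability_weighted_distillation(student_logits: torch.Tensor,
                                      teacher_probs: torch.Tensor,
                                      reliability: torch.Tensor) -> torch.Tensor:
    """Distill the teacher's (contrastive model's) predicted labels into the GNN,
    weighting each node by how reliable its pseudo-label is assumed to be."""
    log_p = F.log_softmax(student_logits, dim=1)
    per_node_kl = F.kl_div(log_p, teacher_probs, reduction="none").sum(dim=1)
    return (reliability * per_node_kl).mean()


if __name__ == "__main__":
    # Toy tensors standing in for node embeddings and class predictions.
    n_nodes, dim, n_classes = 8, 16, 3
    z_view1, z_view2 = torch.randn(n_nodes, dim), torch.randn(n_nodes, dim)
    student_logits = torch.randn(n_nodes, n_classes)
    teacher_probs = F.softmax(torch.randn(n_nodes, n_classes), dim=1)
    reliability = teacher_probs.max(dim=1).values    # assumption: teacher confidence as reliability
    loss = nt_xent(z_view1, z_view2) \
        + reliability_weighted_distillation(student_logits, teacher_probs, reliability)
    print(loss.item())
```

In this sketch the reliability weight simply reuses the teacher's maximum class probability; the paper's label-reliability criterion may differ.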