Stabilized Self-training with Negative Sampling on Few-labeled Graph Data

Published: 28 Jan 2022, Last Modified: 13 Feb 2023 · ICLR 2022 Submitted · Readers: Everyone
Abstract: Graph neural networks (GNNs) are designed for semi-supervised node classification on graphs where only a small subset of nodes have class labels. However, in extreme cases where very few labels are available (e.g., 1 labeled node per class), GNNs suffer from severe degradation in result quality. Specifically, we observe that existing GNNs have an unstable training process on few-labeled graph data, resulting in inferior performance on node classification. Therefore, we propose an effective framework, Stabilized self-training with Negative sampling (SN), which is applicable to existing GNNs to stabilize the training process and enhance the training data, and consequently boost classification accuracy on graphs with few labeled data. In experiments, we apply our SN framework to two existing GNN base models (GCN and DAGNN) to obtain SNGCN and SNDAGNN, and evaluate the two methods against 13 existing solutions over 4 benchmark datasets. Extensive experiments show that the proposed SN framework is highly effective compared with existing solutions, especially in settings with very few labeled data. In particular, on the benchmark dataset Cora with only 1 labeled node per class, while GCN achieves only 44.6% accuracy, SNGCN achieves 62.5% accuracy, improving on GCN by 17.9 percentage points; SNDAGNN achieves 66.4% accuracy, improving on the base model DAGNN (59.8%) by 6.6 percentage points.
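The abstract describes enhancing the training data by combining self-training with negative sampling. As a rough illustration only (the paper's exact algorithm is not given here; the function name, thresholds, and selection rule below are assumptions), one common way to realize this idea is to add confident model predictions as positive pseudo-labels and near-zero-probability classes as negative labels:

```python
import numpy as np

def expand_training_set(probs, labeled_mask, pos_thresh=0.9, neg_thresh=0.05):
    """Illustrative sketch of self-training data enhancement (not the paper's
    exact SN algorithm; thresholds are hypothetical).

    probs: (N, C) array of softmax class probabilities from a trained GNN.
    labeled_mask: (N,) boolean array marking nodes with ground-truth labels.
    Returns (pseudo, negative): positive pseudo-labels for confident nodes,
    and per-node lists of classes the node almost surely does NOT belong to.
    """
    pseudo, negative = {}, {}
    n_nodes, n_classes = probs.shape
    for v in range(n_nodes):
        if labeled_mask[v]:
            continue  # keep ground-truth labels untouched
        p = probs[v]
        c = int(p.argmax())
        if p[c] >= pos_thresh:
            pseudo[v] = c  # confident prediction becomes a pseudo-label
        negs = [k for k in range(n_classes) if p[k] <= neg_thresh]
        if negs:
            negative[v] = negs  # negative samples: near-zero-probability classes
    return pseudo, negative
```

In a self-training loop, the base GNN would be retrained with a loss that rewards the positive pseudo-labels and penalizes probability mass on the negative classes; iterating this expands the effective training set beyond the single labeled node per class.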
