FASTRAIN-GNN: Fast and Accurate Self-Training for Graph Neural Networks

Published: 17 Apr 2023, Last Modified: 17 Apr 2023, Accepted by TMLR
Abstract: Few-shot learning with Graph Neural Networks (GNNs) is an important challenge in extending the remarkable success that GNNs have achieved. In the transductive node classification scenario, conventional supervised training methods for GNNs fail when only a few labeled nodes are available. Self-training, wherein the GNN is trained in stages by augmenting the training data with a subset of the unlabeled nodes together with the GNN's predictions on them (pseudolabels), has emerged as a promising approach to few-shot transductive learning. However, multi-stage self-training significantly increases the computational demands of GNN training. In addition, while the training set evolves considerably across the stages of self-training, the GNN architecture, graph topology, and training hyperparameters are kept constant, adversely affecting both the accuracy of the resulting model and the computational efficiency of training. To address this challenge, we propose FASTRAIN-GNN, a framework for efficient and accurate self-training of GNNs with few labeled nodes. FASTRAIN-GNN performs four main optimizations in each stage of self-training: (1) Sampling-based Pseudolabel Filtering removes nodes whose pseudolabels are likely to be incorrect from the enlarged training set; (2) Dynamic Sizing and (3) Dynamic Regularization find the optimal network architecture and amount of training regularization, respectively, in each stage of self-training; and (4) Progressive Graph Pruning removes selected edges between nodes in the training set to reduce the impact of over-smoothing. On few-shot node classification tasks using different GNN architectures, FASTRAIN-GNN produces models that are consistently more accurate (by up to 4.4%), while also substantially reducing the self-training time (by up to 2.1X) over the current state-of-the-art methods. Code is available at
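To make the staged self-training procedure described in the abstract concrete, here is a minimal, framework-free sketch of its core loop: train, pseudolabel the unlabeled data, keep only high-confidence pseudolabels (a simple stand-in for the paper's Sampling-based Pseudolabel Filtering), and enlarge the training set. The `ToyClassifier`, the 1-D features, and the confidence threshold are all illustrative assumptions, not the paper's actual GNN or filtering criterion.

```python
# Toy sketch of multi-stage self-training with confidence-based
# pseudolabel filtering. NOT the paper's exact FASTRAIN-GNN method:
# ToyClassifier and the threshold below are hypothetical stand-ins.

class ToyClassifier:
    """Nearest-centroid classifier over 1-D features."""

    def fit(self, xs, ys):
        sums, counts = {}, {}
        for x, y in zip(xs, ys):
            sums[y] = sums.get(y, 0.0) + x
            counts[y] = counts.get(y, 0) + 1
        self.centroids = {y: sums[y] / counts[y] for y in sums}

    def predict_with_confidence(self, x):
        # Confidence: normalized inverse distance to the nearest centroid.
        dists = {y: abs(x - c) for y, c in self.centroids.items()}
        label = min(dists, key=dists.get)
        total = sum(1.0 / (d + 1e-9) for d in dists.values())
        conf = (1.0 / (dists[label] + 1e-9)) / total
        return label, conf


def self_train(labeled, unlabeled, stages=3, threshold=0.8):
    """Each stage: retrain on the current training set, pseudolabel the
    remaining unlabeled points, and accept only confident pseudolabels."""
    train = list(labeled)          # list of (feature, label) pairs
    model = ToyClassifier()
    for _ in range(stages):
        model.fit([x for x, _ in train], [y for _, y in train])
        remaining = []
        for x in unlabeled:
            label, conf = model.predict_with_confidence(x)
            if conf >= threshold:
                train.append((x, label))   # accept pseudolabel
            else:
                remaining.append(x)        # filter out low-confidence point
        unlabeled = remaining
    return model, train
```

In FASTRAIN-GNN, each stage would additionally resize the network, adjust regularization, and prune graph edges; this sketch only captures the shared train-pseudolabel-filter-retrain skeleton.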
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Added results on heterophilous graphs. Added error bars to figures and tables. Added a discussion section detailing limitations of the proposed method and potential directions for future work. Added a section on hyperparameter tuning describing the hyperparameters used in our experiments and how they were chosen.
Assigned Action Editor: ~Pin-Yu_Chen1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 750