On Locality in Graph Learning via Graph Neural Network

29 Sept 2021 (modified: 13 Feb 2023), ICLR 2022 Conference Withdrawn Submission
Keywords: Graph Neural Network, Structural Behavior, Learning Process
Abstract: Theoretical understanding of the learning process of graph neural networks (GNNs) has been lacking. The common practice in GNN training is to adapt strategies from other machine learning families, despite the striking differences between learning from non-graph and graph-structured data. This results in unstable learning performance (e.g., accuracy) for GNNs. In this paper, we study how the choice of training set in the input graph affects the performance of GNNs. By combining the topology awareness of GNNs with the topological dependence among data samples, we formally derive a structural relation between GNN performance and the coverage of the training set in the graph: the distance from the training set to the rest of the vertices in the graph is negatively correlated with the learning outcome of the GNN. We further validate our theory on different graph datasets with extensive experiments. Using the derived result as guidance, we also investigate the initial data labelling problem in active learning for GNNs, and show that locality-aware data labelling substantially outperforms the prevailing random sampling approach.
One-sentence Summary: This paper studies the structural relation between the training set and the performance of GNNs.
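
A minimal sketch of the quantities the abstract alludes to, assuming networkx and unweighted shortest-path distance; the paper's exact coverage measure and labelling strategy are not specified in the abstract, so coverage_distance, greedy_k_center, and the Watts-Strogatz test graph below are illustrative assumptions rather than the authors' method.

# Illustrative sketch only: assumes the "distance of the training set to the
# rest of the vertices" means the average shortest-path distance from each
# vertex to its nearest labelled vertex, and uses a greedy k-center heuristic
# as one possible locality-aware labelling strategy.
import random
import networkx as nx


def coverage_distance(graph, train_set):
    """Average shortest-path distance from every vertex to its nearest
    training vertex (lower means the training set covers the graph better)."""
    dists = []
    for v in graph.nodes:
        d = min(nx.shortest_path_length(graph, source=v, target=t)
                for t in train_set)
        dists.append(d)
    return sum(dists) / len(dists)


def greedy_k_center(graph, k, seed=0):
    """Pick k vertices that greedily reduce the maximum distance from any
    vertex to the selected set (a simple locality-aware labelling heuristic)."""
    rng = random.Random(seed)
    selected = [rng.choice(list(graph.nodes))]
    # Distance from every vertex to the closest selected vertex so far.
    dist = dict(nx.shortest_path_length(graph, source=selected[0]))
    while len(selected) < k:
        farthest = max(dist, key=dist.get)  # least-covered vertex
        selected.append(farthest)
        new_dist = dict(nx.shortest_path_length(graph, source=farthest))
        for v in dist:
            dist[v] = min(dist[v], new_dist[v])
    return selected


if __name__ == "__main__":
    g = nx.connected_watts_strogatz_graph(n=200, k=6, p=0.1, seed=0)
    budget = 10
    random_train = random.Random(0).sample(list(g.nodes), budget)
    local_train = greedy_k_center(g, budget)
    print("random sampling coverage distance :", coverage_distance(g, random_train))
    print("k-center (locality-aware) distance:", coverage_distance(g, local_train))

Under these assumptions, the locality-aware selection typically yields a smaller coverage distance than random sampling, which is the direction the abstract's negative correlation would favour.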