Abstract: The graph convolutional network (GCN) recently proposed by Kipf and Welling is an effective graph model for semi-supervised learning. Such a model, however, is transductive in nature, because its parameters are learned through convolutions over both training and test data. Moreover, the recursive neighborhood expansion across layers poses time and memory challenges when training on large, dense graphs. To relax the requirement that test data be available at training time, we interpret graph convolutions as integral transforms of embedding functions under probability measures. This interpretation allows the use of Monte Carlo approaches to consistently estimate the integrals, which in turn leads to the batched training scheme we propose in this work, FastGCN. Enhanced with importance sampling, FastGCN is not only efficient for training but also generalizes well for inference. We present a comprehensive set of experiments demonstrating its effectiveness compared with GCN and related models. In particular, training is orders of magnitude faster while predictions remain comparably accurate.
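The sampling scheme the abstract describes can be sketched concretely: each layer's product ÂHW is estimated by sampling a small set of nodes with probability proportional to the squared column norms of the normalized adjacency Â, then rescaling so the estimator is unbiased. Below is a minimal NumPy sketch, not the authors' implementation; the function name `fastgcn_layer`, the toy graph, and the ReLU choice are assumptions for illustration.

```python
import numpy as np

def fastgcn_layer(A_hat, H, W, num_samples, rng):
    """Monte Carlo estimate of relu(A_hat @ H @ W) via importance sampling.

    A_hat : (N, N) normalized adjacency matrix
    H     : (N, d_in) input embeddings
    W     : (d_in, d_out) layer weights
    """
    N = A_hat.shape[0]
    # Importance distribution: q(u) proportional to ||A_hat[:, u]||^2,
    # which reduces the variance of the estimator.
    q = np.square(A_hat).sum(axis=0)
    q = q / q.sum()
    # Draw num_samples nodes i.i.d. from q (with replacement).
    idx = rng.choice(N, size=num_samples, p=q)
    # Rescale each sampled column by 1 / (num_samples * q(u)) so the
    # sum over samples is an unbiased estimate of A_hat @ H.
    scale = 1.0 / (num_samples * q[idx])
    AH_est = (A_hat[:, idx] * scale) @ H[idx]
    return np.maximum(AH_est @ W, 0.0)  # ReLU nonlinearity

# Example: a 2-layer forward pass sampling 64 nodes per layer.
rng = np.random.default_rng(0)
N, d0, d1, d2 = 1000, 32, 16, 7
A_hat = rng.random((N, N)) * (rng.random((N, N)) < 0.01)  # toy sparse graph
X = rng.standard_normal((N, d0))
W1, W2 = rng.standard_normal((d0, d1)), rng.standard_normal((d1, d2))
H1 = fastgcn_layer(A_hat, X, W1, num_samples=64, rng=rng)
out = fastgcn_layer(A_hat, H1, W2, num_samples=64, rng=rng)
```

Because each layer samples nodes independently rather than expanding neighborhoods recursively, the per-batch cost is controlled by `num_samples` instead of growing with graph size; in practice the final layer would typically omit the nonlinearity and feed a softmax classifier.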
Keywords: Graph convolutional networks, importance sampling
Code: [matenure/FastGCN](https://github.com/matenure/FastGCN) + [2 community implementations](https://paperswithcode.com/paper/?openreview=rytstxWAW)
Data: [Citeseer](https://paperswithcode.com/dataset/citeseer), [Cora](https://paperswithcode.com/dataset/cora), [Pubmed](https://paperswithcode.com/dataset/pubmed), [Reddit](https://paperswithcode.com/dataset/reddit)
Community Implementations: [4 code implementations](https://www.catalyzex.com/paper/fastgcn-fast-learning-with-graph/code)