DeeperGCN: All You Need to Train Deeper GCNs

29 Sept 2021 (modified: 13 Feb 2023), ICLR 2022 Conference Withdrawn Submission
Keywords: Graph Neural Networks, Graph Representation Learning
Abstract: Graph Neural Networks (GNNs) have drawn significant attention for the power of representation learning on graphs. Recent works have developed frameworks for training very deep GNNs, showing impressive results on tasks such as point cloud learning and protein interaction prediction. In this work, we study the performance of such deep models on large-scale graph datasets from the Open Graph Benchmark (OGB). In particular, we examine how the choice of aggregation function affects final performance. Common choices of aggregation are mean, max, and sum, and it has been shown that GNNs are sensitive to this choice when applied to different datasets. We systematically study this point on large-scale graphs and propose to alleviate it by introducing novel generalized aggregation functions. The proposed aggregation functions extend beyond the commonly used ones; they are fully differentiable, so their parameters can be learned in an end-to-end fashion. We show that deep residual GNNs equipped with generalized aggregation functions achieve state-of-the-art results on several OGB benchmarks across tasks and domains.
One-sentence Summary: We show that deep residual GNNs equipped with generalized aggregation functions achieve state-of-the-art results in several benchmarks from OGB across tasks and domains.
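
The abstract does not spell out the functional form of the generalized aggregation. As a minimal sketch, assuming the softmax-weighted form with a learnable temperature beta used in the DeeperGCN line of work (which interpolates between mean and max over a node's neighbors), one could implement it in PyTorch roughly as follows; the class and parameter names here are illustrative, not taken from the paper's code:

```python
import torch
import torch.nn as nn


class SoftMaxAggregation(nn.Module):
    """Illustrative generalized aggregator with a learnable temperature beta.

    Given neighbor features X of shape [num_neighbors, dim], computes
    sum_j softmax(beta * X)[j] * X[j] elementwise over the neighbor axis.
    As beta -> 0 this approaches the mean aggregator; as beta -> +inf
    it approaches max. Because beta is an nn.Parameter, it is trained
    end-to-end along with the rest of the network.
    """

    def __init__(self, beta: float = 1.0, learnable: bool = True):
        super().__init__()
        beta_tensor = torch.tensor(float(beta))
        self.beta = nn.Parameter(beta_tensor) if learnable else beta_tensor

    def forward(self, neighbor_feats: torch.Tensor) -> torch.Tensor:
        # Softmax over the neighbor dimension (dim=0), scaled by beta,
        # then a weighted sum of the neighbor features.
        weights = torch.softmax(self.beta * neighbor_feats, dim=0)
        return (weights * neighbor_feats).sum(dim=0)


# Usage: aggregate 5 neighbor feature vectors of dimension 8.
agg = SoftMaxAggregation(beta=1.0)
out = agg(torch.randn(5, 8))  # shape: [8]
```

Since the whole expression is differentiable in beta, gradients flow through the aggregation step, which is what allows the choice of aggregator to be learned per dataset rather than fixed a priori.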