Redundancy-Free Computation Graphs for Graph Neural Networks

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission
TL;DR: We present Hierarchically Aggregated computation Graphs (HAGs), a new GNN graph representation that explicitly avoids redundant computations in GNN training and inference.
Abstract: Graph Neural Networks (GNNs) are based on repeated aggregations of information across nodes’ neighbors in a graph. However, because common neighbors are shared between different nodes, this leads to repeated and inefficient computations. We propose Hierarchically Aggregated computation Graphs (HAGs), a new GNN graph representation that explicitly avoids redundancy by managing intermediate aggregation results hierarchically, eliminating repeated computations and unnecessary data transfers in GNN training and inference. We introduce an accurate cost function to quantitatively evaluate the runtime performance of different HAGs and use a novel search algorithm to find optimized HAGs. Experiments show that the HAG representation significantly outperforms the standard GNN graph representation, increasing end-to-end training throughput by up to 2.8x and reducing the aggregations and data transfers in GNN training by up to 6.3x and 5.6x, respectively. Meanwhile, HAGs improve runtime performance while preserving the GNN computation, maintaining the original model accuracy for arbitrary GNNs.
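To make the redundancy described in the abstract concrete, here is a minimal Python sketch. It is not the authors' implementation; the toy graph, feature dimensions, and sum aggregator are assumptions chosen for illustration. It shows how computing a shared partial aggregate once, HAG-style, replaces the repeated per-node summation of a common neighbor pair:

```python
import numpy as np

# Toy graph: nodes 2 and 3 share the neighbor pair {0, 1}.
neighbors = {0: [1], 1: [0], 2: [0, 1], 3: [0, 1]}
h = np.random.rand(4, 8)  # node features: 4 nodes, 8 dimensions

# Standard GNN aggregation: each node sums its neighbors independently,
# so h[0] + h[1] is computed twice (once for node 2, once for node 3).
naive = {v: sum(h[u] for u in nbrs) for v, nbrs in neighbors.items()}

# HAG-style aggregation (illustrative): the shared pair becomes an
# intermediate aggregation node, computed once and reused.
shared_01 = h[0] + h[1]
hag = {0: h[1], 1: h[0], 2: shared_01, 3: shared_01}

# Both representations produce identical aggregation results,
# which is why HAGs preserve the original model accuracy.
assert all(np.allclose(naive[v], hag[v]) for v in neighbors)
```

On real graphs with many shared neighbor subsets, the paper's cost function and search algorithm decide which intermediate aggregates are worth materializing; this sketch hard-codes one for clarity.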
Code: https://www.dropbox.com/sh/pce8j67byr6bq83/AACh0UKkua0toPM_ZjtTTVava?dl=0
Keywords: Graph Neural Networks, Runtime Performance