IGLU: Efficient GCN Training via Lazy Updates

Published: 28 Jan 2022, Last Modified: 22 Oct 2023. ICLR 2022 Poster.
Keywords: Graph convolutional networks, Graph neural networks, Optimization, Lazy updates
Abstract: Training multi-layer Graph Convolutional Networks (GCNs) using standard SGD techniques scales poorly, as each descent step ends up updating node embeddings for a large portion of the graph. Recent attempts to remedy this sub-sample the graph, which reduces compute but introduces additional variance and may offer suboptimal performance. This paper develops the IGLU method, which caches intermediate computations at various GCN layers, thus enabling lazy updates that significantly reduce the compute cost of descent. IGLU introduces bounded bias into the gradients but nevertheless converges to a first-order saddle point under standard assumptions such as objective smoothness. Benchmark experiments show that IGLU offers up to 1.2% better accuracy despite requiring up to 88% less compute.
One-sentence Summary: IGLU is a novel lazy update-based optimization technique for accelerated GCN training with provable convergence guarantees
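To make the caching idea concrete, below is a minimal toy sketch (not the paper's actual implementation) of lazy updates in a two-layer GCN-style model: the first layer's output is cached and reused across several descent steps on the last layer's weights, so earlier layers are not recomputed at every step. All variable names (`H1_cache`, `refresh_every`, etc.) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: row-normalized adjacency over 6 nodes, random features.
n, d = 6, 4
A = rng.random((n, n))
A = (A + A.T) / 2
A /= A.sum(axis=1, keepdims=True)          # row-normalized adjacency
X = rng.standard_normal((n, d))            # input node features
W1 = rng.standard_normal((d, d)) * 0.1     # first-layer weights
W2 = rng.standard_normal((d, 1)) * 0.1     # last-layer weights
y = rng.standard_normal((n, 1))            # regression targets

def first_layer(W1):
    # One GCN-style propagation + ReLU.
    return np.maximum(A @ X @ W1, 0.0)

def loss(W2, H1):
    return float(np.mean((A @ H1 @ W2 - y) ** 2))

# Cache of the intermediate layer output; refreshed only occasionally,
# so most steps use a (boundedly) stale embedding.
H1_cache = first_layer(W1)
loss_before = loss(W2, H1_cache)

lr, refresh_every = 0.05, 5
for step in range(20):
    # Lazy update: the gradient w.r.t. W2 uses the stale cached H1,
    # avoiding recomputation of earlier layers at every step.
    pred = A @ H1_cache @ W2
    grad_W2 = 2 * (A @ H1_cache).T @ (pred - y) / n
    W2 -= lr * grad_W2
    if (step + 1) % refresh_every == 0:
        # Periodic cache refresh (a full implementation would also
        # update W1 and the other layers here).
        H1_cache = first_layer(W1)

loss_after = loss(W2, H1_cache)
```

In this toy setting the cached-gradient steps still make progress on the objective even though `H1_cache` is stale between refreshes, which is the intuition behind the paper's bounded-bias convergence result.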
Community Implementations: [13 code implementations](https://www.catalyzex.com/paper/arxiv:2109.13995/code)