Graph Neural Networks Gone Hogwild

23 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: graph neural network
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Graph neural networks (GNNs) constitute a dominant class of architectures for modelling graph-structured data. Message-passing GNNs in particular appear ideal for applications where distributed inference is desired, since node updates can be performed locally. Distributed inference of GNNs over enormous graphs is one conspicuous example. In this work, we are particularly motivated by the view that GNNs can be interpreted as parametric communication policies between agents which collectively solve a distributed optimization problem (e.g., in robotic swarms or sensor networks). For these applications, node synchrony and central control are undesirable, since they create communication bottlenecks and reduce fault tolerance and scalability. We examine GNN inference under asynchrony and find that most GNNs produce arbitrarily incorrect predictions in this regime. A notable exception is GNNs that cast message passing as a fixed-point iteration with contractive update functions. We propose a novel GNN architecture, energy GNNs, in which node embeddings are computed by minimizing a scalar-valued convex "energy" function. By framing message passing as convex optimization, we unlock a richer class of update functions that preserve robustness under asynchronous execution. Empirically, energy GNNs outperform other asynchrony-tolerant GNNs on a variety of tasks across both synthetic and real-world datasets.
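The fixed-point view mentioned in the abstract is the key to asynchrony tolerance: if each node's update is a contraction, the iteration has a unique fixed point that any fair asynchronous schedule reaches. Below is a minimal NumPy sketch of that property only, not the authors' energy GNN; the tanh update rule, the spectral-norm-scaled weights W and U, and the random graph are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's implementation): a message-passing
# update that is a contraction converges to the same node embeddings under any
# asynchronous update schedule. The update rule and weights below are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4                                   # nodes, embedding dimension
A = (rng.random((n, n)) < 0.4).astype(float)  # random undirected adjacency
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)
deg = np.maximum(A.sum(1, keepdims=True), 1.0)  # clamp to avoid divide-by-zero
X = rng.standard_normal((n, d))               # fixed node features

W = rng.standard_normal((d, d))
W *= 0.9 / np.linalg.norm(W, 2)               # spectral norm < 1 => contraction
U = rng.standard_normal((d, d))

def update(H, v):
    """Local update for node v: contractive map of the neighbour mean."""
    msg = (A[v] @ H) / deg[v]                 # mean of neighbour embeddings
    return np.tanh(msg @ W.T + X[v] @ U.T)

def run(seed, steps=3000):
    """Update one node at a time in a random (asynchronous) order."""
    H = np.zeros((n, d))
    sched = np.random.default_rng(seed)
    for _ in range(steps):
        v = int(sched.integers(n))
        H[v] = update(H, v)
    return H

# Two different asynchronous schedules reach (numerically) the same fixed point.
H1, H2 = run(1), run(2)
print(np.abs(H1 - H2).max())                  # ~0: embeddings are schedule-independent
```

Removing the spectral-norm constraint on W breaks the contraction guarantee, and the two schedules need no longer agree; this is the failure mode the abstract attributes to most GNNs under asynchronous execution.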
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: pdf
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8497