Asynchronous Algorithmic Alignment with Cocycles

Published: 18 Jun 2023, Last Modified: 02 Jul 2023, TAGML 2023 Poster
Keywords: algorithmic reasoning, graph neural networks, category theory, bellman-ford, cocycles, dynamic programming
TL;DR: We contribute to the theory of algorithmic alignment by relaxing the constraint that computation must be synchronous. Indeed, target algorithms are often highly asynchronous, so there is no reason (G)NNs need to be either.
Abstract: State-of-the-art neural algorithmic reasoners make use of message passing in graph neural networks (GNNs). But typical GNNs blur the distinction between the definition and invocation of the message function, forcing a node to send messages to its neighbours at every layer, synchronously. When applying GNNs to learn to execute dynamic programming algorithms, however, on most steps only a handful of the nodes have meaningful updates to send. Hence, one runs the risk of inefficiency by sending too much irrelevant data across the graph, with many intermediate GNN steps having to learn identity functions. In this work, we explicitly separate the concepts of node state update and message function invocation. With this separation, we obtain a mathematical formulation that allows us to reason about asynchronous computation in both algorithms and neural networks.
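The asynchrony argument in the abstract can be illustrated with a classical target algorithm from the keywords, Bellman-Ford. A minimal sketch (not the paper's model, and the function name and queue-based formulation are our own illustration): a worklist variant in which only nodes whose state actually changed re-invoke their message function, mirroring the separation of state update from message invocation.

```python
from collections import deque

def bellman_ford_async(n, edges, source):
    """Queue-based Bellman-Ford: only nodes with a meaningful update
    send messages to their neighbours, instead of every node firing
    synchronously at every step (a sketch of the asynchrony idea)."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    # adjacency list: node -> list of (neighbour, edge weight)
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
    queue = deque([source])       # worklist of nodes with fresh updates
    in_queue = [False] * n
    in_queue[source] = True
    while queue:
        u = queue.popleft()
        in_queue[u] = False
        # invoke the message function only from an updated node
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w   # node state update
                if not in_queue[v]:
                    queue.append(v)
                    in_queue[v] = True
    return dist
```

In a synchronous formulation every node relaxes its edges on every round; here, nodes whose distance estimate is unchanged stay silent, which is the inefficiency the abstract points to when GNN layers must learn identity functions.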
Supplementary Materials: pdf
Type Of Submission: Extended Abstract (4 pages, non-archival)
Submission Number: 74