Graph Neural Networks are Dynamic Programmers

Published: 25 Mar 2022, Last Modified: 05 May 2023
GTRL 2022 Poster
Keywords: algorithmic reasoning, graph neural networks, category theory, bellman-ford, integral transform, pullback, pushforward, monads, message passing, dynamic programming
TL;DR: We use category theory and abstract algebra to further uncover the relationship between graph neural networks and dynamic programming, which was previously argued only informally over specific examples.
Abstract: Recent advances in neural algorithmic reasoning with graph neural networks (GNNs) are propped up by the notion of algorithmic alignment. Broadly, a neural network will be better at learning to execute a reasoning task (in terms of sample complexity) if its individual components align well with the target algorithm. Specifically, GNNs are claimed to align with dynamic programming (DP), a general problem-solving strategy which expresses many polynomial-time algorithms. However, has this alignment truly been demonstrated and theoretically quantified? Here we show, using methods from category theory and abstract algebra, that there exists an intricate connection between GNNs and DP, going well beyond the initial observations over individual algorithms such as Bellman-Ford. Exposing this connection, we easily verify several prior findings in the literature, and hope it will serve as a foundation for building stronger algorithmically aligned GNNs.
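To make the claimed alignment concrete, here is a minimal illustrative sketch (not the paper's construction, and not its categorical formalism): the Bellman-Ford dynamic program written in message-passing form, where each DP iteration plays the role of a GNN layer, edge relaxations play the role of messages, and min is the aggregation. The function name, toy graph, and edge weights are made-up example inputs.

```python
# Illustrative sketch (assumed example, not from the paper): Bellman-Ford's
# DP update expressed as synchronous message passing over a directed graph.

def bellman_ford_as_message_passing(num_nodes, edges, source):
    """edges: list of (u, v, w) directed edges with weight w."""
    INF = float("inf")
    dist = [INF] * num_nodes
    dist[source] = 0.0
    for _ in range(num_nodes - 1):      # DP iterations correspond to GNN layers
        new_dist = list(dist)
        for u, v, w in edges:
            # "message" from u to v, "aggregated" with min (the DP update)
            new_dist[v] = min(new_dist[v], dist[u] + w)
        dist = new_dist                 # synchronous per-layer state update
    return dist

# Example: shortest paths from node 0 on a toy graph.
edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 1.0)]
print(bellman_ford_as_message_passing(4, edges, 0))  # [0.0, 3.0, 1.0, 4.0]
```

The structural match, per-edge messages combined by a commutative, associative aggregator followed by a node-wise update, is the kind of correspondence the paper formalizes and generalizes beyond this single algorithm.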