Leveraging Classical Algorithms for Graph Neural Networks

Published: 23 Oct 2025, Last Modified: 08 Nov 2025 · LOG 2025 Poster · CC BY 4.0
Keywords: Graph Neural Networks, Neural Algorithmic Reasoning
Abstract: Neural networks excel at processing unstructured data but often fail to generalise out of distribution, whereas classical algorithms guarantee correctness but lack flexibility. We explore whether pretraining Graph Neural Networks (GNNs) on classical algorithms can improve their performance on molecular property prediction tasks from the Open Graph Benchmark: $\textit{ogbg-molhiv}$ (HIV inhibition) and $\textit{ogbg-molclintox}$ (clinical toxicity). GNNs trained on 24 classical algorithms from the CLRS Algorithmic Reasoning Benchmark are used to initialise and freeze selected layers of a second GNN for molecular prediction. Compared to a randomly initialised baseline, the pretrained models achieve consistent wins or ties, with pretraining on the $\textit{Segments Intersect}$ algorithm yielding a 6\% absolute gain on $\textit{ogbg-molhiv}$ and $\textit{Dijkstra}$ pretraining achieving a 3\% gain on $\textit{ogbg-molclintox}$. These results demonstrate that embedding classical algorithmic priors into GNNs provides useful inductive biases, boosting performance on complex, real-world graph data.
Submission Type: Extended abstract (max 4 main pages).
Submission Number: 40
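
The transfer setup described in the abstract can be illustrated with a small sketch: a GNN whose processor (message-passing) layers are initialised from a model pretrained on a CLRS algorithm and then frozen, while the molecular encoder and decoder are trained on the OGB task. The code below is a minimal sketch under assumed names and shapes (`MPNNProcessor`, `MoleculeGNN`, dense adjacency, hidden size 128); these are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class MPNNProcessor(nn.Module):
    """Message-passing processor block (the part shared with CLRS pretraining).

    Assumed architecture for illustration only.
    """
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.msg = nn.Linear(hidden_dim, hidden_dim)
        self.upd = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, h, adj):
        # h: [num_nodes, hidden_dim]; adj: [num_nodes, num_nodes] dense adjacency.
        agg = adj @ torch.relu(self.msg(h))                   # aggregate neighbour messages
        return torch.relu(self.upd(torch.cat([h, agg], -1)))  # update node states

class MoleculeGNN(nn.Module):
    """Task network: trainable encoder/decoder around a frozen pretrained processor."""
    def __init__(self, in_dim: int, hidden_dim: int, num_tasks: int):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hidden_dim)    # molecular features -> hidden
        self.processor = MPNNProcessor(hidden_dim)      # initialised from CLRS pretraining
        self.decoder = nn.Linear(hidden_dim, num_tasks)

    def forward(self, x, adj):
        h = torch.relu(self.encoder(x))
        h = self.processor(h, adj)
        return self.decoder(h.mean(dim=0))               # mean readout -> graph-level logits

# Stand-in for a processor trained on a CLRS task (e.g. Dijkstra); in practice the
# weights would be loaded from the algorithmic pretraining checkpoint.
clrs_processor = MPNNProcessor(hidden_dim=128)

model = MoleculeGNN(in_dim=9, hidden_dim=128, num_tasks=1)
model.processor.load_state_dict(clrs_processor.state_dict())  # copy pretrained weights
for p in model.processor.parameters():
    p.requires_grad = False                                    # freeze transferred layers
```

With the processor frozen, only the encoder and decoder receive gradient updates during fine-tuning on the molecular task, so the algorithmic prior acquired during CLRS pretraining is preserved.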