Link Prediction with Untrained Message Passing Layers

Published: 23 Oct 2025, Last Modified: 06 Nov 2025 · LOG 2025 Poster · CC BY 4.0
Keywords: link prediction, graph neural networks, untrained message passing layers, path-based similarity measures
TL;DR: Untrained message passing layers in graph neural networks can match or outperform their trained counterparts for link prediction, offering efficiency and interpretability, especially with high-dimensional features.
Abstract: In this work, we explore the use of untrained message passing layers in graph neural networks for link prediction. The untrained message passing layers we consider are derived from widely used graph neural network architectures by removing the trainable parameters and nonlinearities from their message passing layers. Experimentally, we find that untrained message passing layers can lead to competitive and even superior link prediction performance compared to fully trained message passing layers while being more efficient, especially in the presence of high-dimensional features. We also provide a theoretical analysis of untrained message passing layers in the context of link prediction and show that the inner products of features produced by untrained message passing layers relate to common-neighbour and path-based topological measures that are widely used for link prediction. As such, untrained message passing layers offer a more efficient alternative to trained message passing layers in link prediction tasks, with clearer theoretical links to classical path-based heuristics.
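
The following is a minimal sketch, not the paper's released code, of the idea described in the abstract: parameter-free, nonlinearity-free propagation (in the spirit of a GCN/SGC layer with its weights removed) followed by inner-product link scoring. The function names, the choice of symmetric normalisation, and the toy graph are assumptions made for illustration only.

```python
# Sketch (assumed, not the authors' implementation): untrained, linear message
# passing, i.e. a GCN-style layer with trainable weights and nonlinearities removed.
# Link scores are inner products of the propagated features; with identity input
# features, A @ A.T recovers common-neighbour counts, and higher propagation
# depths correspond to path-based similarity measures.
import numpy as np

def untrained_propagate(adj: np.ndarray, feats: np.ndarray, k: int = 2) -> np.ndarray:
    """Apply k rounds of symmetric-normalised, parameter-free propagation."""
    a_hat = adj + np.eye(adj.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    h = feats
    for _ in range(k):                                 # no weights, no nonlinearity
        h = a_norm @ h
    return h

def link_scores(h: np.ndarray) -> np.ndarray:
    """Score every node pair by the inner product of its propagated features."""
    return h @ h.T

if __name__ == "__main__":
    # Toy 5-node path graph; identity features make the link to
    # common-neighbour / path counts explicit.
    adj = np.zeros((5, 5))
    for u, v in [(0, 1), (1, 2), (2, 3), (3, 4)]:
        adj[u, v] = adj[v, u] = 1.0
    h = untrained_propagate(adj, np.eye(5), k=2)
    print(np.round(link_scores(h), 3))                 # higher score = more likely link
```

In this sketch the only design choices are the propagation depth k and the feature matrix; there is nothing to train, which is where the claimed efficiency gain comes from.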
Software: https://doi.org/10.5281/zenodo.15019863
Submission Type: Full paper proceedings track submission (max 9 main pages).
Submission Number: 25