Keywords: Network Embedding, Link Prediction, Graph Neural Network
Abstract: Due to their impressive performance across a wide range of graph-related tasks, graph neural networks (GNNs) have emerged as the dominant approach to link prediction and are often assumed to outperform network embedding methods. However, their performance is hindered by a discrepancy between training and inference and by a strong reliance on high-quality node features. In this paper, we revisit classical network embedding methods within a unified training framework and highlight their conceptual continuity with the paradigms used in GNN-based link prediction. We further conduct an extensive empirical evaluation of three classical methods (LINE, DeepWalk, and node2vec) on standard link prediction benchmarks. Our findings suggest that the reported superiority of GNNs may be overstated, partly due to inconsistent training protocols and suboptimal hyperparameter choices for embedding-based methods. Notably, when incorporated into a modern link prediction framework with minimal configuration changes, these classical methods achieve state-of-the-art performance on both undirected and directed link prediction tasks. Despite being proposed nearly a decade ago, they outperform recent GNN models on 13 of 16 benchmark datasets. These results highlight the need for more rigorous and equitable evaluation practices in graph learning research.
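For readers unfamiliar with embedding-based link prediction, the following is a minimal sketch of the general recipe the abstract refers to: unsupervised node embeddings trained on the observed graph, then scored by a lightweight edge classifier. It is not the authors' exact pipeline; the DeepWalk-style uniform walks, Hadamard edge features, logistic-regression scorer, and all hyperparameter values are illustrative assumptions, and it relies on networkx, gensim, and scikit-learn.

# Minimal sketch (illustrative, not the paper's framework): DeepWalk-style
# random walks + skip-gram embeddings, edges scored by logistic regression
# over Hadamard (element-wise product) features of the endpoint embeddings.
import random
import numpy as np
import networkx as nx
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def random_walks(G, num_walks=10, walk_length=40, seed=0):
    # Uniform random walks over the training graph (DeepWalk-style).
    rng = random.Random(seed)
    walks, nodes = [], list(G.nodes())
    for _ in range(num_walks):
        rng.shuffle(nodes)
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                nbrs = list(G.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append([str(n) for n in walk])
    return walks

# Toy graph; hold out 10% of edges as positive test links, sample equal negatives.
G = nx.barabasi_albert_graph(500, 5, seed=0)
edges = list(G.edges())
random.Random(0).shuffle(edges)
test_pos = edges[: len(edges) // 10]
G_train = G.copy()
G_train.remove_edges_from(test_pos)
non_edges = list(nx.non_edges(G))
test_neg = random.Random(1).sample(non_edges, len(test_pos))

# Skip-gram embeddings trained on the training graph only (no test leakage).
model = Word2Vec(random_walks(G_train), vector_size=128, window=5,
                 min_count=0, sg=1, workers=4, epochs=5, seed=0)
emb = {n: model.wv[str(n)] for n in G_train.nodes()}

def features(pairs):
    # Hadamard edge features, one of the binary operators proposed in node2vec.
    return np.array([emb[u] * emb[v] for u, v in pairs])

# Fit the edge scorer on retained edges plus sampled non-edges.
train_pos = list(G_train.edges())
train_neg = random.Random(2).sample(non_edges, len(train_pos))
X = np.vstack([features(train_pos), features(train_neg)])
y = np.concatenate([np.ones(len(train_pos)), np.zeros(len(train_neg))])
clf = LogisticRegression(max_iter=1000).fit(X, y)

X_test = np.vstack([features(test_pos), features(test_neg)])
y_test = np.concatenate([np.ones(len(test_pos)), np.zeros(len(test_neg))])
print("test AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))

Swapping the walk generator for biased second-order walks (node2vec) or replacing the walk stage with edge-sampling objectives (LINE) changes only the embedding step; the downstream scoring stays the same, which is what makes these methods easy to drop into modern link prediction frameworks.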
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 16727