Is Attention All You Need for Temporal Link Prediction? A Lightweight Alternative via Learnable Positional Encoding and MLPs
Keywords: Link Prediction, Graph Transformer, Positional Encoding
Abstract: Link prediction is of key importance in many real-world applications such as social network analysis and recommender systems. To achieve state-of-the-art (SOTA) performance, many recent works adapt the attention mechanism to structured data for link prediction, yet dense or relational attention is often unaffordable on large-scale graphs. Moreover, in realistic settings, time-evolving topology and features raise further challenges for the efficiency and effectiveness of attention mechanisms. Despite its expressive power, we find that attention may not be as irreplaceable as expected for temporal graph representation learning, at least not for temporal link prediction. Specifically, we discover that a carefully designed, simple positional encoding enables MLPs to exploit attributed graph information and outperform complex graph Transformers. Hence, we propose a simple temporal link prediction model named SimpleTLP. Concretely, SimpleTLP adapts the Fourier Transform on temporal graphs to learn informative positional encodings; we then (1) prove that this learning scheme makes the positional encoding preserve the temporal graph topology from a spatial-temporal spectral viewpoint, (2) verify that MLPs can fully exploit this encoding and match or even surpass Transformers, (3) vary the initial positional encoding inputs to show robustness, (4) analyze the theoretical complexity and show lower empirical running time than SOTA baselines, and (5) comprehensively demonstrate superior temporal link prediction performance on 13 classic datasets against 10 algorithms, in both transductive and inductive settings, using 3 different sampling strategies. SimpleTLP also achieves leading performance on the large-scale TGB benchmark (the newest TGB 2.0).
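The abstract describes SimpleTLP only at a high level. As a rough illustration of the general idea it conveys (a learnable Fourier-style positional encoding fed to an MLP link scorer), the following is a minimal sketch; it is not the authors' implementation. The module names (`LearnableFourierPE`, `MLPLinkPredictor`), dimensions, and the way node features and timestamps are combined are assumptions made here for illustration only.

```python
# Minimal, illustrative sketch (not the paper's code): learnable Fourier-style
# positional encoding of time, concatenated with node features and scored by an MLP.
import torch
import torch.nn as nn


class LearnableFourierPE(nn.Module):
    """Maps a scalar timestamp to a learnable Fourier feature vector (assumed design)."""
    def __init__(self, dim: int):
        super().__init__()
        assert dim % 2 == 0
        # Learnable frequencies and phases, randomly initialized.
        self.freq = nn.Parameter(torch.randn(dim // 2))
        self.phase = nn.Parameter(torch.zeros(dim // 2))

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (batch,) timestamps -> (batch, dim) Fourier features.
        angles = t.unsqueeze(-1) * self.freq + self.phase
        return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)


class MLPLinkPredictor(nn.Module):
    """Scores a candidate edge (u, v) at time t from node features plus the encoding."""
    def __init__(self, feat_dim: int, pe_dim: int, hidden: int = 128):
        super().__init__()
        self.pe = LearnableFourierPE(pe_dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim + pe_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x_u: torch.Tensor, x_v: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        z = torch.cat([x_u, x_v, self.pe(t)], dim=-1)
        return self.mlp(z).squeeze(-1)  # logit for "edge (u, v) exists around time t"


if __name__ == "__main__":
    # Toy usage: score 4 candidate links with 16-dim node features.
    model = MLPLinkPredictor(feat_dim=16, pe_dim=32)
    x_u, x_v = torch.randn(4, 16), torch.randn(4, 16)
    t = torch.rand(4)
    print(torch.sigmoid(model(x_u, x_v, t)))  # predicted link probabilities
```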
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9609