Keywords: Temporal Network, Representation Learning, Graph Neural Networks
TL;DR: We propose a scalable and efficient framework for learning temporal graph representations that employs a novel "forward recent sampling" strategy to aggregate historical node interactions.
Abstract: Temporal graph representation learning (TGRL) is essential for modeling dynamic systems in real-world networks. However, traditional TGRL methods, despite their effectiveness, often face significant computational challenges and inference delays due to the inefficient sampling of temporal neighbors. Conventional sampling methods typically involve backtracking through the interaction history of each node. In this paper, we propose a novel TGRL framework, No-Looking-Back (NLB), which overcomes these challenges by introducing a forward recent sampling strategy. This strategy eliminates the need to backtrack through historical interactions by utilizing a GPU-executable, size-constrained hash table for each node. The hash table records a down-sampled set of recent interactions, enabling rapid query responses with minimal inference latency. The maintenance of this hash table is highly efficient, operating with $O(1)$ complexity. Fully compatible with GPU processing, NLB maximizes programmability, parallelism, and power efficiency. Empirical evaluations demonstrate that NLB not only matches or surpasses state-of-the-art methods in accuracy for tasks like link prediction and node classification across six real-world datasets but also achieves 1.32-4.40$\times$ faster training, 1.2-7.94$\times$ greater energy efficiency, and 1.63-12.95$\times$ lower inference latency compared to competitive baselines.
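The abstract's core mechanism can be illustrated with a minimal sketch: a fixed-size per-node table that records a down-sampled set of recent interactions and is updated "forward in time" as events arrive, so neighbor queries never backtrack through history. The class name, the `replace_prob` knob, and the hashed slot choice below are illustrative assumptions for exposition, not the authors' exact implementation (see the linked repository for that).

```python
import numpy as np

class ForwardRecentSampler:
    """Illustrative sketch of NLB-style forward recent sampling:
    each node keeps a size-constrained table of down-sampled recent
    interactions, maintained in O(1) per event (names are hypothetical)."""

    def __init__(self, num_nodes, table_size, replace_prob=1.0, seed=0):
        self.table_size = table_size
        self.replace_prob = replace_prob  # hypothetical down-sampling knob
        self.rng = np.random.default_rng(seed)
        # Slot contents: neighbor id and interaction timestamp (-1 = empty).
        self.neighbors = np.full((num_nodes, table_size), -1, dtype=np.int64)
        self.timestamps = np.full((num_nodes, table_size), -1.0, dtype=np.float64)

    def update(self, src, dst, t):
        """On a new interaction (src, dst, t), push dst into src's table;
        no backtracking over src's interaction history is ever needed."""
        slot = hash((dst, int(t))) % self.table_size  # hashed slot choice
        if self.neighbors[src, slot] == -1 or self.rng.random() < self.replace_prob:
            self.neighbors[src, slot] = dst
            self.timestamps[src, slot] = t

    def query(self, node):
        """Return the down-sampled recent neighbors of `node` in O(table_size)."""
        mask = self.neighbors[node] != -1
        return self.neighbors[node][mask], self.timestamps[node][mask]

# Example usage of the sketch:
sampler = ForwardRecentSampler(num_nodes=100, table_size=8)
sampler.update(src=3, dst=7, t=1.0)
sampler.update(src=3, dst=9, t=2.5)
print(sampler.query(3))
```

Because every slot lives in a dense array indexed by node and slot, the same update and query logic maps directly onto GPU-resident tensors, which is what enables the low inference latency the abstract reports.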
Supplementary Materials: zip
Submission Type: Full paper proceedings track submission (max 9 main pages).
Software: https://github.com/Graph-COM/NLB
Poster: jpg
Poster Preview: jpg
Submission Number: 27