On the Generalization of Temporal Graph Learning with Theoretical Insights

17 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: temporal graph learning, generalization, theoretical analysis
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Temporal graph learning (TGL) is a widely used technique in many real-world applications, but its theoretical foundations remain largely under-explored. In this paper, we fill this gap by studying the generalization ability of different TGL algorithms (e.g., GNN-based, RNN-based, and memory-based methods) in the finite-width over-parameterized regime. We establish the connection between the generalization error of TGL algorithms and (1) "the number of layers/steps" in GNN-/RNN-based TGL methods and (2) "the feature-label alignment (FLA) score", where FLA serves as a proxy for expressive power and explains the performance of memory-based methods. Guided by our theoretical analysis, we propose Simplified-Temporal-Graph-Network (SToNe), which simultaneously enjoys a small generalization error, better overall performance, and lower model complexity. Extensive experiments on real-world datasets demonstrate the effectiveness of SToNe. This paper provides critical insights into TGL from a theoretical perspective and paves the way for designing practical TGL algorithms in future studies.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 947