Temporal graph models fail to capture global temporal dynamics

Published: 20 Oct 2023, Last Modified: 23 Nov 2023, TGL Workshop 2023 Long Paper
Keywords: Temporal Graph Modelling, Temporal Graph Benchmarking
TL;DR: Existing temporal graph models fail to learn global temporal effects and are outperformed by a very simple baseline; improved metrics and training schemes are proposed.
Abstract: A recently released Temporal Graph Benchmark is analyzed in the context of Dynamic Link Property Prediction. We outline our observations and propose a trivial, optimization-free baseline of "recently popular nodes" that outperforms other methods on medium- and large-scale datasets in the Temporal Graph Benchmark. We propose two measures based on Wasserstein distance which quantify the strength of short-term and long-term global dynamics in datasets. By analyzing our unexpectedly strong baseline, we show how standard negative-sampling evaluation can be unsuitable for datasets with strong temporal dynamics. We also show how simple negative sampling can lead to model degeneration during training, resulting in fully saturated predictions of temporal graph networks that are impossible to rank. We propose improved negative sampling schemes for both training and evaluation and demonstrate their usefulness. We conduct a comparison with a model trained non-contrastively, without negative sampling. Our results provide a challenging baseline and indicate that temporal graph network architectures need deep rethinking for use in problems with significant global dynamics, such as social media, cryptocurrency markets, or e-commerce. We open-source the code for the baselines, measures, and proposed negative sampling schemes.
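To make the "recently popular nodes" idea concrete, below is a minimal Python sketch of what such an optimization-free baseline might look like: candidate destinations are ranked purely by how often they appeared as link destinations within a sliding window of recent events. The class name, window size, and scoring rule are illustrative assumptions, not the authors' exact implementation from the open-sourced code.

```python
from collections import Counter, deque

class RecentlyPopularBaseline:
    """Hypothetical sketch: rank candidate destination nodes by their
    recent popularity (frequency as a destination in a sliding window).
    No parameters are learned, so no optimization is involved."""

    def __init__(self, window_size: int = 10_000):
        self.window_size = window_size
        self.recent_dst = deque()    # recent destination nodes, oldest first
        self.popularity = Counter()  # node id -> count within the window

    def update(self, dst_nodes):
        """Ingest a batch of observed link destinations in time order."""
        for dst in dst_nodes:
            self.recent_dst.append(dst)
            self.popularity[dst] += 1
            if len(self.recent_dst) > self.window_size:
                old = self.recent_dst.popleft()
                self.popularity[old] -= 1
                if self.popularity[old] == 0:
                    del self.popularity[old]

    def score(self, candidate_dst):
        """Higher score = more popular recently; used to rank candidates
        for dynamic link prediction."""
        total = max(len(self.recent_dst), 1)
        return [self.popularity.get(d, 0) / total for d in candidate_dst]


# Usage sketch: stream events chronologically, score before updating.
baseline = RecentlyPopularBaseline(window_size=5)
baseline.update([1, 2, 2, 3, 2])
print(baseline.score([2, 3, 7]))  # node 2 ranks highest, unseen node 7 gets 0
```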
Format: Long paper, up to 8 pages. If the reviewers recommend it to be changed to a short paper, I would be willing to revise my paper to fit within 4 pages.
Submission Number: 13