Towards Better Evaluation for Dynamic Link Prediction

Published: 17 Sept 2022, Last Modified: 23 May 2023
NeurIPS 2022 Datasets and Benchmarks
Readers: Everyone
Keywords: dynamic link prediction, evaluation, dynamic graph representation learning
TL;DR: We propose tools to improve the evaluation of dynamic link prediction, including new datasets, new negative sampling strategies, and a strong baseline.
Abstract: Despite recent successes in learning from static graphs, learning from time-evolving graphs remains an open challenge. In this work, we design new, more stringent evaluation procedures for link prediction specific to dynamic graphs, which reflect real-world considerations, to better compare the strengths and weaknesses of methods. First, we create two visualization techniques to understand the recurring patterns of edges over time and show that many edges recur at later time steps. Based on this observation, we propose a pure memorization-based baseline called EdgeBank. EdgeBank achieves surprisingly strong performance across multiple settings, which highlights that the negative edges used in the current evaluation are easy. To sample more challenging negative edges, we introduce two novel negative sampling strategies that improve robustness and better match real-world applications. Lastly, we introduce six new dynamic graph datasets from a diverse set of domains missing from current benchmarks, providing new challenges and opportunities for future research. Our code repository is accessible at https://github.com/fpour/DGB.git.
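To make the memorization idea behind EdgeBank concrete, here is a minimal sketch: score a query edge 1.0 if it has been observed before (optionally only within a recent time window), and 0.0 otherwise. The class name, method names, and the fixed-window variant shown here are illustrative assumptions rather than the authors' implementation; see the repository above for the actual code.

```python
class EdgeMemoryBaseline:
    """Memorization-style baseline: predict 1.0 iff the edge was seen before."""

    def __init__(self, window=None):
        # window=None keeps every edge ever observed (unlimited memory);
        # a numeric window only counts edges seen within the last `window` time units.
        self.window = window
        self.last_seen = {}  # (src, dst) -> most recent timestamp

    def update(self, src, dst, t):
        # Memorize an observed (positive) edge with its timestamp.
        self.last_seen[(src, dst)] = t

    def predict(self, src, dst, t):
        # Return 1.0 if the edge is in memory (and recent enough), else 0.0.
        ts = self.last_seen.get((src, dst))
        if ts is None:
            return 0.0
        if self.window is not None and t - ts > self.window:
            return 0.0
        return 1.0


# Usage: stream edges chronologically, then score test queries.
bank = EdgeMemoryBaseline(window=None)
for src, dst, t in [(0, 1, 1.0), (1, 2, 2.0), (0, 1, 3.0)]:
    bank.update(src, dst, t)
print(bank.predict(0, 1, 4.0))  # 1.0 -- edge was seen before
print(bank.predict(2, 3, 4.0))  # 0.0 -- never observed
```

Because such a baseline can only score previously seen edges highly, its strong results under the standard setup suggest that randomly sampled negative edges rarely collide with recurring positives, which is the motivation for the harder negative sampling strategies proposed in the paper.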
Supplementary Material: pdf
Contribution Process Agreement: Yes
In Person Attendance: Yes
URL: https://github.com/fpour/DGB
Dataset Url: https://zenodo.org/record/7213796#.Y8QicOzMJB2
License: All datasets are publicly available under the MIT License or the Apache License 2.0. Our code is also available under the MIT License.
Author Statement: Yes