SPACETGN: Augmented Mini-Batch Negative Sampling for Continuous-Time Dynamic Graph Learning

27 Sept 2024 (modified: 19 Jan 2025) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Dynamic Graph Learning; Negative Sampling Strategy
Abstract: Continuous-Time Dynamic Graph (CTDG) learning has significantly advanced link prediction performance by leveraging random negative sampling and incorporating adaptive temporal information. Recent studies attempt to improve performance by using random sampling to obtain hard negative samples, but the quality of such samples is limited by randomness: they cover few categories of negatives and lead to false positive (FP) and false negative (FN) problems. Here we present SPACETGN, a CTDG learning framework with an augmented mini-batch negative sampling (AMNS) strategy and two new feature extraction strategies that derive spatio-temporal locality subgraphs and historical occurrence information to emphasize the graph's temporal discriminative properties. The AMNS strategy samples mini-batches composed of instances that are hard to distinguish from one another (i.e., hard and true negatives with respect to each other) based on the target distribution, thereby effectively augmenting the discriminative features and the diversity of historical and inductive samples. Furthermore, to mitigate the FP and FN problems, SPACETGN employs a conceptually straightforward approach that investigates temporal subgraphs and historical interactions between source and destination nodes, enabling the model to leverage complex and historically accurate interactions among the predicted entities. Our extensive evaluation of dynamic link prediction on seven widely used datasets shows that SPACETGN achieves state-of-the-art performance on most of them, demonstrating its effectiveness in ameliorating model bias.
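The abstract above describes sampling mini-batches that mix hard (e.g., historical) negatives with random ones. The paper's actual AMNS procedure is not reproduced here; the following is only a minimal illustrative sketch of the general idea, in which "historical negatives" are a source node's past interaction partners (hard to distinguish from true future links) and the function name, arguments, and the `history` data layout are all hypothetical, not taken from the paper.

```python
import random

def sample_negatives(src, pos_dst, ts, history, all_dst, n_hist, n_rand, rng):
    """Sample a mix of historical ('hard') and random negatives for a
    positive interaction (src, pos_dst, ts).

    history: dict mapping node -> list of (dst, t) past interactions.
    Historical negatives are destinations that `src` interacted with
    strictly before `ts`, excluding the current positive destination.
    """
    # Hard negatives: past partners of src (excluding the current positive).
    past = {d for (d, t) in history.get(src, []) if t < ts and d != pos_dst}
    hist_negs = rng.sample(sorted(past), min(n_hist, len(past)))

    # Easy negatives: random destinations never seen with src before ts.
    pool = [d for d in all_dst if d != pos_dst and d not in past]
    rand_negs = rng.sample(pool, min(n_rand, len(pool)))

    return hist_negs + rand_negs
```

Mixing both kinds of negatives in each mini-batch, rather than sampling uniformly at random, is what gives the batch a controlled diversity of historical and inductive samples.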
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9703