DyG2Vec: Efficient Representation Learning for Dynamic Graphs

Published: 08 Jan 2024, Last Modified: 17 Sept 2024. Accepted by TMLR.
Abstract: Temporal graph neural networks have shown promising results in learning inductive representations by automatically extracting temporal patterns. However, previous works often rely on complex memory modules or inefficient random-walk methods to construct temporal representations. To address these limitations, we present an efficient yet effective attention-based encoder that leverages temporal edge encodings and window-based subgraph sampling to generate task-agnostic embeddings. Moreover, we propose a joint-embedding architecture using non-contrastive SSL to learn rich temporal embeddings without labels. Experimental results on 7 benchmark datasets indicate that, on average, our model outperforms SoTA baselines on the future link prediction task by 4.23% in the transductive setting and 3.30% in the inductive setting, while requiring 5-10x less training/inference time. Lastly, different aspects of the proposed framework are investigated through experimental analysis and ablation studies. The code is publicly available at https://github.com/huawei-noah/noah-research/tree/master/graph_atlas.
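The minimal PyTorch-style sketch below illustrates the ideas named in the abstract: window-based subgraph sampling over a stream of timestamped edges, a temporal edge encoding, and a non-contrastive joint-embedding SSL loss. All function names and hyperparameters here are hypothetical, and the loss shown is a VICReg-style objective chosen purely as one common example of a non-contrastive joint-embedding loss; none of this reflects the repository's actual API. See the linked code for the authors' implementation.

```python
import torch

# Illustrative sketch only; names and design choices are hypothetical and
# do NOT come from the authors' repository.

def sample_window(edge_index, edge_time, t_end, window):
    """Window-based subgraph sampling: keep edges in [t_end - window, t_end)."""
    mask = (edge_time >= t_end - window) & (edge_time < t_end)
    return edge_index[:, mask], edge_time[mask]

def temporal_edge_encoding(delta_t, dim):
    """Sinusoidal encoding of edge time deltas (Time2Vec-like, illustrative)."""
    freqs = 10.0 ** torch.linspace(0.0, 4.0, dim // 2)   # log-spaced frequencies
    angles = delta_t.unsqueeze(-1) / freqs
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

def noncontrastive_loss(z1, z2, lam=25.0, mu=25.0, nu=1.0, eps=1e-4):
    """VICReg-style joint-embedding SSL loss on two views z1, z2 of shape [N, D]."""
    inv = ((z1 - z2) ** 2).mean()                        # invariance: match the views
    def variance(z):                                     # keep per-dimension variance up
        std = torch.sqrt(z.var(dim=0) + eps)
        return torch.relu(1.0 - std).mean()
    def covariance(z):                                   # decorrelate embedding dimensions
        z = z - z.mean(dim=0)
        n, d = z.shape
        cov = (z.T @ z) / (n - 1)
        off_diag = cov - torch.diag(torch.diag(cov))
        return (off_diag ** 2).sum() / d
    return (lam * inv
            + mu * (variance(z1) + variance(z2))
            + nu * (covariance(z1) + covariance(z2)))
```

In a training loop, one would encode two views of the same sampled window with the attention-based encoder and minimize such a loss; because the objective is non-contrastive, no negative samples or memory modules are required, which is consistent with the efficiency argument made in the abstract.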
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Resolved AC comments: (1) Downplayed SSL in the title and throughout the paper; moved some SSL experiments to the appendix (see A.2.5-6). (2) Added an experiment explaining why DyG2Vec beats baselines (Table 4). (3) Added an experiment where temporal edge encodings are added to baselines (Table 6). (4) Improved readability throughout the paper. (5) Added a link to the public repository.
Code: https://github.com/huawei-noah/noah-research/tree/master/graph_atlas
Assigned Action Editor: ~Giannis_Nikolentzos1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1610