DVGMAE: Self-Supervised Dynamic Variational Graph Masked Autoencoder

Published: 2025 · Last Modified: 29 Jan 2026 · IEEE Trans. Neural Networks Learn. Syst. 2025 · CC BY-SA 4.0
Abstract: Although contrastive self-supervised learning (SSL) on dynamic graphs has achieved significant success, its heavy reliance on data augmentation and training tricks has been a persistent pain point. Generative SSL, especially masked autoencoders (MAEs), has recently produced promising results and can avoid these issues. However, research on MAEs for dynamic graphs remains largely unexplored due to the following challenges: 1) how to design an effective masking strategy for dynamic graphs and 2) how to design a decoder that retains temporal dependencies when graphs are perturbed. In this article, we propose DVGMAE, a novel dynamic variational graph masked autoencoder that addresses these challenges. DVGMAE simultaneously captures evolving behaviors and topological features via an innovative masking strategy and an elaborate decoder. Specifically, we first apply a temporal-aware masking strategy to the edges of each snapshot based on probabilities updated from historical mask information. This strategy mitigates potential masking bias in dynamic graphs. We then design a globally enhanced decoder to recover the temporal and spatial information of each snapshot. Extensive experiments demonstrate that DVGMAE outperforms existing state-of-the-art methods on various tasks across different datasets.
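The abstract describes the temporal-aware masking strategy only at a high level (edge masks for each snapshot, with probabilities updated from historical mask information). The sketch below is a minimal, hypothetical illustration of one way such a scheme could work; the function name, the decay-based weighting, and the history bookkeeping are assumptions, not the paper's actual formulation.

```python
import numpy as np

def temporal_aware_edge_mask(edge_ids, mask_history, mask_ratio=0.3,
                             decay=0.9, rng=None):
    """Illustrative sketch of one temporal-aware edge-masking step.

    edge_ids     : list/array of edge identifiers in the current snapshot
    mask_history : dict mapping edge id -> count of times it was masked in
                   earlier snapshots (hypothetical bookkeeping; the paper's
                   exact update rule is not given in the abstract)
    mask_ratio   : fraction of edges to mask in this snapshot
    decay        : down-weighting of frequently masked edges, so the mask
                   spreads across edges over time (assumption)
    """
    rng = rng or np.random.default_rng()

    # Edges masked often in earlier snapshots get a lower probability now,
    # which is one plausible way to reduce masking bias across snapshots.
    hist = np.array([mask_history.get(e, 0.0) for e in edge_ids])
    weights = decay ** hist
    probs = weights / weights.sum()

    n_mask = max(1, int(mask_ratio * len(edge_ids)))
    masked_idx = rng.choice(len(edge_ids), size=n_mask, replace=False, p=probs)

    # Update the running history for the next snapshot.
    for i in masked_idx:
        mask_history[edge_ids[i]] = mask_history.get(edge_ids[i], 0.0) + 1.0

    mask = np.zeros(len(edge_ids), dtype=bool)
    mask[masked_idx] = True
    return mask, mask_history


# Example usage over a short sequence of snapshots (toy data).
history = {}
for snapshot_edges in [["a-b", "b-c", "c-d", "a-d"],
                       ["a-b", "b-c", "c-d", "d-e"]]:
    mask, history = temporal_aware_edge_mask(snapshot_edges, history)
    print(mask)
```

In this toy version, the history-aware weights play the role of the "updated probabilities derived from historical mask information" mentioned in the abstract; the decoder design is not sketched here because the abstract gives no detail about it.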