Abstract: Unsupervised Graph Contrastive Learning (GCL) aims to derive graph representations for downstream tasks without labeled data. While GCL methods have made significant progress, they suffer from limitations including noise amplification and the neglect of global structural and semantic information. In this paper, we propose Diffusion Model-Enhanced Graph Contrastive Learning (DiffGCL), which, for the first time, integrates a diffusion model with GCL to overcome these limitations and enhance graph representation learning. Specifically, a graph-specific diffusion module is designed to explicitly capture global structural and semantic patterns through controlled Gaussian noise injection and an attention-based graph denoising network, while a GCL module captures local discriminative information. By integrating the diffusion model with GCL, a shared graph encoder acquires both global and local structural and semantic information while efficiently removing noise, leading to significant performance gains. Experimental results on real-world datasets demonstrate the effectiveness of DiffGCL, which outperforms state-of-the-art competitors in graph classification accuracy.
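To make the two ingredients named in the abstract concrete, the following is a minimal, hedged sketch of (a) forward diffusion, i.e. controlled Gaussian noise injection into node features under a standard DDPM-style noise schedule, and (b) an InfoNCE-style contrastive loss between two views, a common choice in GCL. All function names, the linear noise schedule, and the toy data are illustrative assumptions, not the paper's actual architecture (the attention-based denoising network and shared encoder are omitted).

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(x, t, alpha_bar):
    """Forward diffusion step: inject Gaussian noise into features x at step t.

    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps  (standard DDPM form)
    """
    noise = rng.standard_normal(x.shape)
    x_t = np.sqrt(alpha_bar[t]) * x + np.sqrt(1.0 - alpha_bar[t]) * noise
    return x_t, noise

def info_nce(z1, z2, tau=0.5):
    """InfoNCE contrastive loss between two views of node embeddings.

    Matching rows of z1 and z2 are treated as positive pairs; all other
    rows serve as negatives.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                                   # pairwise similarities
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                      # positives on the diagonal

# Toy node features and an assumed linear noise schedule (100 diffusion steps).
x = rng.standard_normal((8, 4))                             # 8 nodes, 4-dim features
betas = np.linspace(1e-4, 0.02, 100)
alpha_bar = np.cumprod(1.0 - betas)

x_noisy, eps = forward_diffuse(x, t=50, alpha_bar=alpha_bar)
loss = info_nce(x, x_noisy)                                 # clean vs. noised view
print(float(loss))
```

In a full pipeline, a denoising network trained to predict `eps` from `x_noisy` would supply the global signal, while the contrastive loss supplies the local discriminative signal; the two objectives would be backpropagated through one shared encoder.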