Rethinking Spectral Augmentation for Contrast-based Graph Self-Supervised Learning

Published: 16 Feb 2025, Last Modified: 16 Feb 2025. Accepted by TMLR. License: CC BY 4.0
Abstract: The recent surge in contrast-based graph self-supervised learning has prominently featured an intensified exploration of spectral cues. Spectral augmentation, which involves modifying a graph's spectral properties such as eigenvalues or eigenvectors, is widely believed to enhance model performance. However, an intriguing paradox emerges, as methods grounded in seemingly conflicting assumptions about the spectral domain demonstrate notable gains in learning performance. Through extensive empirical studies, we find that simple edge perturbations - random edge dropping for node-level tasks and random edge adding for graph-level tasks - consistently yield comparable or superior performance while being significantly more computationally efficient. This suggests that the computational overhead of sophisticated spectral augmentations may not justify their practical benefits. Our theoretical analysis of the InfoNCE loss bounds for shallow GNNs further supports this observation. The proposed insights represent a significant step forward for the field, potentially refining the understanding and implementation of graph self-supervised learning.
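The two edge perturbations described in the abstract are simple to implement. Below is a minimal sketch, assuming a PyTorch Geometric-style `edge_index` tensor of shape `[2, E]`; the function names and the drop/add ratios are illustrative defaults, not the paper's exact configuration.

```python
import torch


def drop_edges(edge_index: torch.Tensor, drop_ratio: float = 0.2) -> torch.Tensor:
    """Randomly remove a fraction of edges (node-level augmentation)."""
    num_edges = edge_index.size(1)
    keep_mask = torch.rand(num_edges) >= drop_ratio
    return edge_index[:, keep_mask]


def add_edges(edge_index: torch.Tensor, num_nodes: int, add_ratio: float = 0.2) -> torch.Tensor:
    """Randomly add edges between uniformly sampled node pairs (graph-level augmentation)."""
    num_new = int(edge_index.size(1) * add_ratio)
    new_edges = torch.randint(0, num_nodes, (2, num_new))
    return torch.cat([edge_index, new_edges], dim=1)
```

Either perturbed view can then be fed to the encoder alongside the original graph to form the contrastive pair, avoiding the eigendecomposition cost incurred by spectral augmentations.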
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Jundong_Li2
Submission Number: 3815