Reversible Column Disentangled Augmentation Tricks for Graph Contrastive Learning

Published: 08 Oct 2025, Last Modified: 01 Nov 2025 · IEEE Transactions on Multimedia · CC BY-SA 4.0
Abstract: Graph contrastive learning (GCL) has garnered significant attention for learning graph representations in a self-supervised manner, without label information, while generalizing well to downstream tasks. However, data augmentation for graph-structured data is more challenging than for images. We argue that simple data augmentations for GCL risk damaging the intrinsic structure of the graph or producing views that lack diversity. Additionally, typical layer-by-layer feature propagation compresses or discards feature information that is irrelevant to the pretext task, resulting in unstable and suboptimal performance on downstream tasks that are not aligned with it. In this paper, we propose a novel framework, termed Rev-GCL, which aims to maintain multi-level graph semantics without information loss via reversible column disentangled model augmentation tricks. Specifically, we propose a multi-column network with reversible connections as our encoder, where all columns share the same structure and each receives a copy of the input graph. The reversible connections between columns ensure lossless transmission, allowing representations to be gradually disentangled from low-level to high-level semantics. Based on this, we introduce two model augmentation tricks, random propagation and asymmetric column, to construct different sibling encoders. These tricks generate diverse graph views that filter out high-frequency noise during contrastive learning, thereby yielding more generalizable node representations. Extensive experiments on eight commonly used benchmark datasets demonstrate that Rev-GCL consistently outperforms existing state-of-the-art methods on node classification, node clustering, and link prediction tasks.
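The abstract does not include code, but the sketch below illustrates the two mechanisms it names: a reversible inter-column connection and a random-propagation view generator. All class and function names are hypothetical, the RevCol-style update c_t = F(x) + gamma * c_{t-1} and the GRAND-style node-dropping propagation are assumed formulations for illustration, and none of this should be read as the authors' implementation.

```python
# Hedged sketch of the ideas named in the abstract; assumed formulations,
# not the Rev-GCL reference code. The asymmetric-column trick is omitted.
import torch
import torch.nn as nn


class ReversibleColumnLevel(nn.Module):
    """One level of a multi-column encoder with a reversible connection.

    Assumed RevCol-style update: c_t = F(x) + gamma * c_{t-1}. Because F is
    deterministic, c_{t-1} = (c_t - F(x)) / gamma recovers the previous
    column's state exactly, so transmission between columns is lossless.
    """

    def __init__(self, dim: int, gamma: float = 0.5):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.gamma = gamma

    def forward(self, x: torch.Tensor, c_prev: torch.Tensor) -> torch.Tensor:
        # Forward pass: mix new features with the previous column's state.
        return self.f(x) + self.gamma * c_prev

    def inverse(self, c: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # Exact reconstruction of the previous column's state (reversibility).
        return (c - self.f(x)) / self.gamma


def random_propagation(x: torch.Tensor, adj_norm: torch.Tensor,
                       drop_rate: float = 0.5, hops: int = 2) -> torch.Tensor:
    """Assumed GRAND-style random propagation: drop whole node feature rows
    at random, then smooth over the normalized adjacency for a few hops,
    producing a stochastic low-pass (high-frequency-noise-filtering) view."""
    mask = (torch.rand(x.size(0), 1, device=x.device) > drop_rate).float()
    h = x * mask / (1.0 - drop_rate)   # unbiased random node dropping
    for _ in range(hops):
        h = adj_norm @ h               # one hop of feature smoothing
    return h


if __name__ == "__main__":
    n, d = 5, 8
    x = torch.randn(n, d)
    adj_norm = torch.eye(n)            # trivial graph, just for demonstration
    level = ReversibleColumnLevel(d)
    c_prev = torch.randn(n, d)
    c = level(x, c_prev)
    # Reversibility check: the previous column state is recovered exactly.
    assert torch.allclose(level.inverse(c, x), c_prev, atol=1e-5)
    view = random_propagation(x, adj_norm)  # one stochastic augmented view
```

In this reading, sibling encoders would share the reversible-column backbone while differing in how views are produced (e.g., independent random-propagation draws), which is what lets contrastive alignment suppress view-specific high-frequency noise.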