Rethinking Feature Augmentation In Graph Contrastive Learning

Published: 21 Feb 2025, Last Modified: 21 Feb 2025 · RLGMSD 2024 Talk · CC BY 4.0
Keywords: Graph Contrastive Learning, Augmentation Strategies, Random Feature Masking (RFM).
TL;DR: Random Feature Masking (RFM), an effective and robust augmentation strategy for graph contrastive learning.
Abstract: Graph Contrastive Learning (GCL) has emerged as a powerful framework for graph representation learning. GCL typically employs separate masking strategies for edges and node features. However, the stochastic Masking Node Feature (MF) method, which masks a portion of the columns of the node feature matrix, causes irrecoverable loss of feature information at high masking rates. In other words, MF harms the uniformity of the learned representations. To address this, we introduce a novel augmentation strategy for GCL called Random Feature Masking (RFM). Unlike MF, RFM applies a random mask over the full feature vector of each individual node. Experiments on three widely used node classification datasets demonstrate that RFM enables GCL to outperform MF, achieving higher accuracy and greater robustness, even at high masking rates (e.g., $0.7$, $0.8$, and $0.9$). Because RFM does not mask a fixed fraction of the columns of the node feature matrix, it inherently preserves more feature information. To the best of our knowledge, this is the first study to introduce and comprehensively evaluate Random Feature Masking in GCL.
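The distinction between the two strategies can be made concrete with a minimal NumPy sketch. This is an illustrative reading of the abstract, not the authors' code: MF is rendered as one column mask shared by all nodes, RFM as an independent per-entry mask, and masked entries are assumed to be zeroed (the abstract does not specify the masking value).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((4, 6))  # toy node feature matrix: 4 nodes, 6 features
p = 0.5                 # masking rate (assumed Bernoulli probability)

# MF: mask a fixed fraction of the *columns*, shared by every node.
# A masked column is lost for all nodes at once, so high rates of p
# destroy those features irrecoverably across the whole graph.
col_mask = rng.random(X.shape[1]) < p   # True = column is masked
X_mf = X * (~col_mask)                  # broadcast: zero masked columns

# RFM: draw an independent mask for every (node, feature) entry.
# Each feature column survives in some nodes with high probability,
# so more feature information is preserved overall.
entry_mask = rng.random(X.shape) < p    # True = entry is masked
X_rfm = X * (~entry_mask)
```

Under MF a masked column is zero for all four nodes, whereas under RFM the same column typically remains visible in other nodes, which is the intuition behind RFM's robustness at masking rates like $0.9$.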
Submission Number: 13
