Progressive Hard Negative Masking: From Global Uniformity to Local Tolerance

Published: 01 Jan 2023, Last Modified: 13 May 2025 · IEEE Trans. Knowl. Data Eng. 2023 · CC BY-SA 4.0
Abstract: Unsupervised contrastive learning has recently become increasingly popular due to its strong performance without the need for costly annotations. However, indiscriminate sampling of negative pairs gives rise to the uniformity-tolerance dilemma, which is especially severe in node-level graph contrastive learning because of the smoothing property of graph convolutional operators. Previous negative mining strategies that either overly emphasize hard negatives or rely on precise distribution estimation yield only minor improvements, or even degrade performance, in this setting. In this article, we investigate the role of hard negatives in the uniformity-tolerance dilemma and propose a novel contrastive objective with a progressive hard negative masking scheme. The proposed objective, an asymptotically-tightened lower bound of mutual information, is theoretically and empirically shown to allow higher local tolerance and stronger contrastive effects, leading to higher-quality embedding distributions and considerable performance improvements on downstream node classification tasks.
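To make the core idea concrete, the sketch below illustrates one way a progressive hard negative masking scheme can be folded into an InfoNCE-style objective: the fraction of hardest (most similar) negatives that is masked out grows as training proceeds, moving from enforcing global uniformity toward permitting local tolerance. The linear schedule, the `max_mask_ratio` parameter, and the two-view setup are assumptions made for illustration only, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def progressive_masked_infonce(z1, z2, epoch, total_epochs,
                               tau=0.5, max_mask_ratio=0.2):
    """Illustrative InfoNCE-style loss with progressive hard negative masking.

    z1, z2 : (N, d) embeddings of two views of the same N nodes;
             row i of z1 and row i of z2 form the positive pair.
    The schedule and hyperparameters here are placeholders for this sketch.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                     # (N, N) similarity matrix
    n = sim.size(0)
    pos = sim.diag()                            # positive-pair similarities

    # Negatives: all off-diagonal entries of each row.
    neg_mask = ~torch.eye(n, dtype=torch.bool, device=sim.device)

    # Progressive masking: the fraction of hardest negatives that is
    # removed grows linearly with training progress (an assumed schedule).
    mask_ratio = max_mask_ratio * epoch / max(total_epochs - 1, 1)
    k = int(mask_ratio * (n - 1))
    if k > 0:
        neg_sim = sim.masked_fill(~neg_mask, float('-inf'))
        hard_idx = neg_sim.topk(k, dim=1).indices          # hardest negatives
        rows = torch.arange(n, device=sim.device).unsqueeze(1)
        neg_mask[rows, hard_idx] = False                    # drop them

    exp_neg = torch.exp(sim) * neg_mask
    loss = -torch.log(torch.exp(pos) / (torch.exp(pos) + exp_neg.sum(dim=1)))
    return loss.mean()
```

With `mask_ratio = 0` this reduces to a standard InfoNCE objective over all negatives; increasing the ratio over epochs progressively exempts the nearest (and most likely semantically similar) nodes from the repulsive term.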
