Abstract: Graph Contrastive Learning (GCL) has achieved great success in self-supervised representation learning through positive and negative pairs built on graph neural networks (GNNs). A critical issue is how to handle false hard negatives, i.e., negatives that exhibit large similarity to the anchor and in fact belong to the same class, since their treatment strongly affects how the message passing of GNNs exploits the graph structure. However, existing methods either misidentify or miss the false hard negatives, resulting in poor node representations. This raises several crucial questions: Where do false hard negatives lie with respect to the anchor? How can false hard negatives be reliably identified? Are more false hard negatives always better? To answer these questions, we propose a novel Locally Weighted Graph Contrastive Learning method, named LocWGCL, and reveal that false hard negatives are primarily distributed in the first-order and second-order neighborhoods of the anchor. Exploiting the tightness between first-order nodes and the anchor, representation similarity is computed to select false hard negatives. For the second-order case, false hard negatives are identified as nodes that receive messages similar to the anchor's through their common first-order neighbors and also exhibit large similarity to the anchor. Building on this seeking process, we devise a weighting strategy for false hard negatives to obtain better node representations. Empirical studies verify the advantages of LocWGCL over state-of-the-art methods on six benchmarks.
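To make the seeking-and-weighting idea in the abstract concrete, the following is a minimal sketch (not the authors' released code) of how first- and second-order false hard negatives could be detected from node embeddings and an adjacency matrix, and then down-weighted in an InfoNCE-style contrastive loss. The function names, the similarity threshold `tau`, and the down-weighting factor `alpha` are illustrative assumptions, not quantities specified in the paper.

```python
import torch
import torch.nn.functional as F


def seek_false_hard_negatives(z: torch.Tensor, adj: torch.Tensor, tau: float = 0.8) -> torch.Tensor:
    """Boolean mask [N, N]; entry (i, j) marks node j as a false hard negative of anchor i.

    Assumptions (illustrative, not from the paper): cosine similarity as the
    representation similarity, a single threshold `tau` for both hops.
    """
    z = F.normalize(z, dim=1)
    sim = z @ z.t()                                    # pairwise cosine similarity
    adj_f = adj.float()

    first_order = adj_f > 0                            # 1-hop neighbors of each anchor
    second_order = ((adj_f @ adj_f) > 0) & ~first_order  # 2-hop: reached via a common 1-hop node
    second_order.fill_diagonal_(False)

    high_sim = sim > tau
    fhn_first = first_order & high_sim                 # 1-hop: high similarity alone suffices
    fhn_second = second_order & high_sim               # 2-hop: common neighbor + high similarity
    return fhn_first | fhn_second


def weighted_infonce(z1: torch.Tensor, z2: torch.Tensor, adj: torch.Tensor,
                     tau: float = 0.8, alpha: float = 0.1, temp: float = 0.5) -> torch.Tensor:
    """InfoNCE-style loss in which detected false hard negatives are down-weighted by alpha."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = torch.exp(z1 @ z2.t() / temp)                # cross-view similarities [N, N]
    fhn = seek_false_hard_negatives(z1, adj, tau)

    weights = torch.ones_like(sim)
    weights[fhn] = alpha                               # shrink the contribution of false hard negatives
    pos = sim.diag()                                   # positive pair: same node in the other view
    denom = (weights * sim).sum(dim=1)
    return -torch.log(pos / denom).mean()


# Usage sketch: Z1, Z2 are embeddings of two augmented views of a graph with adjacency A.
# loss = weighted_infonce(Z1, Z2, A)
```

The point of the sketch is only the locality argument from the abstract: candidate false hard negatives are restricted to 1-hop and 2-hop neighborhoods of the anchor, and instead of being discarded they are re-weighted in the contrastive objective.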