Abstract: Self-supervised graph learning has attracted significant interest, especially graph contrastive learning. However, graph contrastive learning relies heavily on the choice of negative samples and on elaborate architectural designs. Motivated by Barlow Twins, a self-supervised method from computer vision, we propose a novel graph autoencoder named Core Barlow Graph Auto-Encoder (CBGAE), which does not rely on special techniques such as predictor networks or momentum encoders. In addition, we introduce a core view to maximize agreement between the learned feature representations. In contrast to most existing graph contrastive learning models, CBGAE is negative-sample-free.
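To illustrate the negative-sample-free objective the abstract alludes to, below is a minimal sketch of a Barlow Twins-style redundancy-reduction loss applied to node embeddings from two graph views. This is not the authors' implementation: the encoder (e.g., a GNN), the construction of the core view, and the weighting coefficient `lam` are assumptions for illustration only.

```python
# Sketch of a Barlow Twins-style loss on two graph embedding views.
# Assumes z1, z2 come from an encoder (e.g., a GNN) applied to two
# views of the same graph; hyperparameters are illustrative.
import torch

def barlow_twins_loss(z1: torch.Tensor, z2: torch.Tensor, lam: float = 5e-3) -> torch.Tensor:
    """z1, z2: (num_nodes, dim) node embeddings from two graph views."""
    n, _ = z1.shape
    # Standardize each embedding dimension across nodes.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    # Empirical cross-correlation matrix between the two views.
    c = (z1.T @ z2) / n
    # Push the diagonal toward 1 (agreement between views) and the
    # off-diagonal toward 0 (redundancy reduction); no negative
    # samples are required.
    on_diag = ((torch.diagonal(c) - 1) ** 2).sum()
    off_diag = (c ** 2).sum() - (torch.diagonal(c) ** 2).sum()
    return on_diag + lam * off_diag
```

The design choice mirrors Barlow Twins: agreement is enforced through cross-correlation of feature dimensions rather than through contrasting against negative samples, which is why no negative sampling, predictor network, or momentum encoder is needed.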