Abstract: Self-supervised learning methods have become a popular approach for graph representation learning because they do not rely on manual labels and offer better generalization. Contrastive methods based on mutual information maximization between augmented instances of the same object are widely used in self-supervised representation learning. For graph-structured data, however, there are two obstacles to successfully utilizing these methods: choosing a data augmentation strategy and training a decoder for mutual information estimation between augmented representations of nodes, sub-graphs, or graphs. In this work, we propose a self-supervised graph representation learning algorithm, Graph Information Representation Learning (GIRL). GIRL requires neither augmentations nor a decoder for mutual information estimation. The algorithm is based on an alternative information metric, \textit{recoverability}, which is tightly related to mutual information but is simpler to estimate. Our self-supervised algorithm consistently outperforms existing state-of-the-art contrastive self-supervised methods by a large margin on a variety of datasets. In addition, we show how recoverability can be used in a supervised setting to alleviate the effects of over-smoothing and over-squashing in deeper graph neural networks. The code to reproduce our experiments is available at https://github.com/Anonymous1252022/Recoverability
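To make the decoder-free idea concrete, below is a minimal, hypothetical sketch of a recoverability-style objective. It is not the paper's estimator: the function name, the Nadaraya-Watson Gaussian-kernel regression, and the bandwidth `sigma` are all illustrative assumptions. The sketch only shows how "how well can the inputs be recovered from the representations" can be scored non-parametrically, without training a separate decoder network.

```python
# Illustrative sketch only: a decoder-free "recoverability" proxy.
# The kernel-regression estimator and all names here are assumptions
# for illustration, not the exact objective used by GIRL.
import torch

def recoverability_loss(x, z, sigma=1.0):
    """Score how well features x can be recovered from embeddings z.

    Uses a Nadaraya-Watson kernel regression of x on z and returns the
    mean squared residual: a low residual means high recoverability.
    x: (n, d_x) input node features; z: (n, d_z) representations.
    """
    # Pairwise squared distances between embeddings.
    dist2 = torch.cdist(z, z).pow(2)
    # Gaussian-kernel weights; zero the diagonal so each node is
    # predicted from its neighbors in embedding space, not from itself.
    w = torch.exp(-dist2 / (2 * sigma ** 2))
    w = w - torch.diag_embed(torch.diagonal(w))
    w = w / (w.sum(dim=1, keepdim=True) + 1e-8)
    x_hat = w @ x                      # kernel estimate of E[x | z]
    return (x - x_hat).pow(2).mean()   # residual variance proxy
```

In a training loop one would minimize `recoverability_loss(features, encoder(graph))`, encouraging the encoder to retain the information needed to reconstruct its inputs; because the kernel regression is non-parametric, no decoder parameters are learned.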