Towards Effective and Robust Graph Contrastive Learning With Graph Autoencoding

Published: 01 Jan 2024, Last Modified: 12 Feb 2025 · IEEE Trans. Knowl. Data Eng. 2024 · CC BY-SA 4.0
Abstract: Graph contrastive learning (GCL) has become the de-facto approach to self-supervised learning on graphs owing to its superior performance. However, its performance is held back by graph augmentation methods that ignore semantics, and it is vulnerable to graph attacks. To address these problems, we propose AEGCL, which leverages a graph AutoEncoder within Graph Contrastive Learning and directly targets graph property reconstruction to improve the effectiveness and robustness of GCL. Specifically, AEGCL has two distinctive characteristics: (1) a novel adaptive augmentation strategy based on motif centrality, which exploits semantically significant higher-order graph properties; (2) the original attributed graph is decoupled into a feature graph and a topology graph to extract their dedicated information, and a simple AttnFuse module is proposed to combine the two augmented graphs and the two decoupled graphs. The graph autoencoder can thus be applied to both the topology domain and the raw attribute domain. Empirically, extensive experiments on benchmark graph datasets show that AEGCL outperforms existing baseline methods in terms of classification accuracy and robustness.
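The abstract gives only a high-level description of the two components. The sketch below is a minimal, hedged illustration of how a centrality-weighted adaptive edge-drop augmentation and an attention-style fusion of multiple graph views (in the spirit of AttnFuse) might look; the function names, the per-edge centrality input, and all hyperparameters are hypothetical and are not taken from the paper.

```python
import torch

def adaptive_edge_drop(edge_index, edge_centrality, p_max=0.7):
    """Drop edges with probability inversely related to a per-edge centrality
    score, so that structurally important edges are more likely to be kept.
    `edge_centrality` stands in for the paper's motif-centrality scores,
    whose exact computation is not specified in the abstract."""
    c = edge_centrality.float()
    # Normalize centrality to [0, 1] and convert it into a drop probability.
    c = (c - c.min()) / (c.max() - c.min() + 1e-8)
    drop_prob = (1.0 - c) * p_max
    keep_mask = torch.bernoulli(1.0 - drop_prob).bool()
    return edge_index[:, keep_mask]

class AttnFuse(torch.nn.Module):
    """Minimal attention-style fusion of several node-embedding views,
    e.g., two augmented views plus the decoupled feature/topology views."""
    def __init__(self, dim):
        super().__init__()
        self.score = torch.nn.Linear(dim, 1, bias=False)

    def forward(self, views):
        # views: list of [num_nodes, dim] tensors, one per graph view.
        h = torch.stack(views, dim=1)               # [N, V, dim]
        attn = torch.softmax(self.score(h), dim=1)  # [N, V, 1]
        return (attn * h).sum(dim=1)                # [N, dim]

# Toy usage with random data (100 nodes, 400 edges, 4 views).
edge_index = torch.randint(0, 100, (2, 400))
centrality = torch.rand(400)
aug_edges = adaptive_edge_drop(edge_index, centrality)

fuse = AttnFuse(dim=32)
views = [torch.randn(100, 32) for _ in range(4)]
fused = fuse(views)
```

The fused embedding would then feed both the contrastive objective and the reconstruction heads of the graph autoencoder over the topology and attribute domains, as described in the abstract.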