Abstract: Graph representation learning aims to embed graph-structured data into a low-dimensional space in which the graph structure and graph properties are maximally preserved. Graph Neural Network (GNN)-based methods have proven effective for graph representation learning. However, most GNN-based methods are supervised and depend heavily on data labels, which are often difficult to obtain in real-world scenarios. In addition, the inherent incompleteness of real-world data further degrades the performance of GNN-based models. In this paper, we propose a novel self-supervised graph representation learning model with variational inference. First, we strengthen the semantic relation between node-level and graph-level representations in a self-supervised manner to alleviate the over-dependence on data labels. Second, we employ variational inference to capture the general pattern underlying the data, which keeps the model robust when parts of the data are missing. Extensive experiments on three widely used citation network datasets show that our proposed method achieves or matches state-of-the-art results on link prediction and node classification tasks.
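The abstract does not specify an architecture, so the following is only a minimal, illustrative sketch of how its two ingredients are commonly combined: a variational GNN encoder (reparameterized node embeddings with a KL regularizer and adjacency reconstruction) and a node-graph contrastive objective that ties node-level codes to a graph-level summary. All names here (VariationalSelfSupGraphEncoder, the bilinear discriminator, the exact loss composition) are assumptions for illustration, not the authors' model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph convolution on a dense, symmetrically normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj_norm):
        return adj_norm @ self.lin(x)  # A_hat X W

class VariationalSelfSupGraphEncoder(nn.Module):
    """Illustrative sketch: variational node encoder + node-graph contrastive head."""
    def __init__(self, in_dim, hid_dim, z_dim):
        super().__init__()
        self.base = GCNLayer(in_dim, hid_dim)
        self.mu = GCNLayer(hid_dim, z_dim)
        self.logvar = GCNLayer(hid_dim, z_dim)
        self.disc = nn.Bilinear(z_dim, z_dim, 1)  # scores (node code, graph summary) pairs

    def encode(self, x, adj_norm):
        h = F.relu(self.base(x, adj_norm))
        mu, logvar = self.mu(h, adj_norm), self.logvar(h, adj_norm)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return z, mu, logvar

    def forward(self, x, x_corrupt, adj_norm, adj_label):
        z, mu, logvar = self.encode(x, adj_norm)        # embeddings of the real graph
        z_neg, _, _ = self.encode(x_corrupt, adj_norm)  # embeddings of a corrupted graph
        s = torch.sigmoid(z.mean(dim=0, keepdim=True))  # graph-level summary vector

        # Self-supervised node-graph term: real nodes should agree with the
        # graph summary, corrupted nodes should not.
        pos = self.disc(z, s.expand_as(z)).squeeze(-1)
        neg = self.disc(z_neg, s.expand_as(z_neg)).squeeze(-1)
        contrast = F.binary_cross_entropy_with_logits(
            torch.cat([pos, neg]),
            torch.cat([torch.ones_like(pos), torch.zeros_like(neg)]))

        # Variational-inference terms: KL regularizer plus link reconstruction,
        # which supports link prediction from the learned embeddings.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        recon = F.binary_cross_entropy_with_logits(z @ z.t(), adj_label)
        return contrast + kl + recon, z
```

In this sketch, x_corrupt would typically be a row-shuffled copy of the node feature matrix serving as negative samples, and adj_label the binary adjacency matrix used as the reconstruction target.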