Maximizing Mutual Information Across Feature and Topology Views for Representing Graphs

Published: 01 Jan 2023, Last Modified: 06 Nov 2023, IEEE Trans. Knowl. Data Eng. 2023
Abstract: Recently, maximizing mutual information has emerged as a powerful tool for unsupervised graph representation learning. Existing methods are typically effective at capturing graph information from the topology view but consistently ignore the node feature view. To address this problem, we propose a novel method that exploits mutual information maximization across the feature and topology views. Specifically, we first construct a feature graph that captures the underlying structure of the nodes in feature space by measuring the distance between pairs of nodes. We then use a cross-view representation learning module to capture both local and global information across the feature and topology views of a graph. To model the information shared by the feature and topology spaces, we develop a common representation learning module that combines mutual information maximization with reconstruction loss minimization; minimizing the reconstruction loss forces the model to learn the information shared by the two spaces. To explicitly encourage diversity, we also introduce a disagreement regularization that enlarges the distance between representations learned from the same view. Experiments on synthetic and real-world datasets demonstrate the effectiveness of integrating the feature and topology views. In particular, the proposed method achieves performance comparable to or even better than previous supervised methods under the linear evaluation protocol on unsupervised representations.
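
As a rough illustration of the first and last steps above, the sketch below shows one plausible way to build the feature graph from pairwise node distances and to implement a disagreement regularizer. It is a minimal sketch assuming PyTorch; the function names (`build_feature_graph`, `disagreement_loss`), the cosine metric, and the k-NN sparsification are illustrative assumptions, not details confirmed by the abstract.

```python
# Minimal sketch (not the authors' code) of two ingredients from the abstract:
# a k-NN "feature graph" built from pairwise node distances, and a disagreement
# regularizer that pushes apart two representations of the same view.
import torch
import torch.nn.functional as F

def build_feature_graph(x: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Binary adjacency linking each node to its k nearest neighbors in
    feature space, using cosine similarity as the (negative) distance."""
    xn = F.normalize(x, dim=1)
    sim = xn @ xn.t()                        # pairwise cosine similarity
    sim.fill_diagonal_(-float("inf"))        # exclude self-loops
    idx = sim.topk(k, dim=1).indices         # k most similar nodes per row
    adj = torch.zeros_like(sim)
    adj.scatter_(1, idx, 1.0)
    return ((adj + adj.t()) > 0).float()     # symmetrize

def disagreement_loss(z_a: torch.Tensor, z_b: torch.Tensor) -> torch.Tensor:
    """Encourage diversity between two representations from the same view by
    penalizing high cosine similarity between them."""
    return F.cosine_similarity(z_a, z_b, dim=1).mean()

# Usage: features for 100 nodes with 32 dimensions.
x = torch.randn(100, 32)
feature_adj = build_feature_graph(x, k=5)    # adjacency for the feature view
```

The k-NN sparsification is a common choice here because a dense distance matrix would make the feature view as expensive as a fully connected graph; any reasonable distance measure and sparsification rule could be substituted.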
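The two training signals of the common representation module can likewise be sketched. Below is a standard local-global mutual information estimator in the style of Deep Graph Infomax (a binary cross-entropy bound with a bilinear discriminator) alongside a feature reconstruction loss; the discriminator form, the row-shuffling corruption, and the mean-pooled summary are conventional stand-ins, not the paper's exact objective.

```python
# Minimal sketch (assuming PyTorch, with hypothetical names) of the abstract's
# two training signals: a local-global mutual information estimator and a
# reconstruction loss that forces the common representation to retain
# information shared by both views.
import torch
import torch.nn.functional as F

def local_global_mi_loss(h: torch.Tensor, s: torch.Tensor,
                         h_neg: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """BCE-style MI estimator: node embeddings h (positives) and corrupted
    embeddings h_neg (negatives) are scored against the graph summary s
    with a bilinear discriminator W."""
    eps = 1e-8
    pos = torch.sigmoid((h @ W) @ s)         # scores for true (node, summary) pairs
    neg = torch.sigmoid((h_neg @ W) @ s)     # scores for corrupted pairs
    return -(torch.log(pos + eps).mean() + torch.log(1 - neg + eps).mean())

def reconstruction_loss(z_common: torch.Tensor, x: torch.Tensor,
                        decoder: torch.nn.Module) -> torch.Tensor:
    """Decode the common representation back to the input features; a low
    error means z_common keeps information present in both views."""
    return F.mse_loss(decoder(z_common), x)

# Usage with random stand-ins: 100 nodes, 64-d embeddings.
h = torch.randn(100, 64)
h_neg = h[torch.randperm(100)]               # row shuffling as a cheap corruption
s = torch.sigmoid(h.mean(dim=0))             # readout: mean-pooled graph summary
W = torch.randn(64, 64)
mi = local_global_mi_loss(h, s, h_neg, W)
```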