Graph Neural Networks with Information Anchors for Node Representation Learning

Published: 01 Jan 2022 · Last Modified: 06 Feb 2025 · Mob. Networks Appl. 2022 · CC BY-SA 4.0
Abstract: In the era of big data, large-scale information network applications must process and analyze increasingly complex graph-structured relationships. Traditional methods of representing network structures, however, struggle to capture the latent relationships among massive numbers of nodes. Node representation learning based on Graph Neural Networks (GNNs) is an emerging paradigm that embeds network nodes into a low-dimensional vector space while retaining as much of the network topology and node content information as possible. Yet existing GNN approaches ignore the distinction among the positions of nodes with similar topologies, which is often crucial for network prediction and classification tasks. In this paper, we propose a novel Graph Neural Network model based on information anchors, called A-GNN, where anchors are defined as important nodes that share a large amount of interactive information with the other, ordinary nodes. In our model, the vectors produced by node representation learning encode the positions of ordinary nodes relative to the anchors. In A-GNN, we first design a strategy for selecting the anchor set. We then define a distance computation from any given target node to each anchor. Finally, we propose a learning schema based on a non-linear, distance-weighted aggregation over the anchors. A-GNN thereby obtains global position information for all ordinary nodes relative to the anchors, and it is suitable for various network prediction tasks such as link prediction and node classification. We conduct comparative experiments on five datasets; the results show that A-GNN outperforms current state-of-the-art models.
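To make the three steps named in the abstract concrete, below is a minimal NumPy sketch of the anchor idea. The abstract does not specify the paper's actual anchor-selection strategy, distance definition, or learned aggregation, so this sketch substitutes simple stand-ins: degree as a proxy for "a lot of interactive information", BFS shortest-path hops as the node-to-anchor distance, and a fixed 1/(d+1) decay in place of the learned non-linear distance-weighted aggregation. All function names here are illustrative, not from the paper.

```python
import numpy as np
from collections import deque

def select_anchors(adj, k):
    """Pick the k highest-degree nodes as anchors.
    Degree is only a stand-in for the paper's (unspecified)
    interaction-based selection criterion."""
    degrees = adj.sum(axis=1)
    return np.argsort(-degrees)[:k]

def bfs_distances(adj, source):
    """Unweighted shortest-path distance (in hops) from `source`
    to every node; unreachable nodes stay at infinity."""
    n = adj.shape[0]
    dist = np.full(n, np.inf)
    dist[source] = 0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in np.flatnonzero(adj[u]):
            if dist[v] == np.inf:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def anchor_embeddings(adj, features, k=3):
    """Augment each node's features with a distance-weighted
    aggregation of anchor features. Weight w(v, a) = 1 / (d(v, a) + 1)
    is a fixed non-linear decay; unreachable anchors get weight 0
    because 1/inf evaluates to 0."""
    anchors = select_anchors(adj, k)
    dists = np.stack([bfs_distances(adj, a) for a in anchors])  # (k, n)
    weights = 1.0 / (dists + 1.0)                               # (k, n)
    agg = weights.T @ features[anchors]                         # (n, d)
    # Concatenate local content with anchor-relative position signal.
    return np.concatenate([features, agg], axis=1)

# Toy usage: a 6-node path graph with random node features.
n = 6
adj = np.zeros((n, n), dtype=int)
for i in range(n - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1
feats = np.random.rand(n, 4)
emb = anchor_embeddings(adj, feats, k=2)
print(emb.shape)  # (6, 8)
```

In the full model the aggregation weights would be learned end-to-end together with the message-passing layers, so the fixed decay above should be read only as the shape of the idea: nodes with identical local topology can still receive different embeddings because their distances to the anchor set differ.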