MLGLP: Multi-Scale Line-Graph Link Prediction based on Graph Neural Networks

27 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: link prediction, graph neural network, multi-scale graph, line graph, complex network.
Abstract:

This manuscript proposes a multi-scale link prediction approach based on Graph Neural Networks (GNNs). The proposed method, Multi-Scale Line-Graph Link Prediction (MLGLP), learns the graph structure and extracts effective representative features of graph edges in order to address information loss and handle multi-scale information. The approach utilizes embedding vectors generated by GNNs from enclosing subgraphs. Although stacking more GNN layers can capture more intricate relations, it often leads to over-smoothing. To mitigate this issue, we construct coarse-grained graphs at three distinct scales to uncover complex relations. To apply multi-scale subgraphs in GNNs without pooling layers, which cause information loss, we convert each subgraph into a line graph and reformulate the task as a node classification problem. The hierarchical structure facilitates exploration across various levels of abstraction, fostering a deeper comprehension of the relationships and dependencies within the graph. The proposed method is applied to the link prediction problem, which can be modelled as a graph classification problem. We perform extensive experiments on several well-known benchmarks and compare the results with state-of-the-art link prediction methods. The experimental results demonstrate the superiority of the proposed model in terms of average precision and area under the curve.
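
The sketch below is a rough illustration of the pipeline described in the abstract, not the authors' implementation: it extracts an enclosing subgraph around a candidate link, coarsens it over three scales, and converts each scale to its line graph so that the original edges become nodes suitable for node classification. It assumes NetworkX for graph handling; the helper names (`extract_enclosing_subgraph`, `coarsen`, `multiscale_line_graphs`) and the greedy-modularity coarsening step are illustrative assumptions and may differ from the paper's exact procedure.

```python
# Minimal sketch of the MLGLP-style pipeline described in the abstract.
# Assumptions: NetworkX graphs, hop-based enclosing subgraphs, and a
# community-based coarsening step standing in for the paper's coarse-graining.
import networkx as nx


def extract_enclosing_subgraph(G: nx.Graph, u, v, hops: int = 1) -> nx.Graph:
    """Enclosing subgraph around candidate link (u, v): all nodes within
    `hops` of either endpoint, as is common in subgraph-based link prediction."""
    nodes = {u, v}
    for s in (u, v):
        nodes |= set(nx.single_source_shortest_path_length(G, s, cutoff=hops))
    return G.subgraph(nodes).copy()


def coarsen(G: nx.Graph) -> nx.Graph:
    """Placeholder coarse-graining: contract each greedy-modularity community
    into a single node. The paper's actual coarsening may differ."""
    if G.number_of_edges() == 0 or G.number_of_nodes() < 3:
        return G.copy()  # nothing meaningful to contract
    communities = nx.algorithms.community.greedy_modularity_communities(G)
    return nx.quotient_graph(G, communities, relabel=True)


def multiscale_line_graphs(G: nx.Graph, u, v, num_scales: int = 3):
    """Build `num_scales` progressively coarser views of the enclosing
    subgraph and convert each view to its line graph, so edges of the
    original view become nodes (link prediction -> node classification)."""
    current = extract_enclosing_subgraph(G, u, v)
    views = []
    for scale in range(num_scales):
        views.append(nx.line_graph(current))
        if scale < num_scales - 1:
            current = coarsen(current)
    return views


if __name__ == "__main__":
    # Toy usage on a standard benchmark graph.
    G = nx.karate_club_graph()
    for i, L in enumerate(multiscale_line_graphs(G, 0, 33, num_scales=3)):
        print(f"scale {i}: {L.number_of_nodes()} line-graph nodes")
```

In such a setup, a GNN would then score the line-graph node corresponding to the candidate edge at each scale, avoiding the pooling layers mentioned in the abstract.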

Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9712