Adaptive Pairwise Encodings for Link Prediction

24 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: graph neural networks, link prediction
Abstract: Link prediction is a common task on graph-structured data with applications in a variety of domains. Classically, hand-crafted heuristics were used for this task. Heuristic measures are chosen such that they correlate well with the underlying factors related to link formation. In recent years, a new class of methods has emerged that combines the advantages of message-passing neural networks (MPNNs) and heuristic methods. These methods perform predictions by using the output of an MPNN in conjunction with a "pairwise encoding" that captures the relationship between the nodes in the candidate link, and have been shown to achieve strong performance on numerous datasets. However, current pairwise encodings often carry a strong inductive bias, modeling only a limited subset of the possible factors underlying link formation. This limits the ability of existing methods to learn how to properly classify a variety of different links in the same graph. To address this limitation, we propose a new method, LPFormer, which adaptively learns the pairwise encoding for each link. LPFormer models the link factors via an attention module that learns the pairwise information between nodes from the local and higher-order graph context. Extensive experiments demonstrate that LPFormer achieves SOTA performance on numerous datasets while maintaining efficiency.
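To make the abstract's idea concrete, here is a minimal sketch (not the authors' implementation; all function names, weight shapes, and the choice of common neighbors as context are illustrative assumptions) of how a pairwise encoding could be learned by attending from a candidate pair over context nodes, then combined with MPNN node representations to score the link:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

def pairwise_encoding(h_a, h_b, h_ctx, W_q, W_k, W_v):
    """Hypothetical sketch: build a pairwise encoding for candidate link
    (a, b) by attending over context-node embeddings h_ctx (e.g. common
    neighbors or higher-order neighbors of a and b)."""
    query = W_q @ np.concatenate([h_a, h_b])   # pair-level query, shape (d,)
    keys = h_ctx @ W_k.T                       # one key per context node, (n, d)
    values = h_ctx @ W_v.T                     # one value per context node, (n, d)
    attn = softmax(keys @ query / np.sqrt(len(query)))  # weights over context
    return attn @ values                       # weighted context summary, (d,)

def score_link(h_a, h_b, pe, w_out):
    """Combine MPNN node representations with the pairwise encoding
    into a single link score (higher = more likely to form)."""
    feats = np.concatenate([h_a * h_b, pe])    # element-wise pair term + pairwise encoding
    return float(w_out @ feats)

# Usage with random embeddings and weights (d = 4, three context nodes):
rng = np.random.default_rng(0)
d = 4
h_a, h_b = rng.normal(size=d), rng.normal(size=d)
h_ctx = rng.normal(size=(3, d))
W_q = rng.normal(size=(d, 2 * d))
W_k, W_v = rng.normal(size=(d, d)), rng.normal(size=(d, d))
w_out = rng.normal(size=2 * d)

pe = pairwise_encoding(h_a, h_b, h_ctx, W_q, W_k, W_v)
score = score_link(h_a, h_b, pe, w_out)
```

Because the attention weights are learned rather than fixed, the same module can emphasize different context nodes for different candidate links, which is the adaptivity the abstract contrasts with fixed heuristic encodings.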
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8644