Revisiting Graph Neural Networks for Link Prediction

28 Sept 2020 (modified: 22 Oct 2023), ICLR 2021 Conference Blind Submission
Keywords: Graph Neural Networks, Link Prediction
Abstract: Graph neural networks (GNNs) have achieved great success in recent years. The three most common applications are node classification, link prediction, and graph classification. While there is a rich literature on node classification and graph classification, GNNs for link prediction are relatively less studied and less understood. Two representative classes of methods exist: GAE and SEAL. GAE (Graph Autoencoder) first uses a GNN to learn embeddings for all nodes, and then aggregates the embeddings of the source and target nodes as their link representation. SEAL extracts a subgraph around the source and target nodes, labels the nodes in the subgraph, and then uses a GNN to learn a link representation from the labeled subgraph. In this paper, we thoroughly discuss the differences between these two classes of methods, and conclude that simply aggregating *node* embeddings does not lead to effective *link* representations, while learning from *properly labeled subgraphs* around links provides highly expressive and generalizable link representations. Experiments on the recent large-scale OGB link prediction datasets show that SEAL achieves up to 195% performance gains over GAE methods, attaining new state-of-the-art results on 3 out of 4 datasets.
One-sentence Summary: We show that simply aggregating the node embeddings of a target node pair learned by a GNN does not lead to effective link representations, and propose a labeling trick to address this issue.
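
To make the distinction concrete, here is a minimal sketch (not the authors' code; the graph, function names, and labeling scheme are illustrative assumptions, and SEAL's actual DRNL labeling is omitted) of how labeling nodes by their distances to the two endpoints of a candidate link distinguishes links that aggregated node embeddings cannot:

```python
# Illustrative sketch of the labeling trick: label each node by its
# shortest-path distances to the two endpoints of a candidate link.
# (SEAL's DRNL refines this idea; this is not the authors' code.)
from collections import deque

def bfs_dist(adj, src):
    """Hop counts from src over an adjacency-dict graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def distance_labels(adj, x, y):
    """Label every node by (distance to x, distance to y)."""
    dx, dy = bfs_dist(adj, x), bfs_dist(adj, y)
    return {v: (dx.get(v), dy.get(v)) for v in adj}

# Path graph 1-2-3-4-5: nodes 2 and 4 are automorphic, so any GNN
# assigns them identical node embeddings, and aggregating (h1, h2)
# vs. (h1, h4) yields identical link representations for the very
# different candidate links (1, 2) and (1, 4).
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}

print(distance_labels(adj, 1, 2))
# {1: (0, 1), 2: (1, 0), 3: (2, 1), 4: (3, 2), 5: (4, 3)}
print(distance_labels(adj, 1, 4))
# {1: (0, 3), 2: (1, 2), 3: (2, 1), 4: (3, 0), 5: (4, 1)}
# The two labeled graphs differ, so a GNN run on them can tell the
# two candidate links apart.
```

In SEAL the labels become extra node features for the GNN; the sketch only shows why a subgraph with pair-aware labels carries information that independently learned node embeddings discard.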
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Community Implementations: [2 code implementations (CatalyzeX)](https://www.catalyzex.com/paper/arxiv:2010.16103/code)
Reviewed Version (pdf): https://openreview.net/references/pdf?id=uPhgKfmUUR