Towards Reliable Link Prediction with Robust Graph Information Bottleneck

Published: 01 Feb 2023, Last Modified: 13 Feb 2023
Submitted to ICLR 2023
Readers: Everyone
Keywords: Robust link prediction, Inherent edge noise, Graph representation learning
Abstract: Link prediction on graphs has achieved great success with the rise of deep graph learning, yet its robustness under edge noise remains largely unexplored. We reveal that inherent edge noise, which naturally perturbs both the input topology and the target labels, leads to severe performance degradation and representation collapse. To address this, we propose an information-theory-guided principle, Robust Graph Information Bottleneck (RGIB), to extract reliable supervision signals and avoid representation collapse. Unlike the general information bottleneck, RGIB decouples and balances the mutual dependence among the graph topology, the edge labels, and the representation, establishing a new learning objective for robust representation. We also provide two instantiations, RGIB-SSL and RGIB-REP, which leverage different methodologies, namely self-supervised learning and data reparameterization, for indirect and direct data denoising, respectively. Extensive experiments on six benchmarks under various noise scenarios verify the effectiveness of the proposed RGIB.
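For readers unfamiliar with the information-bottleneck formulation, a minimal sketch of the kind of objective the abstract describes is given below, written with noisy adjacency \tilde{A}, noisy edge labels \tilde{Y}, and learned representation H. The decomposition and the coefficients \beta, \lambda_1, \lambda_2 are illustrative assumptions for exposition, not the paper's exact objective.

% Basic graph information bottleneck (GIB): preserve label information
% while compressing the (noisy) input topology, with a single
% trade-off coefficient \beta.
\min_{H} \; -I(H;\tilde{Y}) + \beta\, I(H;\tilde{A})

% An RGIB-style balanced objective, per the abstract's description:
% the dependences among topology, label, and representation are
% decoupled into separate mutual-information terms, so that noise
% entering through \tilde{A} or through \tilde{Y} can be penalized
% independently (the third term here is an assumed balancing term).
\min_{H} \; -I(H;\tilde{Y}) + \lambda_1\, I(H;\tilde{A}) + \lambda_2\, I(\tilde{A};\tilde{Y} \mid H)

Under this reading, minimizing the residual dependence I(\tilde{A};\tilde{Y} | H) encourages H to carry exactly the information shared by topology and labels, rather than memorizing either noisy source alone.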
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representation learning
TL;DR: We provide an information-theory-guided principle and its two instantiations for robust link prediction under inherent edge noise.