Enhancing Mutual Information Estimation in Self-Interpretable Graph Neural Networks

22 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: graph neural networks, information bottleneck, interpretability
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: A self-interpretable graph learning framework based on the information bottleneck principle, with a new scheme for restricting mutual information between graph random variables.
Abstract: Graph neural networks (GNNs) with self-interpretability are pivotal in various high-stakes and scientific domains. The information bottleneck (IB) principle holds promise for infusing GNNs with inherent interpretability. In particular, the graph information bottleneck (GIB) framework identifies key subgraphs of the input graph $G$ that have high mutual information (MI) with the predictions while maintaining minimal MI with $G$. The major challenge lies in handling irregular graph structures and gauging the conditional probabilities needed to evaluate the MI between these subgraphs and $G$. Existing methods for estimating the MI between graphs often yield distorted and loose estimates, thereby undermining model efficacy. In this work, we propose GEMINI, a novel framework for training self-interpretable graph models that tackles the key challenge of graph MI estimation. We construct a variational distribution over critical subgraphs, on which an efficient MI upper-bound estimator for graphs is built. Beyond the theoretical framework, we devise a practical instantiation of each module in GEMINI. We compare GEMINI thoroughly with both self-interpretable GNNs and post-hoc explanation methods on eight datasets, using both interpretation and prediction performance metrics. Results reveal that GEMINI outperforms state-of-the-art self-interpretable GNNs on interpretability and achieves prediction performance comparable to mainstream GNNs.
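The abstract's core technical idea — bounding MI from above via a variational distribution — can be illustrated with a generic CLUB-style estimator (Cheng et al., 2020), which the GIB setting commonly builds on. This is a minimal sketch on vector embeddings, not GEMINI's actual instantiation (the paper's subgraph-level variational distribution is not specified on this page); the isotropic-Gaussian variational family and the function names here are illustrative assumptions.

```python
import numpy as np

def club_upper_bound(x, z, pred_mean):
    """CLUB-style variational MI upper bound between samples x and z.

    I(X;Z) <= E_{p(x,z)}[log q(z|x)] - E_{p(x)p(z)}[log q(z|x)],
    where q(z|x) is a variational approximation of p(z|x). Here q is
    assumed isotropic Gaussian with unit variance and mean pred_mean[i]
    (in practice pred_mean would come from a learned network).
    """
    # Log-density of a unit-variance Gaussian, up to an additive constant
    # (constants cancel between the positive and negative terms).
    def log_q(z_row, mu_row):
        return -0.5 * np.sum((z_row - mu_row) ** 2)

    n = len(x)
    # Positive term: matched pairs (z_i, x_i) drawn from the joint.
    positive = np.mean([log_q(z[i], pred_mean[i]) for i in range(n)])
    # Negative term: all pairs, approximating the product of marginals.
    negative = np.mean(
        [log_q(z[j], pred_mean[i]) for i in range(n) for j in range(n)]
    )
    return positive - negative
```

In the GIB objective, minimizing such an upper bound on the MI between the input graph and the extracted subgraph enforces the compression side of the bottleneck; for dependent variables the estimate is large, and it shrinks toward zero as the pairing is broken.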
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5483