Interpretable and Generalizable Graph Learning via Subgraph Multilinear Extension

Published: 04 Mar 2024, Last Modified: 07 May 2024 · MLGenX 2024 Spotlight · CC BY 4.0
Keywords: Interpretation, Graph Neural Networks, Out-of-Distribution Generalization, Multilinear Extension, Causality, Geometric Deep Learning
TL;DR: We develop theoretical foundations for the expressive power of Interpretable GNNs via multilinear extension, and design a provably more powerful Interpretable GNN.
Abstract: Interpretable graph neural networks (XGNNs) are widely adopted in scientific applications involving graph-structured data. Previous approaches predominantly adopt attention-based mechanisms to learn edge or node importance for extracting an interpretable subgraph and making predictions with it. However, the representational properties and limitations of these methods remain inadequately explored. In this work, we present a theoretical framework that formulates interpretable subgraph learning with the multilinear extension of the subgraph distribution, which we term the subgraph multilinear extension (SubMT). Extracting the desired interpretable subgraph requires an accurate approximation of SubMT, yet we find that existing XGNNs can exhibit a substantial gap in fitting SubMT. Consequently, this approximation failure leads to degraded interpretability of the extracted subgraphs. To mitigate the issue, we design a new XGNN architecture called Graph Multilinear neT (GMT), which is provably more powerful in approximating SubMT. We empirically validate our theoretical findings on a number of graph classification benchmarks. The results demonstrate that GMT outperforms the state of the art by up to 10% in terms of both interpretability and generalizability measures.
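To make the central object concrete: the multilinear extension of a set function f over a graph's edges evaluates the expectation of f under independent edge-inclusion probabilities, F(p) = Σ_S f(S) · Π_{e∈S} p_e · Π_{e∉S} (1−p_e). The sketch below is a generic illustration of this definition (not the paper's GMT architecture or implementation); the function names and the toy set function are hypothetical.

```python
import itertools

def multilinear_extension(f, probs):
    """Exact multilinear extension of a set function f over edges.

    F(p) = sum over all subsets S of f(S) * prod_{e in S} p_e
                                          * prod_{e not in S} (1 - p_e).
    Exhaustive enumeration: only feasible for small edge sets;
    XGNNs instead approximate this expectation.
    """
    edges = list(probs)
    total = 0.0
    for k in range(len(edges) + 1):
        for subset in itertools.combinations(edges, k):
            chosen = set(subset)
            weight = 1.0
            for e in edges:
                weight *= probs[e] if e in chosen else 1.0 - probs[e]
            total += f(chosen) * weight
    return total

# Hypothetical toy example: f(S) = |S|, so F(p) = sum of edge probabilities.
edge_probs = {("u", "v"): 0.5, ("v", "w"): 0.25}
value = multilinear_extension(len, edge_probs)  # 0.5 + 0.25 = 0.75
```

For a linear set function like cardinality, the extension collapses to the sum of edge probabilities, which is a handy sanity check; for the nonconvex functions induced by a GNN classifier, no such closed form exists, which is where approximation quality matters.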
Submission Number: 19