Reconsidering Faithfulness in Regular, Self-Explainable and Domain Invariant GNNs

Published: 22 Jan 2025 · Last Modified: 22 Feb 2025 · ICLR 2025 Poster · CC BY 4.0
Keywords: Explainability, Trustworthiness, Faithfulness, Self-Explainable GNNs
TL;DR: What is faithfulness, exactly? We answer this question by providing a unified view of existing metrics and by studying the role of faithfulness in self-explainable and domain-invariant modular GNNs.
Abstract: As Graph Neural Networks (GNNs) become more pervasive, it becomes paramount to build reliable tools for explaining their predictions. A core desideratum is that explanations are *faithful*, i.e., that they portray an accurate picture of the GNN's reasoning process. However, a number of different faithfulness metrics exist, raising the question of what faithfulness is, exactly, and how to achieve it. We make three key contributions. We begin by showing that *existing metrics are not interchangeable* -- i.e., explanations attaining high faithfulness according to one metric may be unfaithful according to others -- and can *systematically ignore important properties of explanations*. We proceed to show that, surprisingly, *optimizing for faithfulness is not always a sensible design goal*. Specifically, we prove that for injective regular GNN architectures, perfectly faithful explanations are completely uninformative. This does not apply to modular GNNs, such as self-explainable and domain-invariant architectures, prompting us to study the relationship between architectural choices and faithfulness. Finally, we show that *faithfulness is tightly linked to out-of-distribution generalization*: simply ensuring that a GNN can correctly recognize the domain-invariant subgraph, as prescribed by the literature, does not guarantee that the GNN is invariant unless this subgraph is also faithful. All our code can be found in the supplementary material.
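To make the notion of a faithfulness metric concrete, the sketch below implements one common family of such metrics: sufficiency-style faithfulness, which compares a model's prediction on the full graph against its prediction on the explanation subgraph alone. This is a minimal illustration, not the paper's method -- the toy mean-aggregation "GNN", the function names, and the specific `1 - |difference|` scoring are all assumptions made for the example.

```python
import numpy as np

def gnn_readout(adj, feats, w):
    # One round of mean-aggregation message passing followed by a sum
    # readout -- a toy stand-in for a trained GNN's graph-level score.
    deg = adj.sum(axis=1, keepdims=True) + 1e-9
    h = (adj @ feats) / deg
    return float(np.tanh(h @ w).sum())

def sufficiency_faithfulness(adj, feats, mask, w):
    # Sufficiency-style faithfulness: how close the prediction on the
    # explanation subgraph (edges kept by `mask`) is to the prediction
    # on the full graph. 1.0 means perfectly faithful under this metric.
    full = gnn_readout(adj, feats, w)
    sub = gnn_readout(adj * mask, feats, w)
    return 1.0 - abs(full - sub)

# Tiny 4-node path graph 0-1-2-3 with one-hot node features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = np.eye(4)
w = np.full(4, 0.5)

full_mask = np.ones_like(adj)   # explanation that keeps every edge
print(sufficiency_faithfulness(adj, feats, full_mask, w))  # -> 1.0
```

Note how a trivial all-edges explanation scores perfectly under this metric -- one illustration of the paper's point that a single metric can ignore important properties of explanations (here, sparsity) and that different metrics need not agree.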
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6456
