Keywords: graph neural networks, explainability, formal logic, expressivity
TL;DR: We consider graph neural networks with mean aggregation and non-negative weights, showing which monotonic rules they can learn and constructing sound explanatory rules.
Abstract: Graph neural networks (GNNs) are frequently used for knowledge graph completion.
Their black-box nature has motivated work that uses sound logical rules to explain predictions and characterise their expressivity.
However, despite the prevalence of GNNs that use mean as the aggregation function, explainability and expressivity results for such models are lacking.
We consider GNNs with mean aggregation and non-negative weights (MAGNNs), precisely characterising the class of monotonic rules that can be sound for them, and providing a restricted fragment of first-order logic in which any MAGNN prediction can be explained.
Our experiments show that restricting mean-aggregation GNNs to non-negative weights yields comparable or improved performance on standard inductive benchmarks, that sound rules are obtained in practice, that the resulting explanations are insightful, and that the sound rules can expose issues in the trained models.
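For concreteness, the following is a minimal sketch (not the authors' implementation) of one message-passing layer of the kind described above: mean aggregation over neighbours combined with non-negative weights. The class name, the exact update form, and the ReLU reparameterisation used to enforce non-negativity are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a mean-aggregation layer with non-negative weights,
# assuming an update of the form
#   h_v' = act(W_self h_v + W_agg * mean_{u in N(v)} h_u + b).
import torch
import torch.nn as nn


class MeanAggNonNegLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        # Unconstrained parameters; non-negativity is enforced in forward().
        self.w_self = nn.Parameter(torch.rand(out_dim, in_dim))
        self.w_agg = nn.Parameter(torch.rand(out_dim, in_dim))
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (num_nodes, in_dim) node features; adj: (num_nodes, num_nodes) 0/1 adjacency.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)   # avoid division by zero
        mean_neigh = (adj @ h) / deg                      # mean over neighbours
        w_self = torch.relu(self.w_self)                  # one way to enforce non-negative weights
        w_agg = torch.relu(self.w_agg)
        return torch.relu(h @ w_self.T + mean_neigh @ w_agg.T + self.bias)


# Example usage on a tiny random graph (shapes only; hypothetical data):
# layer = MeanAggNonNegLayer(in_dim=8, out_dim=4)
# h = torch.rand(5, 8)
# adj = (torch.rand(5, 5) > 0.5).float()
# out = layer(h, adj)  # shape (5, 4)
```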
Supplementary Material: zip
Primary Area: Theory (e.g., control theory, learning theory, algorithmic game theory)
Submission Number: 17938