Fragment-Wise Interpretability in Graph Neural Networks via Molecule Decomposition and Contribution Analysis

ICLR 2026 Conference Submission 21080 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · License: CC BY 4.0
Keywords: interpretability, graph neural networks, molecular property prediction, message passing neural networks
TL;DR: We introduce an interpretable graph neural network for molecular property prediction that decomposes each prediction into a sum of fragment contributions.
Abstract: Graph neural networks (GNNs) are widely used for molecular property prediction. However, their black-box nature limits their adoption in critical areas such as drug discovery. Moreover, existing explainability methods often fail to reliably quantify the contribution of individual atoms or substructures, because message passing entangles local representations with information from the entire graph. As a remedy, we propose SEAL (Substructure Explanation via Attribution Learning), an interpretable GNN that partitions the molecular graph into chemically meaningful fragments and restricts information flow between them. As a result, the contribution of each substructure reflects the true influence of the corresponding chemical fragment on the prediction. Experiments on both synthetic and real molecular benchmarks demonstrate that SEAL consistently outperforms existing methods and produces explanations that chemists judge to be more intuitive and trustworthy.
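The core mechanism in the abstract, fragment-masked message passing followed by a sum of per-fragment readouts, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch implementation of that idea only; `FragmentGNN`, its layer sizes, the dense-adjacency input, and the assumption that fragment labels are supplied externally (e.g., by a chemistry-based fragmentation) are illustrative choices, not the authors' actual SEAL code.

```python
import torch
import torch.nn as nn

class FragmentGNN(nn.Module):
    """Sketch of fragment-restricted message passing (not the authors' implementation).

    Input: dense adjacency over atoms plus a fragment index per atom.
    Messages are masked so they only flow within a fragment, and the
    prediction is the sum of scalar per-fragment contributions.
    """
    def __init__(self, in_dim, hid_dim, n_layers=3):
        super().__init__()
        self.embed = nn.Linear(in_dim, hid_dim)
        self.layers = nn.ModuleList([nn.Linear(hid_dim, hid_dim) for _ in range(n_layers)])
        self.head = nn.Linear(hid_dim, 1)  # fragment embedding -> scalar contribution

    def forward(self, x, adj, frag_id):
        # x: (N, in_dim) atom features; adj: (N, N) adjacency; frag_id: (N,) long
        same_frag = (frag_id[:, None] == frag_id[None, :]).float()
        adj = adj * same_frag  # drop inter-fragment edges: no cross-fragment messages
        h = torch.relu(self.embed(x))
        for lin in self.layers:
            h = torch.relu(lin(adj @ h) + h)  # masked aggregation + residual
        n_frags = int(frag_id.max()) + 1
        # sum-pool atoms into one embedding per fragment
        frag_emb = torch.zeros(n_frags, h.size(1)).index_add_(0, frag_id, h)
        contrib = self.head(frag_emb).squeeze(-1)  # one scalar per fragment
        return contrib.sum(), contrib  # prediction = sum of fragment contributions

# Toy usage: 6 atoms split into two fragments (e.g., a ring and a substituent).
x = torch.randn(6, 8)
adj = torch.eye(6)  # placeholder connectivity; real molecules use bond adjacency
frag_id = torch.tensor([0, 0, 0, 1, 1, 1])
pred, contribs = FragmentGNN(in_dim=8, hid_dim=16)(x, adj, frag_id)
print(pred.item(), contribs.tolist())  # total prediction and per-fragment attributions
```

Because the prediction is a literal sum of the per-fragment scalars, each contribution is an exact additive attribution by construction, which is what distinguishes this design from post-hoc explainers applied to an unconstrained GNN.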
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 21080