Enhancing Interpretability in Molecular Property Prediction with Contextual Explanations of Molecular Graphical Depictions

Published: 01 Jan 2024, Last Modified: 08 Oct 2024 · AIDD@ICANN 2024 · CC BY-SA 4.0
Abstract: The field of explainable AI applied to molecular property prediction models has often been reduced to deriving atomic contributions. This has impaired the interpretability of such models, since chemists think in terms of larger, chemically meaningful structures, which often do not simply reduce to the sum of their atomic constituents. We develop an explanatory strategy yielding both local and more complex structural attributions. We derive such contextual explanations in pixel space, exploiting the property that a molecule is not merely encoded as a collection of atoms and bonds, as is the case for string- or graph-based approaches. We provide evidence that the proposed explanation method satisfies desirable properties, namely sparsity and invariance with respect to the molecule's symmetries, to a larger degree than the SMILES-based counterpart model. Nonetheless, contextual explanations correlate as expected with their string-based counterparts, as well as with ground truths where available. Contextual explanations thus maintain the accuracy of the original explanations while improving their interpretability.
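The abstract gives no implementation details, but the minimal sketch below illustrates the general idea of pixel-space attribution for a depiction-based predictor: a molecule is rendered with RDKit, a stand-in CNN (`PropertyCNN`) predicts a property from the image, and a simple input-gradient saliency map is aggregated into atom-centred regions using the depiction's drawing coordinates. The saliency method and the region aggregation are illustrative assumptions, not the contextual-explanation procedure proposed in the paper.

```python
# Hypothetical sketch: pixel-space attributions for an image-based molecular
# property predictor, aggregated into atom-centred regions. The CNN and the
# saliency method (plain input gradients) are illustrative placeholders,
# not the contextual-explanation method described in the abstract.
import io

import numpy as np
import torch
import torch.nn as nn
from PIL import Image
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.Chem.Draw import rdMolDraw2D

IMG_SIZE = 224


class PropertyCNN(nn.Module):
    """Stand-in CNN regressor operating on 2D molecular depictions."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


def render_molecule(smiles: str):
    """Render a 2D depiction and keep each atom's drawing coordinates."""
    mol = Chem.MolFromSmiles(smiles)
    AllChem.Compute2DCoords(mol)
    drawer = rdMolDraw2D.MolDraw2DCairo(IMG_SIZE, IMG_SIZE)
    drawer.DrawMolecule(mol)
    drawer.FinishDrawing()
    image = Image.open(io.BytesIO(drawer.GetDrawingText())).convert("RGB")
    atom_xy = [drawer.GetDrawCoords(i) for i in range(mol.GetNumAtoms())]
    return mol, image, [(p.x, p.y) for p in atom_xy]


def pixel_saliency(model: nn.Module, image: Image.Image) -> np.ndarray:
    """Absolute input-gradient saliency map (H, W) for the model output."""
    x = torch.from_numpy(np.asarray(image, dtype=np.float32) / 255.0)
    x = x.permute(2, 0, 1).unsqueeze(0).requires_grad_(True)
    model(x).sum().backward()
    return x.grad.abs().sum(dim=1).squeeze(0).numpy()


def atom_region_scores(saliency: np.ndarray, atom_xy, radius: float = 12.0):
    """Aggregate pixel saliency inside circular atom-centred regions.

    Overlapping regions around neighbouring atoms give a crude notion of
    attribution for larger substructures rather than isolated atoms.
    """
    h, w = saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    scores = []
    for cx, cy in atom_xy:
        mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
        scores.append(float(saliency[mask].sum()))
    return scores


if __name__ == "__main__":
    mol, image, atom_xy = render_molecule("c1ccccc1O")  # phenol
    model = PropertyCNN().eval()  # untrained; for illustration only
    sal = pixel_saliency(model, image)
    for atom, score in zip(mol.GetAtoms(), atom_region_scores(sal, atom_xy)):
        print(f"atom {atom.GetIdx():2d} ({atom.GetSymbol()}): {score:.3f}")
```

Because the attributions live in pixel space, scores assigned to overlapping or adjacent regions of the depiction can be read jointly, which is one way such explanations can capture context beyond individual atoms; the exact grouping strategy used by the authors may differ.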