Keywords: explainable artificial intelligence for chemistry, molecular machine learning, materials discovery, molecular fingerprints, feature importance, counterfactuals
TL;DR: This study compares explainable artificial intelligence (XAI) methods on chemical tasks, revealing significant differences in their resulting feature importance rankings.
Abstract: Machine learning models have become a powerful tool in materials science, accelerating the discovery process by accurately predicting molecular properties. However, the black-box nature of many of these models makes it difficult to understand or validate the reasoning behind their predictions. Consequently, explainable artificial intelligence (XAI) has emerged as a field of study aimed at making these models more transparent. Explainability is especially important in chemistry and materials science, where identified structure-property relationships can be used to guide the design of novel molecules. Yet there is still a lack of XAI benchmarks focused on molecular data. As a preliminary step toward addressing this gap, this study presents a comparative analysis of five selected XAI methods on molecular fingerprints, which are commonly used representations for property prediction tasks. Our results reveal significant discrepancies in the feature importance rankings generated by different XAI methods, demonstrating that the choice of explanation approach can introduce bias and alter scientific interpretation in the materials discovery process.
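The abstract's central claim is that different XAI methods rank the same fingerprint bits differently. One common way to quantify such disagreement is a rank correlation coefficient such as Kendall's tau. The sketch below uses hypothetical importance scores (not the paper's data) for four fingerprint bits, attributed by two unnamed XAI methods, and measures how well their rankings agree:

```python
from scipy.stats import kendalltau

# Hypothetical feature-importance scores over the same four
# fingerprint bits, as produced by two different XAI methods
# (illustrative values only, NOT results from the study).
method_a = [0.9, 0.1, 0.4, 0.7]
method_b = [0.8, 0.2, 0.5, 0.3]

# Kendall's tau compares the orderings induced by the scores:
# +1 = identical ranking, 0 = no association, -1 = reversed.
tau, p_value = kendalltau(method_a, method_b)
print(f"Kendall tau = {tau:.2f}")  # -> Kendall tau = 0.67
```

A tau well below 1 across method pairs, aggregated over many molecules, is the kind of evidence that would support the reported ranking discrepancies.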
Submission Track: Findings, Tools & Open Challenges
Submission Category: AI-Guided Design
Institution Location: Poznan, Poland
AI4Mat Journal Track: Yes
Submission Number: 101