Finding Structure-Property Relationships for Molecular Property Predictions with Globally Explainable AI

Published: 17 Jun 2024 · Last Modified: 16 Jul 2024 · ML4LMS Poster · CC BY 4.0
Keywords: Graph Neural Networks, Explainable AI, Concept Explanations, Molecular Property Prediction
TL;DR: We use global concept explainability to extract general structure-property relationships from graph neural networks.
Abstract: AI models have advanced various branches of society, industry, and science. In the domains of chemistry and materials science, for example, graph neural networks have proven a useful tool for molecular and material property predictions. Despite their superior predictive performance, the inner workings of most complex AI models remain elusive to human observers. Methods of explainable AI (xAI) can be used to increase the transparency of these predictions, yielding a better understanding not only of the model's behavior but also of the underlying task itself. We introduce a method to extract structure-property relationships for molecular property predictions from global concept explanations. By clustering in a latent space of subgraph embeddings, we discover molecules with similar subgraph motifs. For each cluster of similar substructures we can compute an average contribution towards the model's target prediction, thereby reconstructing the general structure-property relationships on which the model's decisions are based. Finally, a language model can be prompted with all information about the observed structural motifs to provide a hypothesis for a causal explanation. We validate our method on various synthetic and real-world graph property prediction tasks and find that it is able to reproduce known chemical rules of thumb.
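The clustering-and-averaging step described in the abstract could be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the choice of k-means, the function name, and the toy data are all assumptions, since the abstract does not specify the clustering algorithm or the source of the subgraph embeddings.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_contributions(embeddings, contributions, n_clusters=3, seed=0):
    """Cluster subgraph embeddings (hypothetical stand-in for the paper's
    latent space) and average each cluster's contribution score towards
    the model's target prediction."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(embeddings)
    # Mean contribution per discovered substructure cluster.
    avg = {c: float(contributions[labels == c].mean()) for c in range(n_clusters)}
    return labels, avg

# Toy data: 60 subgraph embeddings in an 8-D latent space, each with a
# scalar contribution score (e.g., from an attribution method).
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(60, 8))
contributions = rng.normal(size=60)
labels, avg_contrib = cluster_contributions(embeddings, contributions)
```

Clusters with a large positive or negative average contribution would then correspond to candidate structure-property rules, which a language model could be asked to interpret.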
Poster: pdf
Submission Number: 22