MOSE-GNN: A Motif-Based Self-Explaining Graph Neural Network for Molecular Property Prediction

Published: 16 Nov 2024, Last Modified: 26 Nov 2024 · LoG 2024 Poster · CC BY 4.0
Keywords: Graph Neural Networks, Interpretability, Molecular property prediction
TL;DR: MOSE-GNN integrates motif importance scoring into the GNN architecture, offering global, chemically meaningful explanations of its predictions.
Abstract: Graph Neural Networks (GNNs) have shown significant utility in molecular property prediction but lack interpretability. Most existing interpretability methods focus on instance-based explanations at the node or edge level; such methods fail to provide a holistic understanding of how key molecular structures influence the model's predictions. This underscores the need for a model-based approach that explains predictions in terms of crucial motifs and their impact on the model's overall decision-making. To address this challenge, we introduce the MOtif-based Self-Explaining GNN (MOSE-GNN), an ante-hoc method that integrates motif importance scoring into the GNN architecture. MOSE-GNN assigns global importance scores to predefined motifs, which are shared among instances and generated with RDKit's implementation of BRICS molecular segmentation. These scores quantify the extent to which the model uses information from each motif to predict each class, serving as an explanation of each motif's contribution to the class prediction. Our results on three classification tasks (mutagenicity, blood-brain barrier permeation, and cardiotoxicity) demonstrate that MOSE-GNN generates meaningful motif importance scores without sacrificing predictive performance and, in some cases, even improves it.
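
To make the motif-generation step concrete: the abstract states that the shared motif vocabulary comes from BRICS molecular segmentation in RDKit. The sketch below shows how such a vocabulary could be built from a list of SMILES strings using RDKit's `BRICS.BRICSDecompose`; the helper name `build_motif_vocabulary` and the example molecules are illustrative, not taken from the paper.

```python
# A minimal sketch, assuming the motif vocabulary is built by applying
# RDKit's BRICS decomposition to every training molecule, as the abstract
# describes. Helper name and example molecules are illustrative only.
from rdkit import Chem
from rdkit.Chem import BRICS

def build_motif_vocabulary(smiles_list):
    """Collect all BRICS fragments that occur across a dataset."""
    vocabulary = set()
    for smiles in smiles_list:
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            continue  # skip unparseable SMILES
        # BRICSDecompose yields fragment SMILES strings for one molecule
        vocabulary |= set(BRICS.BRICSDecompose(mol))
    return sorted(vocabulary)

motifs = build_motif_vocabulary([
    "CC(=O)Oc1ccccc1C(=O)O",         # aspirin
    "CN1C=NC2=C1C(=O)N(C)C(=O)N2C",  # caffeine
])
print(len(motifs), "motifs in the shared vocabulary")
```

The global importance scores could then be realized as a learnable (motif × class) table that weights per-motif contributions to the logits. The PyTorch module below is a hypothetical sketch of that idea inferred from the abstract, not the authors' verified architecture; all names, shapes, and the initialization are assumptions.

```python
# Hypothetical sketch of a globally shared motif-importance readout,
# inferred from the abstract; NOT the paper's actual architecture.
import torch
import torch.nn as nn

class MotifImportanceHead(nn.Module):
    """One learnable importance score per (motif, class), shared across
    all molecules; scores weight per-motif contributions to the logits."""

    def __init__(self, num_motifs: int, embed_dim: int, num_classes: int):
        super().__init__()
        # small random init is arbitrary; chosen so gradients flow at step 0
        self.scores = nn.Parameter(0.01 * torch.randn(num_motifs, num_classes))
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, motif_embeddings: torch.Tensor, motif_ids: torch.Tensor):
        # motif_embeddings: (m, embed_dim) pooled GNN embeddings of the
        #   m motifs present in one molecule
        # motif_ids: (m,) indices into the shared motif vocabulary
        per_motif_logits = self.classifier(motif_embeddings)  # (m, C)
        weights = self.scores[motif_ids]                      # (m, C)
        return (weights * per_motif_logits).sum(dim=0)        # (C,)
```

Under this reading, inspecting a column of `self.scores` would yield the per-class motif explanations the abstract describes, since the scores are shared across all instances.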
Submission Type: Extended abstract (max 4 main pages).
Submission Number: 159