Predictive and Explanatory Uncertainties in Graph Neural Networks: A Case Study in Molecular Property Prediction

Published: 05 Nov 2025, Last Modified: 05 Nov 2025, NLDL 2026 Poster, CC BY 4.0
Keywords: Uncertainty Quantification, Explainable AI, Graph Neural Networks, Molecular Property Prediction, Trustworthy AI
Abstract: Accurate molecular property prediction is a key challenge in fields such as drug discovery and materials science, where deep learning models offer promising solutions. However, the widespread use of these models is hindered by their lack of transparency and the difficulty of assessing the reliability of their predictions. In this study, we address these issues by integrating uncertainty quantification and explainable AI techniques to enhance the trustworthiness of graph neural networks for molecular property prediction. We focus on predicting two distinct properties: aqueous solubility and mutagenicity. By deriving explanations as substructure masks, we obtain interpretable, chemically meaningful substructures that influence the model's predictions. Additionally, we incorporate uncertainty quantification to evaluate the confidence of both the predictions and their explanations. Our results demonstrate that (1) predictive uncertainty scores correlate with prediction accuracy for both tasks, (2) uncertainties in the explanations also correlate with prediction correctness, and (3) there is a weak to moderate correlation between the uncertainties in the predictions and those in the explanations. These findings highlight the potential of combining uncertainty quantification and explainability to improve the trustworthiness of molecular property prediction models.
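
The abstract does not specify which uncertainty quantification or explanation methods are used; as a minimal sketch, the snippet below assumes an ensemble of stochastic forward passes (e.g. MC dropout over a GNN) and uses synthetic placeholder arrays (`probs`, `masks`, and `labels` are hypothetical stand-ins, not the paper's data) to illustrate how the three reported correlations could be computed with a rank correlation.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical setup: T stochastic forward passes (e.g. MC-dropout samples of
# a GNN classifier) over N molecules; all arrays below are synthetic stand-ins.
T, N, num_atoms = 30, 200, 20
probs = rng.uniform(0.0, 1.0, size=(T, N))              # predicted mutagenicity probabilities
masks = rng.uniform(0.0, 1.0, size=(T, N, num_atoms))   # per-atom substructure masks
labels = rng.integers(0, 2, size=N)                     # placeholder ground-truth labels

mean_prob = probs.mean(axis=0)             # ensemble prediction per molecule
pred_unc = probs.std(axis=0)               # predictive uncertainty score
expl_unc = masks.std(axis=0).mean(axis=1)  # explanation uncertainty: spread of the masks

correct = ((mean_prob > 0.5).astype(int) == labels).astype(float)

# (1)/(2): do higher prediction/explanation uncertainties track incorrect predictions?
rho_pred, _ = spearmanr(pred_unc, correct)
rho_expl, _ = spearmanr(expl_unc, correct)
# (3): how strongly do the two kinds of uncertainty agree with each other?
rho_both, _ = spearmanr(pred_unc, expl_unc)
print(f"pred-unc vs correctness: {rho_pred:.3f}, "
      f"expl-unc vs correctness: {rho_expl:.3f}, "
      f"pred-unc vs expl-unc: {rho_both:.3f}")
```

With real model outputs in place of the random arrays, the same three Spearman coefficients correspond to findings (1)-(3) of the abstract.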
Serve As Reviewer: ~Marisa_Wodrich1
Submission Number: 39