Understanding Neural ODE prediction decision using SHAP

Published: 03 Nov 2023 · Last Modified: 03 Jan 2024 · NLDL 2024
Keywords: Neural ODEs, SHAP, explainability
TL;DR: This paper proposes an interpretation of neural ordinary differential equations (NODEs) for image classification using SHapley Additive exPlanations (SHAP), an explainable AI method.
Abstract: Neural ordinary differential equations (NODEs) have emerged as a powerful approach for modelling complex dynamic systems through continuous-time transformations. Although NODEs offer superior modelling capabilities, little research has examined the factors that contribute to their predictions on image datasets. In this paper, we propose leveraging SHapley Additive exPlanations (SHAP), an influential explainable artificial intelligence method, to gain insight into the NODE prediction process. By adapting SHAP to the continuous-time nature of NODEs, we enable interpretable analysis of the pixels that contribute most to their prediction decisions. Experiments on synthetic datasets demonstrate the efficacy of the proposed approach in revealing the dynamics and important features that drive NODE predictions. Our empirical findings provide insight into how NODEs determine important features and into the distribution of Shapley values for each class. The proposed integration of SHAP with NODEs contributes to the broader goal of enhancing transparency and trustworthiness when continuous-time models are applied to complex real-world systems.
Git: https://github.com/dlkphuong/NODEs-x-SHAP
Submission Number: 24
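
The abstract describes attributing a NODE classifier's predictions to input pixels with SHAP. The sketch below is a minimal illustration of that combination, not the authors' code: it assumes PyTorch with `torchdiffeq` for the ODE block and the `shap` package's `GradientExplainer`; the architecture, image size, and placeholder data are illustrative assumptions.

```python
# Minimal sketch: SHAP pixel attributions for a Neural ODE image classifier.
# Assumes torch, torchdiffeq, and shap are installed; all sizes are illustrative.
import torch
import torch.nn as nn
from torchdiffeq import odeint
import shap

class ODEFunc(nn.Module):
    """Vector field f(t, h) defining the continuous-time transformation."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.Tanh(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, t, h):
        return self.net(h)

class NODEClassifier(nn.Module):
    """Encoder -> ODE block integrated over [0, 1] -> linear classification head."""
    def __init__(self, channels=16, num_classes=10):
        super().__init__()
        self.encoder = nn.Conv2d(1, channels, 3, padding=1)
        self.odefunc = ODEFunc(channels)
        self.t = torch.tensor([0.0, 1.0])
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, num_classes)
        )

    def forward(self, x):
        h0 = self.encoder(x)
        # odeint returns the state at every time in self.t; keep the final state h(1).
        hT = odeint(self.odefunc, h0, self.t.to(x.device))[-1]
        return self.head(hT)

model = NODEClassifier().eval()
background = torch.randn(32, 1, 28, 28)   # placeholder background images
test_images = torch.randn(4, 1, 28, 28)   # placeholder images to explain

# GradientExplainer differentiates through the ODE solve, giving per-pixel
# Shapley value estimates for each output class.
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(test_images)
```

The choice of `GradientExplainer` here is one convenient option because the ODE solve in `torchdiffeq` is differentiable end to end; other SHAP explainers (e.g. kernel-based) could be substituted, and the paper's repository should be consulted for the exact setup used in the experiments.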