TL;DR: How SHAP can be used to explain RL models that predict disease progression
Abstract: We present a novel application of SHAP (SHapley Additive exPlanations) to enhance the interpretability of Reinforcement Learning (RL) models used for Alzheimer's Disease (AD) progression prediction. Leveraging RL's predictive capabilities on a subset of the ADNI dataset, we employ SHAP to explain the model's decision-making process. Our approach provides detailed insights into the key factors influencing AD progression predictions, offering both global and individual, patient-level interpretability. By bridging the gap between predictive power and transparency, our work is a step towards empowering clinicians and researchers to gain a deeper understanding of AD progression and facilitate more informed decision-making in AD-related research and patient care. To encourage further exploration, we open-source our codebase at https://github.com/rfali/xrlad.
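For readers who want to experiment before visiting the repository, the sketch below shows one common way to connect SHAP to an RL model: wrap the policy as a plain prediction function and hand it to `shap.KernelExplainer`. It is a minimal sketch only; the feature names, toy policy weights, and random patient states are illustrative assumptions, not the paper's actual ADNI features or trained model.

```python
# Minimal sketch (NOT the authors' pipeline): explaining an RL policy with SHAP.
# The policy, weights, and feature names below are hypothetical stand-ins.
import numpy as np
import shap

rng = np.random.default_rng(0)
feature_names = ["hippocampus_vol", "cognition_score", "task_load"]  # hypothetical

def policy_fn(states: np.ndarray) -> np.ndarray:
    # Stand-in for a trained RL policy mapping patient state vectors to a
    # scalar action; replace with the real model's forward pass.
    w = np.array([0.6, -0.3, 0.1])
    return states @ w

background = rng.normal(size=(50, 3))  # reference states for the explainer
patients = rng.normal(size=(5, 3))     # patient states to explain

explainer = shap.KernelExplainer(policy_fn, background)
shap_values = explainer.shap_values(patients)

# Global view: feature importance aggregated across the explained patients.
shap.summary_plot(shap_values, patients, feature_names=feature_names)
```

Per-patient (local) explanations follow the same pattern: explain a single state row and inspect its SHAP values directly or with SHAP's individual plots.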
Submission Track: Full Paper Track
Application Domain: Healthcare
Survey Question 1: We use SHAP to explain an RL model that predicts AD progression over 10 years. We make the RL model, which is built from a reward function and domain knowledge, interpretable and explainable.
Survey Question 2: In healthcare applications, incorporating explainability is motivated by the need for transparency and trust in model predictions. Black-box methods can hinder clinical acceptance, limit insights into disease progression, and pose challenges in identifying biases, making explainable AI essential for informed and accountable decision-making.
Survey Question 3: We used SHAP, a very popular explainability library. However, it has seen very limited application to RL models (because of RL's traditionally large state space), and hence this work is novel in demonstrating the application of SHAP to longitudinal data (disease progression over a 10-year period).
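As a rough illustration of the longitudinal angle, the hedged sketch below computes SHAP values at each yearly state along a simulated 10-year trajectory, yielding a per-year attribution matrix. The linear policy and random-walk state dynamics are placeholders, not the paper's disease model.

```python
# Hedged sketch: per-year SHAP attributions over a 10-year horizon.
# Policy and state dynamics are toy placeholders for illustration only.
import numpy as np
import shap

rng = np.random.default_rng(1)

def policy_fn(states):
    w = np.array([0.6, -0.3, 0.1])  # hypothetical policy weights
    return states @ w

background = rng.normal(size=(50, 3))
explainer = shap.KernelExplainer(policy_fn, background)

state = rng.normal(size=3)
yearly_attributions = []
for year in range(10):
    sv = explainer.shap_values(state.reshape(1, -1))
    yearly_attributions.append(sv[0])          # SHAP values at this year's state
    state = state + 0.05 * rng.normal(size=3)  # placeholder progression dynamics

# Rows are years, columns are features: how each feature's influence shifts over time.
print(np.round(np.array(yearly_attributions), 3))
```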
Submission Number: 70