Explaining Longitudinal Clinical Outcomes using Domain-Knowledge driven Intermediate Concepts

Published: 27 Oct 2023, Last Modified: 26 Nov 2023, NeurIPS XAIA 2023
TL;DR: A self-explaining neural network that predicts domain-knowledge driven auxiliary clinical concepts and uses them both to predict and to explain the final clinical outcome.
Abstract: The black-box nature of complex deep learning models makes it challenging to explain the rationale behind model predictions to clinicians and healthcare providers. Most current explanation methods in healthcare provide explanations through feature importance scores, which identify clinical features that are important for prediction. For high-dimensional clinical data, using individual input features as units of explanation often leads to noisy explanations that are sensitive to input perturbations and less informative for clinical interpretation. In this work, we design a novel deep learning framework that predicts domain-knowledge driven intermediate high-level clinical concepts from input features and uses them as units of explanation. Our framework is self-explaining: relevance scores are generated for each concept, and the model learns to predict and explain the final outcome in an end-to-end joint training scheme. We perform systematic experiments on a real-world electronic health records dataset to evaluate both the performance and explainability of the predicted clinical concepts.
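To make the described architecture concrete, the following is a minimal sketch of a self-explaining concept-based model of the kind the abstract describes: a recurrent encoder over longitudinal EHR input, a head that predicts intermediate clinical concepts, and a head that produces per-concept relevance scores whose weighted combination yields the outcome. This is an illustrative reconstruction, not the authors' exact architecture; all layer choices and names (e.g., `n_concepts`, `hidden_dim`) are assumptions.

```python
# Illustrative sketch (not the paper's exact model): a self-explaining network
# that predicts intermediate clinical concepts from longitudinal EHR input and
# combines them via learned relevance scores to produce the final outcome.
import torch
import torch.nn as nn

class ConceptSelfExplainer(nn.Module):
    def __init__(self, n_features, hidden_dim, n_concepts):
        super().__init__()
        # Encode the longitudinal visit history.
        self.encoder = nn.GRU(n_features, hidden_dim, batch_first=True)
        # Predict high-level clinical concepts from the encoded history.
        self.concept_head = nn.Linear(hidden_dim, n_concepts)
        # Generate a relevance score for each concept (the explanation).
        self.relevance_head = nn.Linear(hidden_dim, n_concepts)

    def forward(self, x):
        # x: (batch, time, n_features) longitudinal EHR input
        _, h = self.encoder(x)            # h: (1, batch, hidden_dim)
        h = h.squeeze(0)
        concepts = torch.sigmoid(self.concept_head(h))  # predicted concepts
        relevance = self.relevance_head(h)               # per-concept relevance
        # Outcome logit = relevance-weighted sum of concept activations,
        # so each concept's contribution is directly readable from the scores.
        logit = (relevance * concepts).sum(dim=-1, keepdim=True)
        return logit, concepts, relevance
```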
Submission Track: Full Paper Track
Application Domain: Healthcare
Survey Question 1: Feature-based explanations are common for explaining deep learning models in healthcare. However, they are noisy and sensitive to input perturbations for high-dimensional EHR data. We learn clinical concepts that are supervised by domain knowledge, represent intermediate high-level features derived from the input features, and are easier to interpret clinically. Our model predicts longitudinal clinical outcomes using the learnt concepts as units of explanation. Our model is self-explaining in nature: relevance scores (importances) for each concept are learnt simultaneously in an end-to-end training scheme, so that the model predicts and explains at the same time.
Survey Question 2: The black-box nature of complex deep learning models makes it challenging to explain the rationale behind model predictions to clinicians and healthcare providers. To ensure that clinicians and other end-users trust model predictions, it is important to understand why the model is making a certain prediction. Most current explanation methods in healthcare provide explanations through feature importance scores, which identify clinical features that are important for prediction. For high-dimensional clinical data, using individual input features as units of explanation often leads to noisy explanations that are sensitive to input perturbations and less informative for clinical interpretation. These problems motivated us not only to understand the reasoning behind black-box predictions of clinical outcomes but also to learn high-level clinical concepts from high-dimensional EHR data that can serve as clinically informative units of explanation.
Survey Question 3: We do not use any post-hoc explanation methods like LIME, SHAP, or GradCAM to explain the predictions of our model. Our model is self-explaining and can predict and explain at the same time. We learn both high-level intermediate clinical concepts and their corresponding relevance scores within the same architecture in an end-to-end approach. The relevance scores highlight the importance or contribution of each concept towards the final outcome.
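Since the concepts and their relevance scores are learnt jointly with the outcome, the training objective presumably combines an outcome loss with concept supervision. The sketch below shows one plausible end-to-end joint training step under that assumption; the loss weight `lambda_c` and all function names are illustrative, not the paper's exact formulation.

```python
# Sketch of an end-to-end joint training step, assuming ground-truth concept
# labels derived from domain knowledge are available alongside outcome labels.
import torch
import torch.nn.functional as F

def joint_training_step(model, optimizer, x, concept_labels, outcome_labels,
                        lambda_c=1.0):
    optimizer.zero_grad()
    logit, concepts, relevance = model(x)
    # Supervise the final clinical outcome.
    outcome_loss = F.binary_cross_entropy_with_logits(
        logit.squeeze(-1), outcome_labels.float())
    # Supervise the intermediate concepts with domain-knowledge-derived labels,
    # so the units of explanation stay clinically meaningful.
    concept_loss = F.binary_cross_entropy(concepts, concept_labels.float())
    loss = outcome_loss + lambda_c * concept_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```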
Submission Number: 74