ExpLIMEable: An exploratory framework for LIME

Published: 27 Oct 2023, Last Modified: 07 Nov 2023, NeurIPS XAIA 2023
TL;DR: We introduce ExpLIMEable, a tool that helps users understand LIME and its sensitivity and robustness to parameter choices. Additionally, we propose a new method for selecting LIME's perturbations by performing dimensionality reduction.
Abstract: ExpLIMEable is a tool to enhance the comprehension of Local Interpretable Model-Agnostic Explanations (LIME), particularly within the realm of medical image analysis. LIME explanations often lack robustness due to variance in perturbation techniques and choices of interpretable function. Powered by a convolutional neural network for brain MRI tumor classification, ExpLIMEable seeks to mitigate these issues. This explainability tool allows users to tailor and explore the explanation space generated post hoc by different LIME parameters to gain deeper insights into the model's decision-making process, its sensitivity, and its limitations. We introduce a novel dimensionality-reduction step on the perturbations, seeking more informative neighborhood spaces, together with extensive provenance tracking to support the user. This contribution ultimately aims to enhance the robustness of explanations, which is key in high-risk domains like healthcare.
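To illustrate the kind of perturbation-space manipulation the abstract describes, the following minimal sketch generates LIME-style superpixel perturbation masks for an image and projects them with PCA into a lower-dimensional neighborhood. This is not the authors' implementation: the image, the classifier function, the SLIC segmentation settings, and the use of PCA with 10 components are all placeholder assumptions.

```python
# Illustrative sketch only: LIME-style image perturbations followed by a PCA
# projection of the perturbation masks. The image and classifier are stand-ins.
import numpy as np
from skimage.segmentation import slic
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-in for an MRI slice (assumption; real input would be a scan slice).
image = rng.random((128, 128, 3))

def classifier_fn(batch):
    # Placeholder for a CNN probability function: returns fake class probabilities.
    logits = rng.random((len(batch), 2))
    return logits / logits.sum(axis=1, keepdims=True)

# 1. Segment the image into superpixels (LIME's interpretable features).
segments = slic(image, n_segments=50, compactness=10)
segment_ids = np.unique(segments)
n_features = len(segment_ids)

# 2. Sample binary perturbation masks: 1 keeps a superpixel, 0 blanks it out.
num_samples = 500
masks = rng.integers(0, 2, size=(num_samples, n_features))

# 3. Build the perturbed images and query the (placeholder) model.
perturbed = []
for mask in masks:
    img = image.copy()
    keep = segment_ids[mask == 1]
    img[~np.isin(segments, keep)] = 0
    perturbed.append(img)
predictions = classifier_fn(np.stack(perturbed))

# 4. Dimensionality reduction on the perturbation masks, in the spirit of the
#    "more informative neighborhood spaces" idea (PCA here is an assumption).
pca = PCA(n_components=10)
reduced_neighborhood = pca.fit_transform(masks)
print(reduced_neighborhood.shape)  # (500, 10)
```

The reduced coordinates could then be used to weight or select perturbed samples before fitting LIME's local surrogate model; the exact way ExpLIMEable uses the reduced space is described in the paper itself, not in this sketch.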
Submission Track: Demo Track
Application Domain: Healthcare
Survey Question 1: We have developed a tool aimed at enhancing the comprehension of LIME, a widely used explainable machine learning technique that offers local explanations of a model's predictions for specific inputs. LIME's instability with respect to its implementation parameters has been discussed and acknowledged in the literature, which makes it less robust and reliable. In our study, we present a tool that enables users to navigate the explanation space of LIME and identify robust parameter regions, specifically within the context of a healthcare problem, MRI brain tumor classification, where explainability is vital due to the high stakes involved.
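As a rough illustration of the parameter sweep described above, the sketch below varies the kernel width and number of samples and compares the top-weighted superpixels across settings to gauge stability. It uses the open-source `lime` package rather than the ExpLIMEable tool, and the image, the placeholder classifier, and the grid of parameter values are assumptions made only for this example.

```python
# Illustrative sketch only: sweep LIME parameters and inspect explanation stability.
from itertools import product
import numpy as np
from lime import lime_image

rng = np.random.default_rng(0)
image = rng.random((96, 96, 3))  # stand-in for an MRI slice (assumption)

def classifier_fn(images):
    # Placeholder CNN: "tumor" probability grows with mean intensity.
    scores = np.asarray([img.mean() for img in images])
    return np.stack([1 - scores, scores], axis=1)

for kernel_width, num_samples in product([0.1, 0.25, 0.5], [200, 1000]):
    explainer = lime_image.LimeImageExplainer(kernel_width=kernel_width, random_state=0)
    explanation = explainer.explain_instance(
        image, classifier_fn, top_labels=1, hide_color=0, num_samples=num_samples
    )
    label = explanation.top_labels[0]
    weights = dict(explanation.local_exp[label])
    top = sorted(weights, key=lambda k: abs(weights[k]), reverse=True)[:5]
    print(f"kernel_width={kernel_width}, num_samples={num_samples}, top superpixels={top}")
```

Comparing how the highest-weighted superpixels change across the grid gives a simple, quantitative sense of the robust parameter regions the tool is designed to help users find interactively.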
Survey Question 2: Our focus was on comprehending an explainability method for healthcare applications, given its crucial role in high-risk domains like medicine, where doctors will only rely on machine learning predictions they can understand. Having identified a sensitivity gap in certain methods, we selected LIME, a commonly used approach, and aim to enhance its applicability in clinical settings by providing a better understanding of its behavior. Additionally, we discuss the potential for extending this tool to other explainability methods.
Survey Question 3: We focus our work on LIME.
Submission Number: 12