CLE: Context-Aware Local Explanations for High Dimensional Tabular Data

Published: 2024 · Last Modified: 10 Nov 2025 · ICMLA 2024 · CC BY-SA 4.0
Abstract: Explainable artificial intelligence (XAI) seeks to enhance the transparency, interpretability, and trustworthiness of AI models. One strategy for explaining complex AI models on high-dimensional tabular data is to approximate them locally with surrogate models. Surrogate models such as linear regression and decision trees are inherently interpretable and can therefore serve as explanations. However, linear regression and decision trees struggle to provide meaningful explanations for data points far from the decision boundary. In this paper, we propose CLE, a framework that provides Context-aware Local Explanations for high-dimensional tabular data. We observe that the quality of explanations from different local models varies depending on the data point. The CLE framework uses the context around a data point to select the type of symbolic explanation. Moreover, we propose utilizing feature attributions to explain data points that are far from the decision boundary. The proposed method is evaluated on high-dimensional tabular datasets from the domains of power systems, breast cancer detection, heart disease detection, and website phishing detection. The experimental results show that CLE provides meaningful local explanations for data points far from the decision boundary. The framework explains data points using three different types of local models and demonstrates a smooth trade-off between explanation accuracy and interpretability. Notably, a relatively simple decision tree can explain a data point with 92.31% accuracy.
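As a rough illustration of the local-surrogate idea the abstract describes, the sketch below fits a shallow decision tree to a black-box classifier in the neighborhood of a single data point (in the style of LIME-like methods). This is an assumption-laden sketch of the general technique, not CLE itself: the abstract does not specify CLE's perturbation scheme or its context-based model selection, and the `local_surrogate` helper, its parameters, and the breast-cancer dataset choice are all illustrative.

```python
# Hypothetical sketch of a local surrogate explanation for a tabular
# black-box model. This illustrates the generic surrogate idea named in
# the abstract, NOT the specific CLE framework.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# Breast cancer detection is one of the evaluation domains mentioned
# in the abstract; here it just serves as a convenient tabular dataset.
X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def local_surrogate(x, n_samples=500, scale=0.1, max_depth=3, seed=0):
    """Fit a shallow decision tree to the black box around point x."""
    rng = np.random.default_rng(seed)
    # Perturb x with Gaussian noise scaled to each feature's std dev,
    # producing a synthetic neighborhood around the point to explain.
    noise = rng.normal(0.0, scale * X.std(axis=0),
                       size=(n_samples, X.shape[1]))
    neighborhood = x + noise
    # Label the neighborhood with the black-box model's predictions.
    labels = black_box.predict(neighborhood)
    # The shallow tree is the interpretable local explanation.
    tree = DecisionTreeClassifier(max_depth=max_depth, random_state=seed)
    tree.fit(neighborhood, labels)
    # Local fidelity: how often the surrogate agrees with the black box
    # on the neighborhood (the "explanation accuracy" being traded off
    # against interpretability).
    fidelity = (tree.predict(neighborhood) == labels).mean()
    return tree, fidelity

tree, fidelity = local_surrogate(X[0])
print(f"local fidelity: {fidelity:.2f}")
```

For points far from the decision boundary, the sampled neighborhood may contain only one class, which is exactly the regime where the abstract argues symbolic surrogates become uninformative and feature attributions are preferable.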