Explaining Explainers: Necessity and Sufficiency in Tabular Data

Published: 28 Oct 2023, Last Modified: 16 Nov 2023
TRL @ NeurIPS 2023 Poster
Keywords: Explainable AI, High dimensional data, Counterfactuals, Machine Learning, ML Algorithms, XAI, Classification, Trust in AI
TL;DR: Given the discrepancies among XAI methods, our novel approach quantifies necessity and sufficiency to increase trustworthiness, help experts relate explanations to their domain knowledge, and provide an additional basis for evaluating the results of local explanation methods.
Abstract: In recent years, ML classifiers trained on tabular data have been used to make fast and efficient decisions in a wide range of decision-making tasks. The lack of transparency in these models' decision-making processes has led to the emergence of eXplainable AI (XAI). However, discrepancies exist among XAI methods, raising concerns about their accuracy: what counts as an "important" or "relevant" feature differs from one explanation strategy to another. Grounding these methods in the theoretically backed notions of necessity and sufficiency can therefore be a reliable way to increase their trustworthiness. We propose a novel approach to quantifying these two concepts, providing a means to explore which explanation method is suitable for tasks involving sparse, high-dimensional tabular datasets. Moreover, our global necessity and sufficiency scores aim to help experts correlate our findings with their domain knowledge, and offer an additional basis for evaluating the results of popular local explanation methods such as LIME and SHAP.
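To make the two notions concrete, the following is a minimal illustrative sketch (not the authors' method) of how per-instance necessity and sufficiency of a feature subset could be estimated by perturbation for a tabular classifier. The names `model`, `X_background`, `x`, and `subset` are assumptions: a fitted scikit-learn-style classifier, a background dataset, an instance to explain, and a list of feature indices.

```python
import numpy as np

def sufficiency(model, x, subset, X_background, n_samples=500, rng=None):
    """Fraction of samples where holding `subset` at x's values (while
    resampling all other features from the background data) preserves
    the model's original prediction for x."""
    rng = np.random.default_rng(rng)
    original = model.predict(x.reshape(1, -1))[0]
    idx = rng.integers(0, len(X_background), size=n_samples)
    perturbed = X_background[idx].copy()
    perturbed[:, subset] = x[subset]  # fix the subset to x's values
    return np.mean(model.predict(perturbed) == original)

def necessity(model, x, subset, X_background, n_samples=500, rng=None):
    """Fraction of samples where resampling `subset` from the background
    data (while keeping all other features at x's values) flips the
    model's original prediction for x."""
    rng = np.random.default_rng(rng)
    original = model.predict(x.reshape(1, -1))[0]
    idx = rng.integers(0, len(X_background), size=n_samples)
    perturbed = np.tile(x, (n_samples, 1))
    perturbed[:, subset] = X_background[idx][:, subset]  # perturb only the subset
    return np.mean(model.predict(perturbed) != original)
```

Under this reading, a global score of the kind mentioned in the abstract could be obtained by averaging these per-instance quantities over a dataset; the paper's actual definitions may differ.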
Submission Number: 39