On Evaluating Explanation Utility for Human-AI Decision-Making in NLP

Published: 27 Oct 2023, Last Modified: 27 Nov 2023, NeurIPS XAIA 2023
TL;DR: Guidelines for starting with human-centered evaluations of explanations grounded in NLP applications.
Abstract: Is explainability a false promise? This debate has emerged from the lack of consistent evidence that explanations help in the situations they are introduced for. In NLP, the evidence is not only inconsistent but also scarce. While there is a clear need for more human-centered, application-grounded evaluations, it is less clear where NLP researchers should begin if they want to conduct them. To address this, we introduce evaluation guidelines established through an extensive review and meta-analysis of related work.
Submission Track: Full Paper Track
Application Domain: Natural Language Processing
Survey Question 1: We expect that our abstract is understandable to those who are not NLP experts. We highlight that the evidence that explanations help in the situations they are introduced for is not only inconsistent but also scarce in NLP. Moreover, explaining tasks that involve text poses unique challenges: features are sequences of high-dimensional, non-interpretable vectors; the number of features per input is arbitrary; discrete inputs are represented continuously; the models to be explained are pretrained and have billions of parameters; and inherently interpretable models (e.g., linear models, short decision trees) perform nowhere close to large language models.
Survey Question 2: This question does not apply to our paper since we do not focus on an isolated problem.
Survey Question 3: Our paper is a meta-analysis of prior work, so none.
Submission Number: 53