Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI

Anonymous

08 Mar 2022 (modified: 05 May 2023) · NAACL 2022 Conference Blind Submission · Readers: Everyone
Paper Link: https://openreview.net/forum?id=-N3_r8zLTlL
Paper Type: Long paper (up to eight pages of content + unlimited references and appendices)
Abstract: Evaluating an explanation's faithfulness is desirable for many reasons, such as trust, interpretability, and diagnosing the sources of a model's errors. In this work, which focuses on the NLI task, we introduce the methodology of Faithfulness-through-Counterfactuals, which first generates a counterfactual hypothesis based on the logical predicates expressed in the explanation, and then evaluates whether the model's prediction on the counterfactual is consistent with that expressed logic (i.e., whether the new formula is logically satisfiable). In contrast to existing approaches, this does not require annotated explanations for training a separate verification model. We first validate the efficacy of automatic counterfactual hypothesis generation, leveraging the few-shot priming paradigm. Next, we show that our proposed metric performs well compared to other metrics, using simulatability studies as a proxy task for faithfulness. In addition, we conduct a sensitivity analysis to validate that our metric is sensitive to unfaithful explanations.
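To make the consistency check in the abstract concrete, below is a minimal sketch (not the authors' code) of the Faithfulness-through-Counterfactuals probe: given a counterfactual hypothesis derived from an explanation's logical predicates, the model's prediction is compared against the label that the explanation's logic implies. All names here (nli_model, toy_model, expected_label) are hypothetical; the paper's actual counterfactual generation relies on few-shot priming rather than the hand-written example used for illustration.

```python
# Hypothetical sketch of the faithfulness check; not the authors' implementation.

def faithfulness_check(nli_model, premise, counterfactual_hypothesis, expected_label):
    """Return True if the model's prediction on the counterfactual hypothesis
    agrees with the label implied by the explanation's logic, i.e. the
    combined formula remains logically satisfiable."""
    predicted = nli_model(premise, counterfactual_hypothesis)  # "entailment" / "contradiction" / "neutral"
    return predicted == expected_label


# Toy stand-in for a trained NLI model (assumption: always predicts "contradiction").
def toy_model(premise, hypothesis):
    return "contradiction"


# Illustrative usage: if the explanation asserts "a dog is an animal", then negating
# that predicate in the hypothesis should turn the original entailment into a contradiction.
print(faithfulness_check(
    toy_model,
    premise="A dog is running in the park.",
    counterfactual_hypothesis="An animal is not running in the park.",
    expected_label="contradiction",
))  # True -> the explanation is judged faithful under this probe
```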