Towards Consistent Natural-Language Explanations via Explanation-Consistency Finetuning

ACL ARR 2024 April Submission 184 Authors

15 Apr 2024 (modified: 02 May 2024) · ACL ARR 2024 April Submission · CC BY 4.0
Abstract: Large language models (LLMs) often generate convincing, fluent explanations. However, unlike humans, they often generate $\textit{inconsistent}$ explanations across different inputs. For example, an LLM may explain "$\textit{all birds can fly}$" when answering the question "$\textit{Can sparrows fly?}$" while answering "$\textit{no}$" to the related question "$\textit{Can penguins fly?}$". Explanations should be consistent across related examples so that humans can simulate the LLM's decision process on multiple examples. We propose $\textbf{explanation-consistency finetuning}$ (EC-finetuning), a method that adapts LLMs to generate more consistent natural-language explanations on related examples. EC-finetuning involves finetuning LLMs on synthetic data that is carefully constructed to contain consistent explanations. Across a variety of question-answering datasets in various domains, EC-finetuning yields a $\textbf{10.0}$% relative improvement in explanation consistency on 4 finetuning datasets and generalizes to 7 out-of-distribution datasets not seen during finetuning ($\textbf{+4.5}$% relative). We will make our code available for reproducibility.
Paper Type: Short
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: free-text/natural language explanations, counterfactual/contrastive explanations, explanation faithfulness, robustness
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 184
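To make the data-construction idea in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' released code) of how synthetic EC-finetuning examples might be assembled: each group of related questions shares one explanation, and every question in the group is turned into a supervised (prompt, target) pair that reuses that explanation. The names `ConsistentGroup` and `build_finetuning_examples` are illustrative assumptions, not identifiers from the paper.

```python
# Hypothetical sketch of EC-finetuning data construction (illustrative only,
# not the submission's actual pipeline).
from dataclasses import dataclass
from typing import List


@dataclass
class QA:
    question: str
    answer: str


@dataclass
class ConsistentGroup:
    explanation: str   # one explanation intended to cover every item in the group
    items: List[QA]    # related questions whose answers agree with that explanation


def build_finetuning_examples(groups: List[ConsistentGroup]) -> List[dict]:
    """Flatten groups into (prompt, target) pairs so that every related
    question is finetuned against the same, shared explanation."""
    examples = []
    for group in groups:
        for qa in group.items:
            examples.append({
                "prompt": f"Question: {qa.question}\nAnswer and explain.",
                "target": f"{qa.answer}. Explanation: {group.explanation}",
            })
    return examples


# Toy usage mirroring the sparrow/penguin example from the abstract.
groups = [
    ConsistentGroup(
        explanation="Most birds can fly, but flightless species such as penguins cannot.",
        items=[
            QA("Can sparrows fly?", "Yes"),
            QA("Can penguins fly?", "No"),
        ],
    )
]

for ex in build_finetuning_examples(groups):
    print(ex["prompt"], "->", ex["target"])
```

The resulting pairs could then be fed to any standard supervised finetuning setup; the key property is only that related questions map to a single shared explanation.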