Evaluating Human Alignment and Model Faithfulness of LLM Rationale

ACL ARR 2024 June Submission1767 Authors

14 Jun 2024 (modified: 23 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: We study how well large language models (LLMs) explain their generations with rationales -- sets of tokens extracted from the input text that reflect the decision process of the LLM. We examine LLM rationales extracted with two types of methods: 1) attribution-based methods that use attention or gradients to locate important tokens, and 2) prompting-based methods that guide LLMs to extract rationales via prompts. Through extensive experiments, we show that prompting-based rationales align better with human-annotated rationales than attribution-based rationales do, and that they maintain reasonable alignment with humans even when model performance is poor. We further find that the faithfulness limitations of prompting-based methods, identified in prior work, may be linked to their collapsed predictions. After fine-tuning these models on the corresponding datasets, both prompting-based and attribution-based methods show improved faithfulness. Our study sheds light on more rigorous and fair evaluation of LLM rationales, especially prompting-based ones. Code and data will be released upon paper acceptance.
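To make the attribution-based setting concrete, here is a minimal sketch (not the authors' code) of gradient-based rationale extraction: each input token is scored by the gradient of the predicted-class logit with respect to its embedding, and the top-k tokens are returned as the rationale. The model name, the choice of gradient L2 norm as the saliency score, and k are illustrative assumptions, not details from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed example classifier; the paper's actual models and datasets may differ.
MODEL_NAME = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def gradient_rationale(text: str, k: int = 5) -> list[str]:
    """Return the k input tokens with the largest gradient saliency."""
    enc = tokenizer(text, return_tensors="pt")
    # Embed the tokens ourselves so we can take gradients w.r.t. the embeddings.
    embeds = model.get_input_embeddings()(enc["input_ids"]).detach().requires_grad_(True)
    logits = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"]).logits
    pred = int(logits.argmax(dim=-1))
    logits[0, pred].backward()                    # gradient of the predicted-class logit
    scores = embeds.grad.norm(dim=-1).squeeze(0)  # L2 norm of each token's gradient
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    top = scores.topk(min(k, len(tokens))).indices.tolist()
    return [tokens[i] for i in sorted(top)]       # rationale in original token order

print(gradient_rationale("The movie was surprisingly delightful."))
```

A prompting-based rationale, by contrast, is obtained without gradients: the LLM is simply instructed in the prompt (e.g., "List the words in the input that most influenced your answer") to return the tokens it relied on, and those tokens are compared against human annotations.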
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: explanation faithfulness, feature attribution,
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 1767