A Shapley-value Guided Rationale Editor for Rationale Learning

Published: 22 Jan 2025 · Last Modified: 11 Mar 2025 · AISTATS 2025 Poster · CC BY 4.0
Abstract: Rationale learning aims to automatically uncover the underlying explanations for NLP predictions. Previous studies in rationale learning mainly focus on the relevance of individual tokens to the predictions, without considering their marginal contributions or the collective readability of the extracted rationales. Through an empirical analysis, we argue that sufficiency, informativeness, and readability are essential properties of rationales for explaining diverse end-task predictions. Accordingly, we propose the Shapley-value Guided Rationale Editor (SHARE), an unsupervised approach that refines editable rationales while predicting task outcomes. SHARE extracts a sequence of tokens as a rationale, providing a collective explanation that is sufficient, informative, and readable. SHARE is highly adaptable to tasks such as sentiment analysis, claim verification, and question answering, and integrates seamlessly with various language models to provide explainability. Extensive experiments demonstrate its effectiveness in balancing sufficiency, informativeness, and readability across diverse applications. Our code and datasets are available at https://github.com/zixinK/SHARE.
Submission Number: 1814
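
The abstract credits Shapley values with capturing each token's marginal contribution to a prediction, rather than scoring tokens independently. As a rough illustration only, and not the authors' implementation, the Python sketch below estimates per-token Shapley values by Monte Carlo permutation sampling; `predict_proba` and `mask_token` are hypothetical stand-ins for a task model's scoring function and its mask symbol.

```python
# Minimal sketch of Shapley-value estimation for token contributions
# (illustrative only; not the SHARE implementation from the paper).
import random
from typing import Callable, List


def shapley_token_scores(
    tokens: List[str],
    predict_proba: Callable[[List[str]], float],  # hypothetical model scorer
    num_samples: int = 200,
    mask_token: str = "[MASK]",
) -> List[float]:
    """Approximate each token's Shapley value for a model's prediction.

    For each sampled permutation, tokens are revealed one at a time and
    the change in the prediction score is credited to the newly revealed
    token as its marginal contribution.
    """
    n = len(tokens)
    scores = [0.0] * n
    for _ in range(num_samples):
        order = random.sample(range(n), n)  # random permutation of positions
        masked = [mask_token] * n           # start from a fully masked input
        prev = predict_proba(masked)
        for idx in order:
            masked[idx] = tokens[idx]       # reveal one more token
            curr = predict_proba(masked)
            scores[idx] += curr - prev      # marginal contribution of token idx
            prev = curr
    return [s / num_samples for s in scores]
```

Under this sketch, the tokens with the highest estimated values would be the natural candidates for a rationale; SHARE's editing step, which additionally enforces the readability of the selected token sequence, goes beyond what this approximation shows.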
