DARE: Disentanglement-Augmented Rationale Extraction

Published: 31 Oct 2022, Last Modified: 07 Jan 2023. NeurIPS 2022 (Accept)
Keywords: Rationale Extraction, Disentanglement, Mutual Information
TL;DR: We propose a disentanglement-augmented rationale extraction method (DARE) which squeezes more information from the original input.
Abstract: Rationale extraction is a straightforward way to improve model explainability: rationales are subsequences of the original input that can be extracted to support the prediction results. Existing methods mainly cascade a selector, which extracts the rationale tokens, with a predictor, which makes the prediction based on the selected tokens. Previous works thus fail to fully exploit the original input, since the information carried by the non-selected tokens is ignored. In this paper, we propose a Disentanglement-Augmented Rationale Extraction (DARE) method, which encapsulates more information from the input when extracting rationales. Specifically, it first disentangles the input into rationale representations and non-rationale ones, and then learns more comprehensive rationale representations for extraction by minimizing the mutual information (MI) between the two disentangled representations. Besides, to improve the performance of MI minimization, we develop a new MI estimator by building on existing MI estimation methods. Extensive experimental results on three real-world datasets and simulation studies clearly validate the effectiveness of our proposed method. Code is released at https://github.com/yuelinan/DARE.
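The selector/predictor split and the disentanglement objective described above can be sketched in a toy NumPy example. Everything here is an illustrative stand-in, not the paper's actual implementation (which is at the GitHub link): `split_tokens` is a hypothetical top-k selector, and `mi_proxy` substitutes a simple squared cross-correlation penalty for the paper's learned MI estimator, since both vanish when the two representations are statistically unrelated.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_tokens(scores, k):
    """Toy selector: the top-k scored tokens form the rationale,
    the remaining tokens are the non-rationale part."""
    order = np.argsort(scores)[::-1]
    return order[:k], order[k:]

def mi_proxy(z_r, z_n):
    """Crude stand-in for an MI estimate: sum of squared cross-
    correlations between the two disentangled representations
    (close to 0 when they are uncorrelated)."""
    z_r = (z_r - z_r.mean(0)) / (z_r.std(0) + 1e-8)
    z_n = (z_n - z_n.mean(0)) / (z_n.std(0) + 1e-8)
    corr = z_r.T @ z_n / len(z_r)
    return float((corr ** 2).sum())

# Toy input: 20 tokens with selector scores; keep 5 as the rationale.
scores = rng.normal(size=20)
rat_idx, non_idx = split_tokens(scores, k=5)
assert len(rat_idx) == 5 and len(non_idx) == 15

# A batch of pooled rationale / non-rationale representations.
z_rat = rng.normal(size=(32, 8))
z_non_entangled = z_rat + 0.1 * rng.normal(size=(32, 8))  # still dependent
z_non_disent = rng.normal(size=(32, 8))                   # ~independent

# Minimizing the proxy pushes the pair toward the disentangled case.
assert mi_proxy(z_rat, z_non_entangled) > mi_proxy(z_rat, z_non_disent)
```

In DARE itself, this penalty is replaced by the proposed MI estimator and is minimized jointly with the prediction loss, so the rationale representation is forced to absorb the predictive signal rather than sharing it with the non-rationale part.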