Abstract: Unsupervised rationale extraction aims to extract text snippets to support model predictions without explicit rationale annotation.
Although many efforts have been devoted to this task, previous works encode each aspect independently, ignoring the internal correlations among aspects.
Moreover, such uni-aspect encoding models can only explain and predict one aspect of the text at a time, which limits their downstream applications.
In this paper, we propose a Multi-Aspect Rationale Extractor (MARE) to explain and predict multiple aspects simultaneously.
Concretely, we propose a Multi-Aspect Multi-Head Attention (MAMHA) mechanism based on hard deletion to encode multiple text chunks simultaneously.
Furthermore, multiple special tokens are prepended to the text, each corresponding to one aspect.
Finally, multi-task training is deployed to reduce the training overhead.
Experimental results on two unsupervised rationale extraction benchmarks show that MARE achieves state-of-the-art performance.
Ablation studies further demonstrate the effectiveness of our method.
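The sketch below illustrates, in a hypothetical and simplified form, the idea described in the abstract: one special token per aspect is prepended, a per-aspect hard (binary) selection is computed over the text tokens, and attention is masked so each aspect token attends only to its selected tokens ("hard deletion"). This is not the authors' released code; all class and variable names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiAspectHardAttention(nn.Module):
    """Hypothetical sketch of multi-aspect attention with hard deletion."""
    def __init__(self, hidden=256, num_aspects=3, num_heads=8):
        super().__init__()
        self.aspect_tokens = nn.Parameter(torch.randn(num_aspects, hidden))
        self.selectors = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(num_aspects))
        self.attn = nn.MultiheadAttention(hidden, num_heads, batch_first=True)
        self.num_heads = num_heads

    def forward(self, token_states):                 # token_states: (B, T, H)
        B, T, H = token_states.shape
        A = self.aspect_tokens.shape[0]
        # One prepended special token per aspect, used as the attention query.
        queries = self.aspect_tokens.unsqueeze(0).expand(B, A, H)     # (B, A, H)

        # Per-aspect binary selection over tokens (straight-through estimator
        # keeps the hard mask differentiable during training).
        soft = torch.stack([torch.sigmoid(sel(token_states).squeeze(-1))
                            for sel in self.selectors], dim=1)        # (B, A, T)
        hard = (soft > 0.5).float()
        masks = hard + soft - soft.detach()

        # "Hard deletion": unselected tokens receive a large negative bias so
        # each aspect token effectively attends only to its own rationale tokens.
        bias = (1.0 - masks) * -1e9                                   # (B, A, T)
        bias = bias.repeat_interleave(self.num_heads, dim=0)          # (B*heads, A, T)
        aspect_states, _ = self.attn(queries, token_states, token_states,
                                     attn_mask=bias)
        return aspect_states, hard                  # (B, A, H) states, (B, A, T) rationale masks
```

A full system along these lines would presumably add sparsity and continuity regularizers on the selection masks and a multi-task prediction loss over the per-aspect states, as the abstract indicates.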
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: Explainable AI, Pretrained Language Model, Selective Rationalization, Rationale Extraction
Contribution Types: Model analysis & interpretability, Approaches low compute settings-efficiency
Languages Studied: English
Submission Number: 1499