Explaining with Contrastive Phrasal Highlighting: A Case Study in Assisting Humans to Detect Translation Differences

Published: 07 Oct 2023, Last Modified: 01 Dec 2023
Venue: EMNLP 2023 Main
Submission Type: Regular Long Paper
Submission Track: Human-Centered NLP
Submission Track 2: Interpretability, Interactivity, and Analysis of Models for NLP
Keywords: explainability, human-centered evaluation, machine translation evaluation, cross-lingual semantics, contrastive highlights
TL;DR: We introduce an approach to explain the predictions of NLP models that compare two texts by showing what differences between the inputs led to a prediction. We show that it helps people detect meaning differences in human and machine translations.
Abstract: Explainable NLP techniques primarily explain by answering "Which tokens in the input are responsible for this prediction?". We argue that for NLP models that make predictions by comparing two input texts, it is more useful to explain by answering "What differences between the two inputs explain this prediction?". We introduce a technique to generate contrastive phrasal highlights that explain the predictions of a semantic divergence model via phrase-alignment-guided erasure. We show that the resulting highlights match human rationales of cross-lingual semantic differences better than popular post-hoc saliency techniques, and that they successfully help people detect fine-grained meaning differences in human translations and critical machine translation errors.
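To make the erasure idea in the abstract concrete, here is a minimal sketch of phrase-alignment-guided erasure. It is not the authors' implementation: the `divergence_score` function, the span representation, and the one-pair-at-a-time erasure loop are illustrative assumptions. The core idea it demonstrates is contrastive attribution: erase an aligned phrase pair from both inputs at once and measure how much the divergence prediction drops.

```python
# Illustrative sketch only (not the paper's code). Assumes a model exposed as
# divergence_score(src_tokens, tgt_tokens) -> float, plus precomputed phrase
# alignments, each a pair of (src_span, tgt_span) token-index spans.
from typing import Callable, List, Tuple

Span = Tuple[int, int]  # (start, end): inclusive start, exclusive end


def contrastive_highlights(
    src_tokens: List[str],
    tgt_tokens: List[str],
    alignments: List[Tuple[Span, Span]],
    divergence_score: Callable[[List[str], List[str]], float],
    top_k: int = 1,
) -> List[Tuple[Span, Span, float]]:
    """Rank aligned phrase pairs by how much erasing the pair from both
    inputs reduces the model's predicted divergence."""
    base = divergence_score(src_tokens, tgt_tokens)
    scored = []
    for src_span, tgt_span in alignments:
        # Erase the aligned phrase pair from both sides simultaneously.
        src_erased = src_tokens[: src_span[0]] + src_tokens[src_span[1]:]
        tgt_erased = tgt_tokens[: tgt_span[0]] + tgt_tokens[tgt_span[1]:]
        # A large drop in divergence means this pair drove the prediction.
        effect = base - divergence_score(src_erased, tgt_erased)
        scored.append((src_span, tgt_span, effect))
    # Highlight the phrase pairs whose removal most reduces the divergence.
    scored.sort(key=lambda t: t[2], reverse=True)
    return scored[:top_k]
```

In this sketch, a large positive `effect` marks a phrase pair whose difference was driving the divergence prediction, mirroring the contrastive question "What differences between the two inputs explain this prediction?" rather than the token-level "Which tokens are responsible?".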
Submission Number: 4261