Contrastive Conditioning for Assessing Disambiguation in MT: A Case Study of Distilled Bias

16 May 2021 (modified: 05 May 2023) · ACL ARR 2021 May Blind Submission
TL;DR: Scoring translations by conditioning on contrastive source variants shows that distilled translation models overgeneralize.
Abstract: Lexical disambiguation is a major challenge for machine translation systems, especially if some senses of a word occur less frequently in the training data than others. Identifying patterns of overgeneralization requires evaluation methods that are both reliable and scalable. We propose contrastive conditioning as a reference-free black-box method for detecting disambiguation errors. Specifically, we score the quality of a translation by conditioning on variants of the source that provide contrastive disambiguation cues. After validating our method, we apply it in a case study to perform a targeted evaluation of sequence-level knowledge distillation. By probing word sense disambiguation and the translation of gendered occupation names, we show that distillation-trained models tend to overgeneralize more than other models of comparable BLEU score. Contrastive conditioning thus highlights a side effect of distillation that is not fully captured by standard evaluation metrics. Code and data to reproduce our findings are publicly available.
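To make the scoring idea concrete, below is a minimal sketch of contrastive conditioning (not the authors' released code). It assumes a recent Hugging Face transformers version; the evaluator model name, the cue phrasing, and the example sentences are illustrative assumptions. The key step is comparing the likelihood of a fixed translation under two source variants that cue different senses of an ambiguous word.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical evaluator model; any seq2seq MT model that can score a
# given translation conditioned on a source would serve the same role.
MODEL_NAME = "Helsinki-NLP/opus-mt-en-de"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME).eval()

def log_likelihood(source: str, translation: str) -> float:
    """Average token log-probability of `translation` given `source`."""
    inputs = tokenizer(source, return_tensors="pt")
    labels = tokenizer(text_target=translation, return_tensors="pt").input_ids
    with torch.no_grad():
        # The model's loss is the mean cross-entropy over label tokens,
        # so its negation is an average log-likelihood.
        loss = model(**inputs, labels=labels).loss
    return -loss.item()

# Translation produced by the system under evaluation (illustrative).
translation = "Er ging zur Bank, um Geld abzuheben."

# Contrastive source variants that disambiguate the ambiguous word "bank".
correct_cue = "He went to the bank (the financial institution) to withdraw money."
wrong_cue = "He went to the bank (the edge of the river) to withdraw money."

# The translation passes the probe if it is more likely under the source
# variant cueing the intended sense than under the contrastive variant.
is_correct = log_likelihood(correct_cue, translation) > log_likelihood(wrong_cue, translation)
print("disambiguation correct:", is_correct)
```

Because the decision only compares two conditional scores from an off-the-shelf evaluator, the probe needs neither reference translations nor access to the evaluated system's internals, which is what makes the method reference-free and black-box.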
Software: zip
Data: zip
Preprint: yes
Consent: yes