When do Contrastive Word Alignments Improve Many-to-many Neural Machine Translation?

Anonymous

08 Mar 2022 (modified: 05 May 2023) · NAACL 2022 Conference Blind Submission · Readers: Everyone
Paper Link: https://openreview.net/forum?id=mwygbrMylCq
Paper Type: Short paper (up to four pages of content + unlimited references and appendices)
Abstract: Word alignment has proven to benefit many-to-many neural machine translation (NMT). However, previous methods relied on high-quality ground-truth bilingual dictionaries for pre-editing, which are unavailable for most language pairs. Meanwhile, a contrastive objective can implicitly exploit automatically learned word alignments, which has not yet been explored in many-to-many NMT. This work proposes a word-level contrastive objective to leverage word alignments for many-to-many NMT. Empirical results show that it yields 0.8 BLEU gains for several language pairs. Analyses reveal that in many-to-many NMT, the encoder's sentence retrieval performance correlates strongly with translation quality, which explains when the proposed method impacts translation. This motivates future work on improving the encoder's sentence retrieval performance in many-to-many NMT.
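The paper's exact formulation is not reproduced on this page, so as a rough illustration only: a word-level contrastive objective of this kind is commonly instantiated as an InfoNCE-style loss over embeddings of aligned word pairs, where each aligned source–target pair is a positive and the other pairs in the batch serve as negatives. All function names, shapes, and the temperature value below are assumptions, not the authors' implementation.

```python
import numpy as np

def word_contrastive_loss(src_vecs, tgt_vecs, temperature=0.1):
    """Illustrative InfoNCE-style word-level contrastive loss.

    src_vecs, tgt_vecs: (n, d) arrays holding embeddings of n aligned
    word pairs; row i of each matrix forms an aligned (positive) pair,
    and all other rows act as in-batch negatives.
    """
    # L2-normalize so dot products become cosine similarities.
    src = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    tgt = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    sim = src @ tgt.T / temperature  # (n, n) scaled similarity matrix
    # Cross-entropy with the diagonal (the aligned pair) as the target:
    # log-softmax over each row, then average the diagonal entries.
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss pulls embeddings of aligned words together and pushes misaligned words apart, which is the intuition the abstract appeals to when linking the objective to the encoder's sentence retrieval ability.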
Copyright Consent Signature (type Name Or NA If Not Transferrable): Zhuoyuan Mao
Copyright Consent Name And Address: Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto, 606-8501, Japan