Exploiting Target Language Data for Neural Machine Translation Beyond Back Translation

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: Neural Machine Translation (NMT) struggles to translate in new domains and low-resource languages. To address these challenges, researchers have proposed incorporating additional knowledge into NMT, including the integration of translation memories (TMs). However, finding TMs that closely match the input sentence remains difficult, particularly in specific domains. In contrast, monolingual data is widely available in most languages, and back-translation is regarded as a promising way to exploit target language data, but it still requires additional training. In this paper, we propose Pseudo-kNN-MT, a method that exploits target language data during the inference phase, without training the NMT model. We also further investigate how large language models (LLMs) can assist NMT. Experimental results show that our method improves translation quality by a large margin. Interestingly, LLMs are found to be helpful even for strong NMT systems.
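For readers unfamiliar with the kNN-MT family the method name alludes to, the core idea is to interpolate the NMT model's next-token distribution with a distribution built from nearest neighbors retrieved from a datastore at inference time, with no model training. The sketch below illustrates that generic interpolation step only; it is not the paper's Pseudo-kNN-MT algorithm, and the function name, the interpolation weight `lam`, the temperature, and the toy datastore format (pairs of hidden-state vectors and target tokens) are all illustrative assumptions.

```python
import math

def knn_interpolate(p_nmt, query, datastore, k=2, temperature=1.0, lam=0.5):
    """Blend an NMT next-token distribution with a kNN distribution.

    p_nmt:     dict token -> probability from the NMT model
    query:     decoder hidden state for the current step (list of floats)
    datastore: list of (hidden_state_vector, target_token) pairs
    Returns a dict token -> interpolated probability.
    Assumes neighbor tokens come from the same vocabulary as p_nmt.
    """
    # L2 distance from the query to every datastore entry
    dists = []
    for vec, tok in datastore:
        d = math.sqrt(sum((q - v) ** 2 for q, v in zip(query, vec)))
        dists.append((d, tok))
    dists.sort(key=lambda x: x[0])
    neighbors = dists[:k]

    # kNN distribution: softmax over negative distances, mass summed per token
    weights = [math.exp(-d / temperature) for d, _ in neighbors]
    z = sum(weights)
    p_knn = {}
    for (d, tok), w in zip(neighbors, weights):
        p_knn[tok] = p_knn.get(tok, 0.0) + w / z

    # Linear interpolation of the two distributions
    return {tok: lam * p_knn.get(tok, 0.0) + (1 - lam) * p
            for tok, p in p_nmt.items()}
```

In practice the datastore holds millions of entries and retrieval uses an approximate nearest-neighbor index (e.g. FAISS) rather than the brute-force loop shown here; the loop is kept only to make the arithmetic transparent.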
Paper Type: long
Research Area: Machine Translation
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings
Languages Studied: German, English, Czech, Icelandic