Resolving Lexical Ambiguity in English–Japanese Neural Machine Translation

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Desk Rejected Submission · Readers: Everyone
TL;DR: The paper addresses the lexical ambiguity problem caused by homonyms in neural machine translation using BERT.
Abstract: Lexical ambiguity, i.e., the presence of two or more meanings for a single word, is an inherent and challenging problem for machine translation systems. Even though the use of recurrent neural networks and attention mechanisms is expected to solve this problem, machine translation systems are not always able to correctly translate lexically ambiguous sentences. In this work, I attempt to resolve the problem of lexical ambiguity in English--Japanese neural machine translation systems by combining a pretrained Bidirectional Encoder Representations from Transformers (BERT) language model, which produces contextualized word embeddings, with a Transformer translation model, a state-of-the-art architecture for machine translation. Two model architectures based on this combination are proposed and shown to be more effective at translating ambiguous sentences than both a vanilla Transformer model and the Google Translate system. Furthermore, one of the proposed models, Transformer_BERT-WE, achieves a higher BLEU score than the vanilla Transformer model on general translation, demonstrating that contextualized word embeddings from BERT can not only resolve lexical ambiguity but also improve translation quality in general.
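To make the "BERT-WE" idea concrete, the sketch below shows one plausible way to wire a frozen pretrained BERT in as the source-side word-embedding layer of a Transformer NMT model, assuming PyTorch and the Hugging Face transformers library. The class name BertWETransformer, the bert-base-uncased checkpoint, the Japanese vocabulary size, and all hyperparameters are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch: frozen BERT supplies contextualized source-side embeddings
# for a Transformer translation model (the "BERT-WE" idea). All names and
# hyperparameters are illustrative; decoder positional encodings and other
# training details are omitted for brevity.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertWETransformer(nn.Module):
    def __init__(self, ja_vocab_size=32000, d_model=768, nhead=8,
                 num_encoder_layers=6, num_decoder_layers=6):
        super().__init__()
        # Pretrained BERT, kept frozen so it acts purely as a
        # contextualized word-embedding layer.
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        for p in self.bert.parameters():
            p.requires_grad = False
        self.tgt_embed = nn.Embedding(ja_vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_encoder_layers,
            num_decoder_layers=num_decoder_layers,
            batch_first=True)
        self.generator = nn.Linear(d_model, ja_vocab_size)

    def forward(self, src_ids, src_mask, tgt_ids):
        # BERT's last hidden states replace the usual source embeddings.
        with torch.no_grad():
            src_emb = self.bert(input_ids=src_ids,
                                attention_mask=src_mask).last_hidden_state
        tgt_emb = self.tgt_embed(tgt_ids)
        # Causal mask so the decoder cannot attend to future tokens.
        causal = nn.Transformer.generate_square_subsequent_mask(tgt_ids.size(1))
        out = self.transformer(src_emb, tgt_emb, tgt_mask=causal,
                               src_key_padding_mask=(src_mask == 0))
        return self.generator(out)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["I went to the bank to deposit money."],
                  return_tensors="pt")
model = BertWETransformer()
tgt = torch.randint(0, 32000, (1, 5))  # placeholder Japanese token ids
logits = model(batch["input_ids"], batch["attention_mask"], tgt)
print(logits.shape)  # (1, 5, ja_vocab_size)
```

Freezing BERT in this way treats it strictly as an embedding provider, which matches the "word embeddings" framing of Transformer_BERT-WE; whether the paper also fine-tunes BERT in its second architecture is not stated in this abstract.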
Keywords: Neural Machine Translation, Lexical Ambiguity, Transformer, BERT