Semantic Diversity by Phonetics for Accurate and Robust Machine Translation

Anonymous

28 May 2019 (modified: 31 Jul 2019) · OpenReview Anonymous Preprint Blind Submission
Abstract: Neural Machine Translation (NMT) learns from examples, and thus often lacks robustness against noise. Previous work has shown that integrating noise into the training process is effective at improving such robustness, but this solution can be inefficient because the number of possible string perturbations grows exponentially in the number of words or characters. To robustify the translation input, we treat human phonetic interaction throughout history as a pre-compiled computational device that implements a many-to-one function converting text into phonetics. To the best of our knowledge, we are the first in Machine Translation to apply the phonetic algorithms Soundex, NYSIIS, and Metaphone to foreign word/character sequences. We also apply another linguistic representation, Wubi, a logogram-based encoding for Chinese. To explain why phonetic encodings improve NMT, we introduce, quantify, and empirically verify our hypothesis: "one phonetic representation usually corresponds to words that are semantically diverse." Driven by this hypothesis, we simulate the "natural" phonetic device with an artificial method we call random clustering. We achieve significant and consistent improvements over all language pairs and datasets we experimented with: French-English, German-English, and Chinese-English on IWSLT'17, with gains of up to nearly 2 BLEU points over the state-of-the-art. Moreover, our approaches are more robust than the baselines when evaluated on unknown noisy or out-of-domain test sets, with gains of up to about 5 BLEU points. Upon acceptance, all software source code and experiments will be available as Open Source.
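Since the paper's code is not yet released, the following is a minimal Python sketch of the two encodings the abstract describes: many-to-one phonetic keys (via the open-source jellyfish library, which implements Soundex, NYSIIS, and Metaphone) and a hash-based stand-in for the paper's random clustering. The cluster count and the hashing scheme are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch only; not the authors' implementation.
import hashlib

import jellyfish  # pip install jellyfish


def phonetic_keys(word: str) -> dict:
    """Map a word to its many-to-one phonetic representations."""
    return {
        "soundex": jellyfish.soundex(word),
        "nysiis": jellyfish.nysiis(word),
        "metaphone": jellyfish.metaphone(word),
    }


def random_cluster(word: str, num_clusters: int = 1000) -> str:
    """Artificially simulate the phonetic device: deterministically
    hash each word into one of `num_clusters` arbitrary buckets, so
    semantically diverse words can share one representation.
    (num_clusters=1000 is an assumed, illustrative value.)"""
    digest = hashlib.md5(word.encode("utf-8")).hexdigest()
    return f"CLUSTER_{int(digest, 16) % num_clusters}"


# Distinct spellings can collapse to one phonetic key, which is the
# many-to-one property the abstract exploits for noise robustness:
print(phonetic_keys("Robert"))  # Soundex gives 'R163'
print(phonetic_keys("Rupert"))  # also 'R163' under Soundex
print(random_cluster("Robert"))
```

In this sketch the NMT model would be trained on the encoded sequences (phonetic keys or cluster IDs) rather than raw tokens, so that noisy spellings which collapse to the same key present no new input to the model.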
Keywords: nmt, nlp, machine learning