Neural Machine Translation with Target-Attention Model

IEICE Trans. Inf. Syst., 2020 (modified: 19 Aug 2022)
Abstract: The attention mechanism, which selectively focuses on source-side information to learn a context vector for generating target words, has been shown to be an effective method for neural machine translation (NMT). In practice, generating target words depends not only on source-side information but also on target-side information. Although vanilla NMT can acquire target-side information implicitly through recurrent neural networks (RNNs), RNNs cannot adequately capture the global relationships among target-side words. To solve this problem, this paper proposes a novel target-attention approach that captures this information and thus enhances target word prediction in NMT. Specifically, we propose three variants of the target-attention model that directly obtain the global relationships among target words: 1) a forward target-attention model that uses a target attention mechanism to incorporate previously generated target words into the prediction of the current target word; 2) a reverse target-attention model that adopts a reverse RNN to obtain information about the entire reversed target sequence, which is then combined with the source context to generate the target sequence; 3) a bidirectional target-attention model that combines the forward and reverse target-attention models, making full use of the target words to further improve NMT performance. Our methods can be integrated into both RNN-based and self-attention-based NMT, giving the model access to global target-side information to improve translation quality. Experiments on the NIST Chinese-to-English and WMT English-to-German translation tasks show that the proposed models achieve significant improvements over state-of-the-art baselines.
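To make the forward variant concrete, the sketch below shows one plausible reading of target attention: when predicting the current target word, the decoder attends over the hidden states of the previously generated target words and fuses the resulting target context with the usual source context. This is an illustrative assumption, not the authors' released code; all names (ForwardTargetAttention, d_model, etc.) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ForwardTargetAttention(nn.Module):
    """Attend over previously generated target-side states (hypothetical sketch)."""

    def __init__(self, d_model: int):
        super().__init__()
        self.query = nn.Linear(d_model, d_model)
        self.key = nn.Linear(d_model, d_model)
        self.value = nn.Linear(d_model, d_model)
        self.scale = d_model ** 0.5

    def forward(self, dec_state: torch.Tensor, prev_states: torch.Tensor) -> torch.Tensor:
        # dec_state:   (batch, d_model)      current decoder state at step t
        # prev_states: (batch, t, d_model)   states of already generated target words
        q = self.query(dec_state).unsqueeze(1)                   # (batch, 1, d_model)
        k = self.key(prev_states)                                # (batch, t, d_model)
        v = self.value(prev_states)                              # (batch, t, d_model)
        scores = torch.bmm(q, k.transpose(1, 2)) / self.scale    # (batch, 1, t)
        weights = F.softmax(scores, dim=-1)                      # attention over history
        target_ctx = torch.bmm(weights, v).squeeze(1)            # (batch, d_model)
        return target_ctx

# Usage idea: concatenate [dec_state; source_ctx; target_ctx] and project to
# vocabulary logits, so the prediction of the current word sees both source-side
# and global target-side information.
```

The reverse and bidirectional variants described in the abstract would replace or augment `prev_states` with states from a reverse RNN run over the target sequence.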