Deep Neural Machine Translation Model Based on Simple Recurrent Units
Abstract: Attention-based neural machine translation models, which use an encoder-decoder framework to cast translation as a sequence-to-sequence problem, have become extremely popular. In this paper, we replace the gated recurrent units in the classical encoder and decoder with simple recurrent units (SRUs), and deepen the encoder and decoder by stacking network layers to improve the performance of the neural machine translation model. We conducted experiments on German-English and Uyghur-Chinese translation tasks. Experimental results show that performance is significantly improved at no additional training-time cost, especially when residual connections are used.
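The abstract itself contains no code, so the following is only a rough PyTorch sketch of the approach it describes: an SRU layer whose matrix multiplications are batched over the whole sequence (the property that makes SRUs fast), stacked with residual connections to deepen the encoder. The class names `SRUCell` and `DeepSRUEncoder`, the layer count, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SRUCell(nn.Module):
    """Minimal SRU layer (after Lei et al., 2017). All matrix multiplies
    are time-independent, so they run over the whole sequence at once;
    the recurrence itself is only element-wise."""

    def __init__(self, dim):
        super().__init__()
        # One fused projection yields the candidate, forget gate, and
        # reset (highway) gate pre-activations for every time step.
        self.proj = nn.Linear(dim, 3 * dim)

    def forward(self, x):                       # x: (seq_len, batch, dim)
        z, f, r = self.proj(x).chunk(3, dim=-1)
        f, r = torch.sigmoid(f), torch.sigmoid(r)
        c = torch.zeros_like(x[0])
        outs = []
        for t in range(x.size(0)):              # element-wise recurrence only
            c = f[t] * c + (1 - f[t]) * z[t]
            # Highway connection: part of the input skips the recurrence.
            outs.append(r[t] * torch.tanh(c) + (1 - r[t]) * x[t])
        return torch.stack(outs)

class DeepSRUEncoder(nn.Module):
    """Stacked SRU layers with residual connections between layers,
    mirroring the deepened encoder described in the abstract."""

    def __init__(self, dim, num_layers=4):      # depth is an assumption
        super().__init__()
        self.layers = nn.ModuleList(SRUCell(dim) for _ in range(num_layers))

    def forward(self, x):
        for layer in self.layers:
            x = x + layer(x)                    # residual connection
        return x

# Usage sketch: encode a batch of 16 length-20 source sequences.
enc = DeepSRUEncoder(dim=256)
out = enc(torch.randn(20, 16, 256))            # -> (20, 16, 256)
```

The residual form `x + layer(x)` is what lets gradients flow through a deep stack; the abstract's observation that residual connections help most is consistent with this role.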