Bridging the Gap between Training and Inference for Neural Machine Translation (Extended Abstract)

IJCAI 2020
Abstract: Neural Machine Translation (NMT) generates target words sequentially by predicting the next word conditioned on the context words. At training time, it predicts with the ground-truth words as context, while at inference it has to generate the entire sequence from scratch. This discrepancy in the fed context leads to error accumulation along the generated translation. Furthermore, word-level training requires strict matching between the generated sequence and the ground-truth sequence, which leads to over-correction of different but reasonable translations. In this paper, we address these issues by sampling context words not only from the ground-truth sequence but also from the model's predicted sequence during training. Experimental results on the NIST Chinese->English and WMT2014 English->German translation tasks demonstrate that our method achieves significant improvements over strong baselines on multiple data sets.
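
The sampling scheme the abstract describes can be illustrated with a minimal sketch (assuming a PyTorch-style decoder loop; the function and tensor names below are illustrative assumptions, not the authors' code): at each decoding step, the token fed as context is drawn from the ground truth with some probability and from the model's own prediction otherwise, with the probability of using the prediction typically annealed upward over training.

```python
import torch

def sample_context(prev_gold, prev_pred_logits, sampling_prob):
    """Mix ground-truth and model-predicted context words during training.

    prev_gold:        (batch,) ground-truth tokens at step t-1
    prev_pred_logits: (batch, vocab) decoder logits at step t-1
    sampling_prob:    probability of feeding the model's own prediction,
                      usually increased gradually as training proceeds
    """
    # Greedy choice from the model's own distribution; the paper's
    # oracle selection may differ, this is a simplifying assumption.
    prev_pred = prev_pred_logits.argmax(dim=-1)
    # Per-example coin flip: feed the prediction or the gold token.
    use_pred = torch.rand_like(prev_gold, dtype=torch.float) < sampling_prob
    return torch.where(use_pred, prev_pred, prev_gold)
```

During the decoder's teacher-forcing loop, the returned tokens would replace the usual ground-truth input at the next step, exposing the model at training time to the kind of self-generated context it will face at inference.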