A Comparable Study on Model Averaging, Ensembling and Reranking in NMT

2018 (modified: 13 Jan 2022) · NLPCC (2) 2018
Abstract: Neural machine translation (NMT) has become the benchmark approach in machine translation, and many novel architectures and methods have been proposed to improve translation quality. However, such models are difficult to train and their parameters are hard to tune. In this paper, we focus on decoding techniques that boost translation performance by exploiting existing models. We address the problem at three levels, parameter, word, and sentence, corresponding to checkpoint averaging, model ensembling, and candidate reranking, none of which requires retraining the model. Experimental results show that these decoding approaches significantly improve performance over the baseline model.
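The abstract names three inference-time techniques; as a concrete illustration of the first, below is a minimal sketch of parameter-level checkpoint averaging, assuming PyTorch-style state dicts saved during training. The helper name `average_checkpoints` and the file names are hypothetical; the paper does not publish an implementation.

```python
import torch

def average_checkpoints(paths):
    """Elementwise average of the parameter tensors in several checkpoints.

    Hypothetical helper sketching the checkpoint-averaging idea; assumes
    each file holds a plain state dict with identical keys and shapes.
    """
    avg = None
    for path in paths:
        state = torch.load(path, map_location="cpu")
        if avg is None:
            # Clone so the first checkpoint is not modified in place.
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    # Divide the accumulated sums by the number of checkpoints.
    for k in avg:
        avg[k] /= len(paths)
    return avg

# Usage sketch: average the last few checkpoints, then load into the model.
# averaged = average_checkpoints(["ckpt_08.pt", "ckpt_09.pt", "ckpt_10.pt"])
# model.load_state_dict(averaged)
```

In the same spirit, word-level ensembling would average the per-step output distributions of several independently trained models during beam search, and sentence-level reranking would rescore an n-best list of complete candidate translations with additional features; both likewise reuse existing models without retraining.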