Unsupervised Pretraining for Sequence to Sequence Learning
Prajit Ramachandran, Peter J. Liu, Quoc V. Le
Nov 04, 2016 (modified: Dec 27, 2016) · ICLR 2017 conference submission · readers: everyone
Abstract: This work presents a general unsupervised learning method to improve
the accuracy of sequence to sequence (seq2seq) models. In our method, the
weights of the encoder and decoder of a seq2seq model are initialized
with the pretrained weights of two language models and then
fine-tuned with labeled data. We apply this method to
challenging benchmarks in machine translation and abstractive
summarization and find that it significantly improves the subsequent
supervised models. Our main result is that the pretraining
accelerates training and improves generalization of seq2seq models,
achieving state-of-the-art results on the WMT
English→German task, surpassing a range of methods using
both phrase-based machine translation and neural machine
translation. Our method achieves an improvement of 1.3 BLEU over the
previous best models on both WMT'14 and WMT'15
English→German. On summarization, our method beats
the supervised learning baseline.
TL;DR: Pretraining seq2seq models gives large gains in both generalization and optimization on a variety of tasks.
Keywords: Natural language processing, Deep learning, Semi-Supervised Learning, Transfer Learning
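
As a concrete illustration of the pretrain-then-fine-tune scheme described in the abstract, the following is a minimal PyTorch sketch (not the authors' code): two language models are pretrained on unlabeled source- and target-language text, their weights initialize the encoder and decoder of a seq2seq model, and that model is then fine-tuned on labeled pairs. All module names, dimensions, and the toy batch below are hypothetical, and the paper's actual architectures and training details are not reproduced.

    # Minimal sketch, assuming LSTM language models and an LSTM encoder-decoder.
    import torch
    import torch.nn as nn

    VOCAB_SRC, VOCAB_TGT, EMB, HID = 1000, 1000, 64, 128

    class LanguageModel(nn.Module):
        """LSTM language model used for unsupervised pretraining."""
        def __init__(self, vocab_size):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, EMB)
            self.lstm = nn.LSTM(EMB, HID, batch_first=True)
            self.proj = nn.Linear(HID, vocab_size)

        def forward(self, tokens):
            hidden, _ = self.lstm(self.embed(tokens))
            return self.proj(hidden)  # next-token logits

    # 1) Pretrain one LM on source text and one on target text
    #    (training loops over unlabeled monolingual corpora omitted).
    src_lm = LanguageModel(VOCAB_SRC)
    tgt_lm = LanguageModel(VOCAB_TGT)

    class Seq2Seq(nn.Module):
        """Encoder-decoder whose weights are initialized from the two LMs."""
        def __init__(self):
            super().__init__()
            self.src_embed = nn.Embedding(VOCAB_SRC, EMB)
            self.encoder = nn.LSTM(EMB, HID, batch_first=True)
            self.tgt_embed = nn.Embedding(VOCAB_TGT, EMB)
            self.decoder = nn.LSTM(EMB, HID, batch_first=True)
            self.out = nn.Linear(HID, VOCAB_TGT)

        def forward(self, src, tgt):
            _, state = self.encoder(self.src_embed(src))
            dec_hidden, _ = self.decoder(self.tgt_embed(tgt), state)
            return self.out(dec_hidden)

    model = Seq2Seq()

    # 2) Initialize encoder/decoder (and output layer) from the pretrained LMs.
    model.src_embed.load_state_dict(src_lm.embed.state_dict())
    model.encoder.load_state_dict(src_lm.lstm.state_dict())
    model.tgt_embed.load_state_dict(tgt_lm.embed.state_dict())
    model.decoder.load_state_dict(tgt_lm.lstm.state_dict())
    model.out.load_state_dict(tgt_lm.proj.state_dict())

    # 3) Fine-tune the whole seq2seq model on labeled (src, tgt) pairs as usual.
    src = torch.randint(0, VOCAB_SRC, (2, 7))  # toy batch of source token ids
    tgt = torch.randint(0, VOCAB_TGT, (2, 5))  # toy batch of target token ids
    logits = model(src, tgt)
    print(logits.shape)  # torch.Size([2, 5, 1000])

In this sketch only the initialization step reflects the method described above; how much of the network is pretrained, whether weights are frozen, and the fine-tuning objective follow the paper, which this toy example does not attempt to reproduce.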