Vocabulary Selection Strategies for Neural Machine Translation
Gurvan L'Hostis, David Grangier, Michael Auli
Nov 02, 2016 (modified: Nov 02, 2016) · ICLR 2017 conference submission · Readers: everyone
Abstract: Classical translation models constrain the space of possible outputs by selecting a subset of translation rules based on the input sentence. Recent work on improving the efficiency of neural translation models adopted a similar strategy by restricting the output vocabulary to a subset of likely candidates given the source. In this paper we experiment with context- and embedding-based selection methods and extend previous work by examining speed and accuracy trade-offs in more detail. We show that decoding time on CPUs can be reduced by up to 90% and training time by 25% on the WMT15 English-German and WMT16 English-Romanian tasks, with the same or only a negligible change in accuracy. This brings the decoding speed of a state-of-the-art neural translation system to just over 140 words per second on a single CPU core for English-German.
TL;DR: Neural machine translation can reach the same accuracy with a 10x decoding speedup by pruning the vocabulary prior to decoding.
Keywords:Natural language processing
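The core idea described in the abstract, selecting a per-sentence subset of the target vocabulary and computing the output softmax only over that subset, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `CANDIDATES` table, the `TOP_K_COMMON` list, and both function names are hypothetical stand-ins for a candidate table learned from alignments or embeddings and for the most frequent target words.

```python
import numpy as np

# Hypothetical candidate table: for each source word, target words it is
# likely to translate to (in practice derived from word alignments,
# co-occurrence counts, or embedding similarity).
CANDIDATES = {
    "the": ["der", "die", "das"],
    "house": ["haus", "gebäude"],
    "is": ["ist"],
    "small": ["klein", "gering"],
}

# The most frequent target words are typically always included.
TOP_K_COMMON = ["der", "die", "ist", "und"]

def select_vocabulary(source_tokens):
    """Union of per-source-word candidates plus frequent target words."""
    vocab = set(TOP_K_COMMON)
    for tok in source_tokens:
        vocab.update(CANDIDATES.get(tok, []))
    return sorted(vocab)

def restricted_softmax(hidden, output_weights, vocab_ids):
    """Softmax over only the selected rows of the output projection.

    hidden:         decoder state, shape (d,)
    output_weights: full output projection, shape (|V|, d)
    vocab_ids:      indices of the selected sub-vocabulary V'
    """
    logits = output_weights[vocab_ids] @ hidden  # (|V'|,) instead of (|V|,)
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()
```

The speedup comes from the matrix-vector product in `restricted_softmax`: its cost scales with |V'| rather than the full vocabulary size |V|, which is where the reported reduction in CPU decoding time would originate under this scheme.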