Sequence to Sequence Transduction with Hard Monotonic Attention
Roee Aharoni, Yoav Goldberg
Nov 04, 2016 (modified: Dec 04, 2016) · ICLR 2017 conference submission · readers: everyone
Abstract: We present a supervised sequence-to-sequence transduction model with a hard attention mechanism, which combines more traditional statistical alignment methods with the power of recurrent neural networks. We evaluate the model on the task of morphological inflection generation and show that it provides state-of-the-art results in various setups compared to previous neural and non-neural approaches. Finally, we present an analysis of the learned representations for both the hard and soft attention models, shedding light on the features such models extract in order to solve the task.
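To illustrate the control structure the abstract describes, here is a minimal sketch of hard monotonic decoding (not the authors' implementation): the decoder attends to exactly one input position at a time and, at each step, either emits an output symbol or issues a step action that advances the attention pointer monotonically over the input. The `policy` callable stands in for the recurrent network that would make this decision in the actual model; the copy-then-suffix policy below is a purely hypothetical example in the spirit of morphological inflection.

```python
# Sketch of hard monotonic attention decoding. At each step the decoder
# either emits a symbol or advances a single attention pointer; the
# pointer can only move forward, enforcing a monotonic alignment.

STEP = "<step>"  # the special "advance the pointer" action

def decode(inputs, policy, max_len=50):
    """Run hard monotonic decoding over `inputs`.

    `policy(attended, outputs)` returns either STEP or a symbol to emit.
    In the real model this choice would come from an RNN conditioned on
    the attended input and the output history; here any callable works,
    keeping the sketch self-contained.
    """
    pointer = 0
    outputs = []
    while len(outputs) < max_len:
        attended = inputs[pointer] if pointer < len(inputs) else "<eos>"
        action = policy(attended, outputs)
        if action == STEP:
            if pointer >= len(inputs):  # stepping past the end: stop
                break
            pointer += 1  # monotonic: the pointer only moves forward
        else:
            outputs.append(action)
    return "".join(outputs)

def make_copy_suffix_policy(suffix):
    """Toy inflection-style policy: copy each attended character once,
    then step; once past the input, emit `suffix` and stop."""
    emitted_here = [False]  # has the current position been copied yet?
    suffix_pos = [0]        # how much of the suffix was emitted
    def policy(attended, outputs):
        if attended == "<eos>":
            if suffix_pos[0] < len(suffix):
                ch = suffix[suffix_pos[0]]
                suffix_pos[0] += 1
                return ch
            return STEP  # stepping past the end terminates decoding
        if emitted_here[0]:
            emitted_here[0] = False
            return STEP  # done with this position, advance the pointer
        emitted_here[0] = True
        return attended  # copy the attended character
    return policy

# Hypothetical German-style inflection: "lauf" -> "laufen"
print(decode("lauf", make_copy_suffix_policy("en")))  # -> laufen
```

The monotonicity constraint is what distinguishes this from soft attention: instead of a weighted average over all input positions, the decoder commits to one position and can never revisit earlier ones.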
TL;DR: Sequence-to-sequence learning with a hard attention mechanism that works better than soft attention models on monotonically aligned sequences
Keywords: Natural language processing, Applications