Sequence to Sequence Transduction with Hard Monotonic Attention

Submitted to ICLR 2017
Abstract: We present a supervised sequence to sequence transduction model with a hard attention mechanism that combines traditional statistical alignment methods with the power of recurrent neural networks. We evaluate the model on the task of morphological inflection generation and show that it achieves state-of-the-art results in various setups compared to previous neural and non-neural approaches. Finally, we present an analysis of the learned representations for both the hard and soft attention models, shedding light on the features such models extract in order to solve the task.
TL;DR: Sequence to sequence learning with a hard attention mechanism that works better than soft attention models on monotonically aligned sequences
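The mechanism described in the abstract can be illustrated with a minimal decoding sketch (not the authors' implementation; all function and symbol names below are hypothetical): the decoder attends to exactly one encoder state at a time, and at each step it either emits an output symbol or chooses a control action that advances the attention pointer one position to the right, so attention is hard and strictly monotonic.

    # Minimal sketch of greedy hard-monotonic decoding (assumed names, not the paper's code).
    from typing import Callable, List, Sequence

    STEP = "<step>"  # control action: move the hard-attention pointer one position right
    EOS = "</s>"     # end-of-sequence action

    def greedy_decode(
        encoded: Sequence,                 # encoder states, one per input symbol
        decode_step: Callable,             # (decoder state, attended encoder state) -> action
        update_state: Callable,            # (decoder state, chosen action) -> new decoder state
        init_state,                        # initial decoder state
        max_len: int = 100,
    ) -> List[str]:
        """Emit output symbols or STEP actions until EOS or max_len."""
        state, ptr, output = init_state, 0, []
        for _ in range(max_len):
            action = decode_step(state, encoded[ptr])
            if action == EOS:
                break
            if action == STEP:
                # Hard attention moves strictly left to right and never returns.
                ptr = min(ptr + 1, len(encoded) - 1)
            else:
                output.append(action)
            state = update_state(state, action)
        return output

Because the pointer can only move rightward, the induced alignment is monotonic by construction, which suits character-level tasks such as morphological inflection where input and output characters largely line up in order.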
Conflicts: cs.biu.ac.il
Keywords: Natural language processing, Applications
