Monotonic Chunkwise Attention

15 Feb 2018 (modified: 21 Apr 2024) · ICLR 2018 Conference Blind Submission
Abstract: Sequence-to-sequence models with soft attention have been successfully applied to a wide variety of problems, but their decoding process incurs a quadratic time and space cost and is inapplicable to real-time sequence transduction. To address these issues, we propose Monotonic Chunkwise Attention (MoChA), which adaptively splits the input sequence into small chunks over which soft attention is computed. We show that models utilizing MoChA can be trained efficiently with standard backpropagation while allowing online and linear-time decoding at test time. When applied to online speech recognition, we obtain state-of-the-art results and match the performance of a model using an offline soft attention mechanism. In document summarization experiments where we do not expect monotonic alignments, we show significantly improved performance compared to a baseline monotonic attention-based model.
TL;DR: An online and linear-time attention mechanism that performs soft attention over adaptively-located chunks of the input sequence.
Keywords: attention, sequence-to-sequence, speech recognition, document summarization
Code: [craffel/mocha](https://github.com/craffel/mocha)
Data: [CNN/Daily Mail](https://paperswithcode.com/dataset/cnn-daily-mail-1)
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:1712.05382/code)
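
To make the mechanism in the abstract concrete, below is a minimal NumPy sketch of a MoChA-style test-time decoding step: a hard monotonic scan selects a chunk endpoint, then soft attention is computed over the `chunk_size` encoder states ending there. The dot-product energies, the 0.5 greedy threshold, and all function and variable names here are illustrative simplifications, not the paper's implementation; the actual model uses learned energy functions, and this sketch omits training, which the paper handles with a differentiable, expectation-based formulation.

```python
# Minimal sketch of online (test-time) MoChA-style decoding with NumPy.
# Assumptions: dot-product energies, greedy 0.5 threshold, no learned parameters.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def mocha_decode_step(decoder_state, encoder_states, prev_endpoint, chunk_size):
    """One output step: hard monotonic endpoint selection + chunkwise soft attention.

    decoder_state:  (d,)   current decoder hidden state
    encoder_states: (T, d) encoder outputs
    prev_endpoint:  index where attention stopped at the previous output step
    chunk_size:     w, number of encoder states the soft-attention chunk spans
    Returns (context_vector, new_endpoint).
    """
    T = encoder_states.shape[0]
    # Scan forward from the previous endpoint; never move backwards (monotonicity).
    for t in range(prev_endpoint, T):
        # Stopping probability from a (simplified) monotonic energy; greedy threshold.
        p_choose = sigmoid(encoder_states[t] @ decoder_state)
        if p_choose >= 0.5:
            # Soft attention over the chunk of up to chunk_size states ending at t.
            start = max(0, t - chunk_size + 1)
            chunk = encoder_states[start:t + 1]
            weights = softmax(chunk @ decoder_state)   # (k,)
            context = weights @ chunk                  # (d,)
            return context, t
    # If no endpoint is selected, fall back to a zero context vector.
    return np.zeros_like(decoder_state), prev_endpoint

# Usage example: 20 encoder states of dimension 8, chunk size w = 3.
rng = np.random.default_rng(0)
enc = rng.standard_normal((20, 8))
dec = rng.standard_normal(8)
ctx, endpoint = mocha_decode_step(dec, enc, prev_endpoint=0, chunk_size=3)
```

Because the scan resumes from the previous endpoint and each step attends to at most `chunk_size` states, decoding is online and linear in the input length, which is the property the abstract highlights.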