Keywords: Sparse attention, autoregressive, generative models
TL;DR: We propose a content-based sparse attention model and show improvements on language modeling and image generation.
Abstract: Self-attention has recently been adopted for a wide range of sequence modeling
problems. Despite its effectiveness, self-attention suffers from quadratic compute and
memory requirements with respect to sequence length. Successful approaches to
reduce this complexity have focused on attending to local sliding windows or a small
set of locations independent of content. Our work proposes to learn dynamic
sparse attention patterns that avoid allocating computation and memory to attend
to content unrelated to the query of interest. This work builds upon two lines of
research: it combines the modeling flexibility of prior work on content-based sparse
attention with the efficiency gains from approaches based on local, temporal sparse
attention. Our model, the Routing Transformer, endows self-attention with a sparse
routing module based on online k-means while reducing the overall complexity of
attention to O(n^{1.5}d) from O(n^2d) for sequence length n and hidden dimension
d. We show that our model outperforms comparable sparse attention models on
language modeling on Wikitext-103 (15.8 vs 18.3 perplexity) as well as on
image generation on ImageNet-64 (3.43 vs 3.44 bits/dim) while using fewer self-attention layers.
Code will be open-sourced on acceptance.
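As a rough illustration of the routing idea described in the abstract, the sketch below clusters queries against a fixed set of centroids and runs dense attention only within each cluster, so that with sqrt(n) clusters of size sqrt(n) the cost drops from O(n^2 d) to O(n^{1.5} d). The fixed centroids, the cluster-size cap, and the omission of causal masking and of the online k-means updates are simplifications for illustration, not the paper's implementation.

```python
import numpy as np

def routing_attention(x, w_q, w_k, w_v, centroids, cluster_size):
    """Minimal sketch of content-based routing attention.

    Tokens whose queries fall in the same cluster attend to each other;
    all other pairs are skipped. In the Routing Transformer the centroids
    are learned with online k-means; here they are simply passed in.
    """
    n, d = x.shape
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    out = np.zeros_like(v)

    # Routing step: assign each position to its nearest centroid.
    assign = np.argmin(((q[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)

    for c in range(centroids.shape[0]):
        idx = np.where(assign == c)[0][:cluster_size]  # cap cluster size
        if idx.size == 0:
            continue
        # Dense (non-causal, for simplicity) attention inside the cluster.
        scores = q[idx] @ k[idx].T / np.sqrt(d)
        weights = np.exp(scores - scores.max(-1, keepdims=True))
        weights /= weights.sum(-1, keepdims=True)
        out[idx] = weights @ v[idx]
    return out

# Toy usage: n = 16 tokens, d = 8, sqrt(n) = 4 clusters of size 4.
rng = np.random.default_rng(0)
n, d, k_clusters = 16, 8, 4
x = rng.normal(size=(n, d))
w = [rng.normal(size=(d, d)) * 0.1 for _ in range(3)]
centroids = rng.normal(size=(k_clusters, d))
y = routing_attention(x, *w, centroids, cluster_size=n // k_clusters)
print(y.shape)  # (16, 8)
```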
Code: http://open-sourced-on-acceptance.com
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/arxiv:2003.05997/code)