Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Sparse Attention, Attention Estimation, Linear Attention, Transformers, NLP
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We present SEA, a linear-complexity attention method that performs sparse attention in linear time using an attention mask generated from a compressed attention matrix estimated via kernel-based linear attention.
Abstract: The transformer architecture has driven breakthroughs in recent years on tasks which require modeling pairwise relationships between sequential elements, as is the case in natural language understanding. However, long sequences pose a problem due to the quadratic complexity of the attention operation. Previous research has aimed to lower the complexity by sparsifying or linearly approximating the attention matrix. Yet, these approaches cannot straightforwardly distill knowledge from a teacher’s attention matrix, and often require complete retraining from scratch. Furthermore, previous sparse and linear approaches lose interpretability if they cannot produce full attention matrices. To address these challenges, we propose SEA: Sparse linear attention with an Estimated Attention mask. SEA estimates the attention matrix with linear complexity via kernel-based linear attention, then subsequently creates a sparse attention matrix with a top-k̂ selection to perform a sparse attention operation. For language modeling tasks (Wikitext2), previous linear and sparse attention methods show roughly two-fold worse perplexity than the quadratic OPT-1.3B baseline, while SEA achieves better perplexity than OPT-1.3B while using roughly half of its memory. Moreover, SEA maintains an interpretable attention matrix and can utilize knowledge distillation to lower the complexity of existing pretrained transformers. We believe that our work will have a large practical impact, as it opens the possibility of running large transformers on resource-limited devices with less memory.
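To make the two-stage recipe in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation (see the linked repository for that): it estimates a compressed attention matrix in linear time with a positive feature map, selects the top-k entries per query row to form a sparse mask, and attends only over the selected keys. The elu+1 feature map, the pooled compressed width `est_width`, the per-row budget `topk`, and the block-representative decoding are all simplifying assumptions, not details from the paper.

```python
# Minimal sketch (assumptions noted above): linear-time compressed attention
# estimate -> per-row top-k selection -> sparse attention over selected keys.
import torch
import torch.nn.functional as F


def feature_map(x: torch.Tensor) -> torch.Tensor:
    # A positive feature map, as used in kernel-based linear attention variants.
    return F.elu(x) + 1.0


def sea_style_attention(q, k, v, est_width: int = 128, topk: int = 64):
    """q, k, v: [batch, seq_len, dim] -> output [batch, seq_len, dim]."""
    B, T, D = q.shape

    # Stage 1: linear-time estimate of a *compressed* attention matrix [T, est_width].
    # Keys are pooled to a fixed width, so the cost is O(T * est_width), not O(T^2).
    k_c = F.adaptive_avg_pool1d(k.transpose(1, 2), est_width).transpose(1, 2)  # [B, W, D]
    q_f, k_f = feature_map(q), feature_map(k_c)
    est = q_f @ k_f.transpose(1, 2)                                            # [B, T, W], nonnegative
    est = est / est.sum(dim=-1, keepdim=True).clamp_min(1e-6)                  # row-normalized estimate

    # Stage 2: per-row top-k selection on the estimate yields a sparse mask.
    # For brevity, each selected compressed column is mapped to one representative
    # key position (SEA instead decodes the compressed estimate to full resolution).
    sel = est.topk(min(topk, est_width), dim=-1).indices                       # [B, T, k] compressed columns
    block = (T + est_width - 1) // est_width
    key_idx = (sel * block).clamp_max(T - 1)                                   # representative key indices

    # Sparse attention: score only the selected keys for each query.
    gather_idx = key_idx.unsqueeze(-1).expand(-1, -1, -1, D)                   # [B, T, k, D]
    k_sel = torch.gather(k.unsqueeze(1).expand(-1, T, -1, -1), 2, gather_idx)
    v_sel = torch.gather(v.unsqueeze(1).expand(-1, T, -1, -1), 2, gather_idx)
    scores = (q.unsqueeze(2) * k_sel).sum(-1) / D ** 0.5                       # [B, T, k]
    return (scores.softmax(dim=-1).unsqueeze(-1) * v_sel).sum(2)               # [B, T, D]
```

For example, `sea_style_attention(torch.randn(1, 1024, 64), torch.randn(1, 1024, 64), torch.randn(1, 1024, 64))` touches O(T * (est_width + topk)) attention scores rather than the O(T^2) required by dense softmax attention, which is the point of estimating a compressed matrix before sparsifying.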
Code: https://github.com/gmlwns2000/sea-attention
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Primary Area: representation learning for computer vision, audio, language, and other modalities
Submission Number: 230