Keywords: transformer, attention, stick-breaking, softmax, length extrapolation
TL;DR: Using the stick-breaking process formulation as a replacement for softmax attention.
Abstract: The self-attention mechanism traditionally relies on the softmax operator, necessitating positional embeddings like RoPE, or position biases to account for token order.
However, current methods using these still face length generalisation challenges.
We investigate an alternative attention mechanism based on the stick-breaking process in larger-scale settings.
The method works as follows: for each token before the current one, we determine a break point, which represents the proportion of the remaining stick, and hence the attention weight, allocated to that token.
We repeat this on the remainder of the stick until every token has been assigned a weight, resulting in a sequence of attention weights.
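One consistent reading of this description, with illustrative symbols not taken from the submission: letting $\beta_{i,j} \in (0,1)$ denote the break point for query position $i$ and an earlier position $j$ (e.g. $\beta_{i,j} = \sigma(q_i^\top k_j)$), the resulting attention weights would take the form
$$A_{i,j} \;=\; \beta_{i,j} \prod_{j < k < i} \bigl(1 - \beta_{i,k}\bigr),$$
so positions nearer to $i$ break off their share of the stick first, which is where the recency bias mentioned next comes from.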
This process naturally incorporates recency bias, which has linguistic motivations for grammar parsing (Shen et al., 2017).
We study the implications of replacing the conventional softmax-based attention mechanism with stick-breaking attention.
We then discuss a numerically stable implementation of stick-breaking attention and adapt Flash Attention to accommodate this mechanism.
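As a sketch of how such numerical stability can be achieved (one standard possibility, not necessarily the exact scheme of this submission): computing the weights in log space with logits $z_{i,j}$ and $\beta_{i,j} = \sigma(z_{i,j})$ gives
$$\log A_{i,j} \;=\; -\,\mathrm{softplus}(-z_{i,j}) \;-\; \sum_{j < k < i} \mathrm{softplus}(z_{i,k}),$$
turning the product of $(1-\beta_{i,k})$ factors into a cumulative sum of softplus terms, which avoids underflow and maps naturally onto a blockwise Flash-Attention-style kernel.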
We find that, when used as a drop-in replacement for current softmax+RoPE attention systems, stick-breaking attention performs competitively on downstream tasks.
It also handles length generalisation well: a model trained with a $2^{11}$ context window performs well at $2^{14}$, with perplexity improvements.
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 11727