TL;DR: We can predict token states based on their similarity to the sink token, enabling more efficient LLM inference
Abstract: Attention mechanisms are central to the success of large language models (LLMs), enabling them to capture intricate token dependencies and implicitly assign importance to each token. Recent studies have revealed the sink token, which receives disproportionately high attention despite its limited semantic role. In this paper, we first extend the analysis of the relationship between the sink token and other tokens beyond attention, exploring the similarity of their hidden states across layer depths. We observe that as the layers get deeper, the cosine similarity between the normalized hidden states of the sink token and those of other tokens increases, while the normalized hidden states of the sink token themselves change negligibly. Together, these observations imply that other tokens are consistently directed toward the sink token throughout the layers. Next, we propose a dynamic token selection method, called OrthoRank, which uses these findings to select important tokens. Specifically, at a given layer, we define token importance by the speed at which a token moves toward the sink token. This criterion reduces to orthogonality with the sink token: tokens that are more orthogonal to the sink token are assigned greater importance. Finally, through extensive experiments, we demonstrate that our method achieves lower perplexity and higher zero-shot accuracy than layer pruning methods at the same sparsity ratio with comparable throughput, while also achieving superior performance on LongBench.
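The scoring rule described above (importance as orthogonality to the sink token's normalized hidden state) can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the use of index 0 for the sink token, and the top-k selection policy are all illustrative choices.

```python
import numpy as np

def orthorank_scores(hidden_states, sink_idx=0):
    """Score tokens by how orthogonal their normalized hidden states are
    to the sink token's normalized hidden state at one layer.

    hidden_states: (seq_len, d) array of one layer's hidden states.
    Returns scores in [0, 1]; higher = more orthogonal = more important.
    Assumes the sink token sits at `sink_idx` (illustrative choice).
    """
    h = hidden_states / np.linalg.norm(hidden_states, axis=-1, keepdims=True)
    cos_sim = h @ h[sink_idx]          # cosine similarity with the sink token
    return 1.0 - np.abs(cos_sim)       # orthogonal tokens score highest

def select_tokens(hidden_states, keep_ratio=0.5, sink_idx=0):
    """Keep the top-`keep_ratio` fraction of tokens by importance.

    The sink token is always kept; the rest could be skipped at this
    layer (a hypothetical selection policy for illustration).
    """
    scores = orthorank_scores(hidden_states, sink_idx)
    scores[sink_idx] = np.inf          # never drop the sink token
    k = max(1, int(len(scores) * keep_ratio))
    return np.sort(np.argsort(scores)[-k:])
```

In this sketch, a token whose hidden state already points along the sink direction (cosine similarity near 1) scores near 0 and is a candidate for skipping, while a token still orthogonal to the sink scores near 1 and is retained.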
Lay Summary: Large language models (LLMs) are powerful but slow, partly because they process every word equally, even when some words no longer need further processing.
We observed that the first word in a sentence, though often carrying little meaning, receives heavy attention. As the model goes deeper, other words increasingly come to resemble this first word, which itself stays mostly unchanged.
Inspired by this, we developed OrthoRank. It identifies which words are still actively changing during processing and updates only those. The rest are temporarily skipped to save time.
This simple idea speeds up AI models and reduces computation, while maintaining or even improving performance. OrthoRank works with many existing models and does not require retraining.
Primary Area: Deep Learning->Large Language Models
Keywords: large language model, attention sink, efficiency
Submission Number: 9271