Attention Sinks Are Functionally Essential in Softmax Transformers: Theoretical Evidence

ACL ARR 2026 January Submission 1155 Authors

28 Dec 2025 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: attention sinks, softmax normalization, ReLU attention, trigger-conditional behavior, counterfactual/contrastive explanations
Abstract: Transformers often display an _attention sink_: probability mass concentrates on a fixed, content-agnostic position. We prove that computing a simple trigger-conditional behavior _necessarily_ induces a sink in softmax self-attention models. Our results formalize a familiar intuition: normalization over a probability simplex forces attention to collapse onto a stable anchor in order to realize a default state. We instantiate this with a concrete task: when a designated trigger token appears, the model must return the average of all preceding (non-BOS) token representations, motivated by the view that the trigger aggregates the content seen so far while the BOS token carries no input-dependent content. We also prove that non-normalized ReLU attention can solve the same task without any sink, confirming that the normalization constraint is the fundamental driver of sink behavior. Experiments validate our predictions and demonstrate that they extend beyond the theoretically analyzed setting: softmax models develop strong sinks, whereas ReLU attention eliminates them in both single-head and multi-head variants.
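To make the contrast in the abstract concrete, the following is a minimal NumPy sketch, not the paper's formal construction: the single query position, the hand-chosen scores, and the toy dimensions are all illustrative assumptions. It shows the core mechanism: softmax weights must sum to one, so a "do nothing" default has to route mass onto a content-free BOS position (a sink), while unnormalized ReLU attention can simply assign zero weight everywhere.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Toy setup (hypothetical, not the paper's construction): one query position
# attending over a BOS token followed by T content tokens. The "default" state
# (no trigger present) should contribute nothing input-dependent.
T = 4
rng = np.random.default_rng(0)
values = rng.normal(size=(T + 1, 3))  # row 0 = BOS value, rows 1..T = content tokens
values[0] = 0.0                       # BOS carries no input-dependent content

# Softmax attention: weights must sum to 1, so realizing a "do nothing" default
# requires driving a large score onto BOS -- the attention sink.
scores_softmax = np.array([10.0] + [0.0] * T)   # hypothetical learned scores
w_soft = softmax(scores_softmax)
print("softmax weights:", np.round(w_soft, 3))  # nearly all mass on BOS
print("softmax output :", w_soft @ values)      # approximately zero, via the sink

# ReLU attention: weights need not sum to 1, so the default can be literal zeros.
# No position receives mass, hence no sink is required.
scores_relu = np.array([-1.0] * (T + 1))        # hypothetical learned scores
w_relu = np.maximum(scores_relu, 0.0)
print("relu weights   :", w_relu)               # all zeros
print("relu output    :", w_relu @ values)      # exactly zero, no anchor needed
```

The contrast is in the last branch: ReLU attention reaches the default state by emitting exact zeros, with no anchor position absorbing probability mass, which is the sink-free behavior the abstract attributes to non-normalized attention.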
Paper Type: Short
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: counterfactual / contrastive explanations
Contribution Types: Theory
Languages Studied: English
Submission Number: 1155