Keywords: Large Language Model, Efficient Attention
Abstract: When reading books, humans focus primarily on the current page, flipping back to recap prior context only when necessary. Similarly, we demonstrate that Large Language Models (LLMs) can learn to dynamically determine when to attend to global context. We propose All-or-Here Attention (AHA), which utilizes a binary router per attention head to dynamically toggle between full attention and local sliding window attention for each token. Our results indicate that with a window size of 256 tokens, up to 93\% of the original full attention operations can be replaced by sliding window attention without performance loss. Furthermore, by evaluating AHA across various window sizes, we identify a long-tail distribution in context dependency, where the necessity for full attention decays rapidly as the local window expands. By decoupling local processing from global access, AHA reveals that full attention is largely redundant, and that efficient inference requires only on-demand access to the global context. Our code and model will be made publicly available.
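The abstract describes AHA's mechanism only at a high level (a binary router per attention head toggling each token between full attention and local sliding window attention). Below is a minimal, illustrative PyTorch sketch of that routing idea, assuming a sigmoid router with a hard 0.5 threshold and a straight-through estimator; the module and parameter names (AHAAttentionSketch, router, window) are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def sliding_window_mask(seq_len: int, window: int, device=None) -> torch.Tensor:
    """Causal mask restricted to the most recent `window` tokens."""
    i = torch.arange(seq_len, device=device).unsqueeze(1)
    j = torch.arange(seq_len, device=device).unsqueeze(0)
    return ((j <= i) & (i - j < window)).float()


def causal_mask(seq_len: int, device=None) -> torch.Tensor:
    """Standard full causal mask."""
    i = torch.arange(seq_len, device=device).unsqueeze(1)
    j = torch.arange(seq_len, device=device).unsqueeze(0)
    return (j <= i).float()


class AHAAttentionSketch(nn.Module):
    """Illustrative per-head routing between full and sliding-window attention.

    Hypothetical module: the router parameterization and the straight-through
    estimator are assumptions for this sketch, not the paper's implementation.
    """

    def __init__(self, d_model: int, n_heads: int, window: int = 256):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.window = window
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # One scalar router logit per head and per token.
        self.router = nn.Linear(d_model, n_heads)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (B, heads, T, d_head).
        q, k, v = (t.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
                   for t in (q, k, v))

        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5  # (B, H, T, T)

        local = sliding_window_mask(T, self.window, x.device)
        full = causal_mask(T, x.device)

        # Binary routing decision per head and per token; straight-through so
        # the forward pass uses a hard 0/1 gate while gradients flow through
        # the soft sigmoid (an assumed training trick for this sketch).
        gate_soft = torch.sigmoid(self.router(x)).transpose(1, 2).unsqueeze(-1)  # (B, H, T, 1)
        gate = (gate_soft > 0.5).float() + gate_soft - gate_soft.detach()

        # gate == 1 -> full causal attention; gate == 0 -> local window only.
        mask = gate * full + (1.0 - gate) * local
        scores = scores.masked_fill(mask == 0, float("-inf"))

        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, T, D)
        return self.out(out)
```

For readability this sketch materializes the full score matrix and only changes the mask; the efficiency gains described in the abstract would come from computing only the local window's scores (e.g., via a fused kernel or block-sparse attention) whenever the router selects the sliding-window path.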
Paper Type: Long
Research Area: LLM Efficiency
Research Area Keywords: Language Modeling, Efficient/Low-Resource Methods for NLP, Interpretability and Analysis of Models for NLP
Contribution Types: Model analysis & interpretability, Approaches low compute settings-efficiency
Languages Studied: English
Submission Number: 4670