Every Token Counts: Generalizing 16M Ultra-Long Context in Large Language Models

ACL ARR 2026 January Submission 1456 Authors

30 Dec 2025 (modified: 20 Mar 2026) · CC BY 4.0
Keywords: Long-context modeling; LLM; sparse attention; length generalization
Abstract: This work explores efficient ultra-long context modeling. We posit that an effective solution requires three fundamental properties: sparsity, random-access flexibility, and \textbf{length generalization. To achieve this, we leverage Hierarchical Sparse Attention (HSA), a novel attention mechanism that satisfies all three properties. We integrate HSA into the Transformer architecture to develop HSA-UltraLong, an 8B-parameter Mixture-of-Experts (MoE) model trained on over 8 trillion tokens. We rigorously evaluate the model across tasks with both in-domain and out-of-domain context lengths to validate its capabilities. Our model demonstrates comparable performance to full-attention baselines on in-domain sequence lengths. Crucially, it achieves over 90\% accuracy on most in-context retrieval tasks with contexts up to 512 times the pre-training context length. This work reports our findings and remaining issues throughout the experiments, offering insights for future research in ultra-long context modeling.
Paper Type: Long
Research Area: Language Models
Research Area Keywords: sparse models, retrieval-augmented generation, pre-training
Contribution Types: Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 1456