Keywords: Sparse Attention, Test Time Scaling, Long Generation
TL;DR: We propose Rectified Sparse Attention to achieve near-lossless long-sequence generation.
Abstract: Efficient long-sequence generation is a critical challenge for Large Language Models. While recent sparse decoding methods improve efficiency, they suffer from KV cache misalignment, where approximation errors accumulate and degrade generation quality. In this work, we propose Rectified Sparse Attention (ReSA), a simple yet effective method that combines block-sparse attention with periodic dense rectification. By refreshing the KV cache at fixed intervals using a dense forward pass, ReSA bounds error accumulation and preserves alignment with the pretraining distribution. Experiments across math reasoning, language modeling, and retrieval tasks demonstrate that ReSA achieves near-lossless generation quality with significantly improved efficiency. Notably, ReSA delivers up to a 3.77x end-to-end speedup when decoding at a 256K sequence length, making it a practical solution for scalable long-context inference.
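The decoding pattern described in the abstract can be illustrated with a minimal sketch: use block-sparse attention for ordinary decoding steps, and every fixed number of steps run a dense forward pass that refreshes the KV cache so sparse-attention approximation error cannot accumulate. Everything below is an illustrative assumption rather than the paper's implementation: the two-layer toy model, the dimensions, the top-k "mean-key score" block-selection rule, and the full-prefix refresh (the actual method may rectify more selectively and efficiently).

```python
# Minimal sketch of sparse decoding with periodic dense rectification.
# All model details here are illustrative assumptions, not the ReSA implementation.
import numpy as np

rng = np.random.default_rng(0)
d, layers, block, topk, rectify_interval = 32, 2, 16, 4, 64

# Random projection weights for a toy 2-layer single-head attention stack.
W = [{k: rng.standard_normal((d, d)) / np.sqrt(d) for k in "qkvo"} for _ in range(layers)]

def attend(q, K, V, idx=None):
    """Single-query attention; if idx is given, attend only to those positions."""
    if idx is not None:
        K, V = K[idx], V[idx]
    s = K @ q / np.sqrt(d)
    p = np.exp(s - s.max()); p /= p.sum()
    return p @ V

def select_blocks(q, K):
    """Keep the top-k key blocks scored by mean key (a stand-in block-sparse rule)."""
    n_blocks = int(np.ceil(len(K) / block))
    scores = [K[b * block:(b + 1) * block].mean(axis=0) @ q for b in range(n_blocks)]
    keep = sorted(np.argsort(scores)[-topk:])
    return np.concatenate([np.arange(b * block, min((b + 1) * block, len(K))) for b in keep])

def dense_prefill(X):
    """Dense (exact) forward pass over the whole prefix; returns fresh KV caches."""
    caches, h = [], X
    for w in W:
        Q, K, V = h @ w["q"], h @ w["k"], h @ w["v"]
        out = np.stack([attend(Q[t], K[:t + 1], V[:t + 1]) for t in range(len(h))])
        caches.append((K, V))
        h = h + out @ w["o"]
    return caches

# Toy prompt embeddings, then an incremental decode loop with sparse attention.
X = rng.standard_normal((256, d))
caches = dense_prefill(X)

for step in range(1, 257):
    x = rng.standard_normal(d)  # stand-in embedding of the newly generated token
    h, new_caches = x, []
    for (K, V), w in zip(caches, W):
        q, k, v = h @ w["q"], h @ w["k"], h @ w["v"]
        K, V = np.vstack([K, k]), np.vstack([V, v])
        idx = select_blocks(q, K)            # block-sparse attention during decoding
        h = h + attend(q, K, V, idx) @ w["o"]
        new_caches.append((K, V))
    caches = new_caches
    X = np.vstack([X, x])

    # Periodic dense rectification: refresh the KV cache with an exact forward pass
    # so that sparse-attention error in the cached states cannot keep accumulating.
    if step % rectify_interval == 0:
        caches = dense_prefill(X)
```

In this toy, the second layer's cached keys and values depend on the first layer's (sparsely approximated) outputs, so they drift from their dense counterparts as decoding proceeds; the periodic dense pass rewrites them with exact values, which is the error-bounding effect the abstract attributes to rectification.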
Primary Area: foundation or frontier models, including LLMs
Submission Number: 5189