Abstract: Efficient long-sequence generation is a critical challenge for Large Language Models. While recent sparse decoding methods improve efficiency, they suffer from KV cache misalignment, where approximation errors accumulate and degrade generation quality. In this work, we propose Rectified Sparse Attention (ReSA), a simple yet effective method that combines block-sparse attention with periodic dense rectification. By refreshing the KV cache at fixed intervals using a dense forward pass, ReSA bounds error accumulation and preserves alignment with the pretraining distribution. Experiments across math reasoning, language modeling, and retrieval tasks demonstrate that ReSA achieves near-lossless generation quality with significantly improved efficiency. Notably, ReSA delivers up to a 2.42$\times$ end-to-end speedup when decoding at a 256K sequence length, making it a practical solution for scalable long-context inference.
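To make the mechanism concrete, below is a minimal Python sketch of the decoding loop the abstract describes: cheap block-sparse decoding steps interleaved with a dense forward pass that rewrites the KV cache at a fixed interval. The three callables, the `rectify_every` interval, and all names are illustrative assumptions, not the paper's actual implementation or API.

```python
from typing import Callable, List, Tuple

# Hedged sketch of ReSA-style decoding. The callables stand in for model
# components and are assumptions, not the paper's actual interface:
#   dense_prefill(prompt)      -> (exact KV cache, first token id)
#   sparse_step(token, cache)  -> (next token id, updated sparse cache)
#   dense_refresh(all_tokens)  -> exact KV cache recomputed densely
def resa_generate(
    dense_prefill: Callable[[List[int]], Tuple[object, int]],
    sparse_step: Callable[[int, object], Tuple[int, object]],
    dense_refresh: Callable[[List[int]], object],
    prompt_ids: List[int],
    max_new_tokens: int,
    rectify_every: int = 128,  # refresh interval; value is hypothetical
) -> List[int]:
    # Dense prefill builds an exact KV cache for the prompt.
    kv_cache, next_id = dense_prefill(prompt_ids)
    output: List[int] = []
    for step in range(max_new_tokens):
        # Cheap step: block-sparse attention reads only selected KV
        # blocks, so the approximate cache drifts slightly each step.
        next_id, kv_cache = sparse_step(next_id, kv_cache)
        output.append(next_id)
        # Periodic rectification: a dense forward pass over the full
        # sequence rewrites the KV cache, bounding error accumulation
        # before it drifts from the pretraining distribution.
        if (step + 1) % rectify_every == 0:
            kv_cache = dense_refresh(prompt_ids + output)
    return output
```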
Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: Sparse Attention, Test-Time Scaling, Long Generation
Contribution Types: Approaches for low compute settings-efficiency
Languages Studied: English
Reassignment Request Area Chair: This is not a resubmission
Reassignment Request Reviewers: This is not a resubmission
Software: zip
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: No
A2 Elaboration: The proposed method is a machine-learning algorithm and does not pose direct risks.
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: Artifacts are cited appropriately.
B2 Discuss The License For Artifacts: N/A
B3 Artifact Use Consistent With Intended Use: N/A
B4 Data Contains Personally Identifying Info Or Offensive Content: N/A
B5 Documentation Of Artifacts: Yes
B5 Elaboration: Sections 3.1 and 3.2
B6 Statistics For Data: Yes
B6 Elaboration: Sections 3.1 and 3.2
C Computational Experiments: Yes
C1 Model Size And Budget: Yes
C1 Elaboration: Sections 3.1 and 3.2
C2 Experimental Setup And Hyperparameters: Yes
C2 Elaboration: Sections 3.1 and 3.2
C3 Descriptive Statistics: Yes
C3 Elaboration: Sections 3.1 and 3.2
C4 Parameters For Packages: Yes
C4 Elaboration: Sections 3.1 and 3.2
D Human Subjects Including Annotators: No
D1 Instructions Given To Participants: N/A
D2 Recruitment And Payment: N/A
D3 Data Consent: N/A
D4 Ethics Review Board Approval: N/A
D5 Characteristics Of Annotators: N/A
E Ai Assistants In Research Or Writing: Yes
E1 Information About Use Of Ai Assistants: N/A
E1 Elaboration: We used an LLM to improve the writing of the paper.
Author Submission Checklist: Yes
Submission Number: 1078