SpecExtend: A Drop-in Enhancement for Speculative Decoding of Long Sequences

ACL ARR 2025 May Submission7408 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Speculative decoding is a widely adopted technique for accelerating inference in large language models (LLMs), but its performance degrades on long inputs due to increased attention cost and reduced draft accuracy. We introduce SpecExtend, a drop-in enhancement that improves the performance of speculative decoding on long sequences without any additional training. SpecExtend integrates efficient attention mechanisms such as FlashAttention and Hybrid Tree Attention into both the draft and target models, reducing latency across all stages. To improve draft accuracy, we propose Cross-model Retrieval, a novel cache update strategy that uses the target model's attention scores to dynamically select relevant context for the draft model. Extensive evaluations on three long-context understanding datasets show that SpecExtend improves standard tree-based speculative decoding by up to 2.29x for inputs up to 16K tokens, providing an effective solution for speculative decoding of long sequences.
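To make the Cross-model Retrieval idea from the abstract concrete, the sketch below illustrates one plausible reading of it: the target model's attention scores over the long input are pooled into fixed-size chunks, and the most-attended chunks are selected as the context the draft model keeps in its cache. All function names, shapes, and the chunking scheme are illustrative assumptions, not the paper's actual implementation or API.

```python
import torch


def cross_model_retrieval_sketch(target_attn_scores, num_chunks, chunk_size, top_k):
    """Minimal sketch (assumed interface) of cache selection guided by the
    target model's attention, as described in the abstract.

    target_attn_scores: (num_heads, seq_len) attention weights taken from the
        target model's most recent decoding step (assumption: already averaged
        over query positions).
    Returns indices of the top-k chunks, sorted back into original order so the
    draft model's retained context stays in sequence order.
    """
    seq_len = num_chunks * chunk_size
    # Average over heads, then aggregate scores within each fixed-size chunk.
    per_token = target_attn_scores[:, :seq_len].mean(dim=0)          # (seq_len,)
    per_chunk = per_token.view(num_chunks, chunk_size).sum(dim=-1)   # (num_chunks,)
    # Keep the chunks the target model attended to most.
    top = torch.topk(per_chunk, k=min(top_k, num_chunks)).indices
    return torch.sort(top).values


if __name__ == "__main__":
    # Toy example: 8 chunks of 128 tokens each; keep the 4 most-attended chunks.
    scores = torch.rand(32, 8 * 128)
    print(cross_model_retrieval_sketch(scores, num_chunks=8, chunk_size=128, top_k=4))
```

In an actual speculative decoding loop, the selected chunk indices would determine which entries of the draft model's KV cache are refreshed before the next drafting round; how and when that refresh happens is specified in the paper itself, not in this sketch.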
Paper Type: Short
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: LLM Efficiency, NLP in resource-constrained settings
Contribution Types: NLP engineering experiment, Approaches low compute settings-efficiency, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 7408