Keywords: Efficient LLM, LLM Inference, Speculative Decoding, Long-context Inference
TL;DR: We propose SpecExtend, a drop-in enhancement that improves the performance of speculative decoding on long sequences without additional training.
Abstract: Speculative decoding is a widely used technique for accelerating inference in large language models (LLMs), but its performance degrades as input length grows, with significant drops even at moderate lengths. This early degradation has remained largely underexplored. We introduce SpecExtend, a drop-in enhancement that improves speculative decoding on long sequences without additional training. SpecExtend integrates efficient attention mechanisms such as FlashAttention and Hybrid Tree Attention to accelerate the prefill and verification steps. To improve both draft accuracy and speed on long inputs without retraining, we propose Cross-model Retrieval, a novel KV cache eviction strategy that leverages the target model's attention scores to dynamically select the context most relevant to the smaller draft model. Extensive evaluations show that SpecExtend accelerates speculative decoding by up to 2.84× on 16K-token long summarization and up to 3.86× on long reasoning, while preserving the short-input performance of state-of-the-art frameworks.
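The abstract only names Cross-model Retrieval at a high level. As a rough illustration of what target-guided KV selection for a draft model could look like, here is a minimal sketch, assuming the target model's attention mass has already been aggregated into one score per prefix token; the function name `select_draft_context` and the `chunk_size` / `budget` parameters are hypothetical, not the paper's actual interface:

```python
import torch

def select_draft_context(
    attn_scores: torch.Tensor,  # [seq_len] target-model attention mass per prefix token
    chunk_size: int = 32,       # hypothetical retrieval granularity
    budget: int = 2048,         # hypothetical token budget for the draft model's KV cache
) -> torch.Tensor:
    """Rank fixed-size chunks of the prefix by the target model's attention
    mass and return the token indices of the top chunks, in positional order."""
    seq_len = attn_scores.shape[0]
    n_chunks = (seq_len + chunk_size - 1) // chunk_size
    # Pad so the sequence divides evenly into chunks, then score each chunk.
    padded = torch.zeros(n_chunks * chunk_size, device=attn_scores.device)
    padded[:seq_len] = attn_scores
    chunk_scores = padded.view(n_chunks, chunk_size).sum(dim=-1)
    # Keep as many of the highest-scoring chunks as fit in the budget.
    k = max(1, min(n_chunks, budget // chunk_size))
    top_chunks = torch.topk(chunk_scores, k).indices.sort().values
    # Expand chunk ids to token ids and drop padding beyond the true length.
    offsets = torch.arange(chunk_size, device=attn_scores.device)
    token_idx = (top_chunks[:, None] * chunk_size + offsets).flatten()
    return token_idx[token_idx < seq_len]
```

Under this reading, the draft model's KV cache would be gathered at the returned indices whenever the retrieved context is refreshed, so the draft attends only to prefix regions the target model recently found relevant.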
Supplementary Material: zip
Primary Area: generative models
Submission Number: 25252