PosS: Position Specialist Generates Better Draft for Speculative Decoding

ICLR 2026 Conference Submission13914 Authors

18 Sept 2025 (modified: 08 Oct 2025)
License: CC BY 4.0
Keywords: Efficient Generation; Fast Generation; Speculative Decoding
Abstract: Speculative decoding accelerates Large Language Model (LLM) inference by using a small draft model to predict multiple tokens and a large target model to verify these tokens in parallel. Recent studies leverage the hidden state of the target model to enhance the draft model's prediction accuracy. However, existing methods suffer from degrading draft token quality at later positions, due to error accumulation in features generated by the draft model. In this paper, we propose Position Specialists (PosS), which consist of multiple position-specialized draft layers that generate tokens at their assigned position(s). Position specialists greatly improve the token acceptance rate at later positions in each drafting round, as each specialist only needs to handle a certain level of draft model feature deviation. Experimental results on six datasets demonstrate that PosS effectively improves over baselines on average acceptance length and speed-up ratio.
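The abstract does not spell out implementation details, so the following is only a minimal illustrative sketch (not the authors' code) of how a drafting round with per-position specialist layers might look. All names, shapes, and the specialist architecture (`PositionSpecialistDrafter`, `hidden_size`, `draft_len`, the two-layer MLP specialists) are assumptions for illustration; it uses PyTorch.

```python
# Hypothetical sketch of a position-specialist drafting round.
# Assumption: each draft position has its own small layer that maps the
# target model's hidden state plus the previous draft feature to a new feature.
import torch
import torch.nn as nn


class PositionSpecialistDrafter(nn.Module):
    """One small draft layer per draft position (illustrative structure)."""

    def __init__(self, hidden_size: int, vocab_size: int, draft_len: int):
        super().__init__()
        # Each specialist is trained to handle the feature deviation typical
        # of its own position in the drafting round.
        self.specialists = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_size * 2, hidden_size),
                nn.SiLU(),
                nn.Linear(hidden_size, hidden_size),
            )
            for _ in range(draft_len)
        )
        self.lm_head = nn.Linear(hidden_size, vocab_size, bias=False)

    def forward(self, target_hidden: torch.Tensor) -> torch.Tensor:
        """Autoregressively draft one token per specialist from the target's hidden state."""
        prev, tokens = target_hidden, []
        for specialist in self.specialists:
            # Specialist at this position sees (target feature, previous draft feature).
            feat = specialist(torch.cat([target_hidden, prev], dim=-1))
            tokens.append(self.lm_head(feat).argmax(dim=-1))
            prev = feat
        return torch.stack(tokens, dim=-1)  # [batch, draft_len] draft token ids


# Toy usage: draft 4 tokens from a fake target hidden state.
# In real speculative decoding, the target model would then verify these
# drafted tokens in parallel and accept the longest matching prefix.
hidden_size, vocab_size, draft_len = 64, 1000, 4
drafter = PositionSpecialistDrafter(hidden_size, vocab_size, draft_len)
target_hidden = torch.randn(1, hidden_size)
print("draft tokens:", drafter(target_hidden).tolist())
```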
Primary Area: foundation or frontier models, including LLMs
Submission Number: 13914