Temperature-Centric Investigation of Speculative Decoding with Knowledge Distillation

ACL ARR 2024 June Submission 2057 Authors

15 Jun 2024 (modified: 13 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Speculative decoding stands as a pivotal technique to expedite inference in autoregressive (large) language models. This method employs a smaller \textit{draft} model to speculate a block of tokens, which the \textit{target} model then evaluates for acceptance. Despite a wealth of studies aimed at increasing the efficiency of speculative decoding, the influence of generation configurations on the decoding process remains poorly understood, especially concerning decoding temperatures. This paper delves into the effects of decoding temperatures on speculative decoding's efficacy. Beginning with knowledge distillation (KD), we first highlight the challenge of decoding at higher temperatures, and demonstrate that KD performed at a consistent temperature setting can be a remedy. We also investigate the effects of out-of-domain test sets with out-of-range temperatures. Building upon these findings, we take an initial step toward further improving the speedup of speculative decoding, particularly in high-temperature generation settings. Our work offers new insights into how generation configurations drastically affect the performance of speculative decoding, and underscores the need for developing methods that focus on diverse decoding configurations.
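For readers unfamiliar with the draft-and-verify mechanism summarized above, the sketch below illustrates the standard speculative sampling accept/reject rule (as in Leviathan et al. and Chen et al.) with temperature-scaled softmax distributions. It is a minimal, self-contained toy in NumPy, not the authors' implementation; the logit vectors and function names are illustrative assumptions.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Logits -> probabilities at the given sampling temperature."""
    z = (logits - logits.max()) / temperature
    e = np.exp(z)
    return e / e.sum()

def speculative_step(draft_logits, target_logits, temperature, rng):
    """One accept/reject step of speculative sampling for a single position.

    Returns (token, accepted). The accept/reject rule guarantees the
    returned token is distributed exactly as the target distribution p.
    """
    q = softmax(draft_logits, temperature)   # draft model's distribution
    p = softmax(target_logits, temperature)  # target model's distribution

    x = rng.choice(len(q), p=q)              # draft model proposes a token

    # Accept with probability min(1, p(x) / q(x)).
    if rng.random() < min(1.0, p[x] / q[x]):
        return x, True

    # On rejection, resample from the residual max(p - q, 0), renormalized.
    residual = np.maximum(p - q, 0.0)
    return rng.choice(len(residual), p=residual / residual.sum()), False

# Toy demo: the expected acceptance probability is sum_x min(p(x), q(x)),
# so it depends directly on how the decoding temperature reshapes p and q.
rng = np.random.default_rng(0)
draft_logits = rng.normal(size=32)
target_logits = draft_logits + 0.5 * rng.normal(size=32)
for T in (0.5, 1.0, 2.0):
    p, q = softmax(target_logits, T), softmax(draft_logits, T)
    print(f"T={T}: expected acceptance rate {np.minimum(p, q).sum():.3f}")
```

In practice the drafted block contains several tokens verified in one target forward pass, but the per-token rule is exactly this single-position step; how its acceptance rate behaves across temperatures for real draft/target model pairs is the question the paper studies empirically.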
Paper Type: Long
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: speculative decoding, knowledge distillation, temperature
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English
Submission Number: 2057