Accelerating LLM Inference with Staged Speculative Decoding

Published: 20 Jun 2023, Last Modified: 16 Jul 2023, ES-FoMO 2023 Poster
Keywords: foundation model, large language model, inference, efficiency, edge, artificial intelligence, open source
TL;DR: By using stages of small models to anticipate an LLM's predictions, queries to the LLM can be batched, reducing inference latency by 3x.
Abstract: Recent advances with large language models (LLMs) illustrate their diverse capabilities. We propose a novel algorithm, staged speculative decoding, to accelerate LLM inference in small-batch, on-device scenarios. We address the low arithmetic intensity of small-batch inference by improving upon previous work in speculative decoding. First, we restructure the speculative batch as a tree, which reduces generation costs and increases the expected tokens per batch. Second, we add a second stage of speculative decoding. Taken together, we reduce single-batch decoding latency by 3.16x with a 762M parameter GPT-2-L model while perfectly preserving output quality.
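The abstract summarizes two ideas: drafting candidate tokens as a tree rather than a single sequence, and adding a second stage of speculation so the draft model is itself accelerated by an even smaller model. As a rough, hedged illustration of the single-stage case only, the following Python sketch drafts a small token tree with a cheap model and verifies it against the target model; the function names, the greedy next-token callable interface, and the branching rule are assumptions made for illustration, not the authors' implementation.

```python
# Minimal single-stage sketch of speculative decoding with a tree of drafts.
# Assumed interface: a "model" is a greedy next-token callable over a token list.
# The paper's second stage (speculating the draft model itself) is omitted here.

from typing import Callable, List

NextToken = Callable[[List[int]], int]


def draft_tree(draft_model: NextToken, prefix: List[int],
               depth: int, branch: int) -> List[List[int]]:
    """Enumerate candidate continuations of length `depth` from the draft model.

    This toy version branches by perturbing the draft token; a real system
    would branch over the draft model's top-k tokens at each tree level.
    """
    if depth == 0:
        return [[]]
    paths = []
    base = draft_model(prefix)
    for b in range(branch):
        tok = base + b  # stand-in for the b-th most likely draft token
        for tail in draft_tree(draft_model, prefix + [tok], depth - 1, branch):
            paths.append([tok] + tail)
    return paths


def verify_and_accept(target_model: NextToken, prefix: List[int],
                      paths: List[List[int]]) -> List[int]:
    """Score all drafted paths with the target model (conceptually one batched
    forward pass) and keep the longest prefix the target agrees with, plus one
    token from the target itself, so output always grows by at least one token
    and matches what greedy decoding of the target alone would produce."""
    best: List[int] = []
    for path in paths:
        accepted: List[int] = []
        ctx = list(prefix)
        for tok in path:
            if target_model(ctx) != tok:
                break
            accepted.append(tok)
            ctx.append(tok)
        if len(accepted) > len(best):
            best = accepted
    return best + [target_model(prefix + best)]


if __name__ == "__main__":
    # Toy deterministic "models" just to show the accept/reject mechanics.
    target = lambda ctx: (sum(ctx) + 1) % 50
    draft = lambda ctx: (sum(ctx) + 1) % 50  # a perfect draft, so paths are accepted
    tree = draft_tree(draft, [1, 2, 3], depth=3, branch=2)
    print(verify_and_accept(target, [1, 2, 3], tree))
```

In this sketch the verification loop is sequential for clarity; the latency benefit described in the abstract comes from evaluating the whole tree of drafted tokens in one batched pass of the large model, raising arithmetic intensity without changing the accepted output.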
Submission Number: 47