AdaDecode: Accelerating LLM Decoding with Adaptive Layer Parallelism

Published: 01 May 2025, Last Modified: 23 Jul 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: Built on the proposed adaptive layer parallelism, AdaDecode significantly accelerates autoregressive decoding while ensuring output parity, without requiring auxiliary models or any changes to the original model parameters.
Abstract: Large language models (LLMs) are increasingly used for long-content generation (e.g., long Chain-of-Thought reasoning), where decoding efficiency becomes a critical bottleneck: autoregressive decoding is inherently limited by its sequential token generation process, in which each token must be generated before the next can be processed. This sequential dependency restricts the ability to fully leverage modern hardware’s parallel processing capabilities. Existing methods such as speculative decoding and layer skipping offer potential speedups but have notable drawbacks: speculative decoding relies on an auxiliary “drafter” model, which can be challenging to acquire and increases memory overhead, while layer skipping may introduce discrepancies in the generated outputs due to the missing key-value cache at skipped layers. In this work, we propose AdaDecode, which accelerates LLM decoding without requiring auxiliary models or changes to the original model parameters, while ensuring output consistency. AdaDecode leverages the insight that many tokens, particularly simple or highly predictable ones, can be accurately generated at intermediate layers, as later layers often do not significantly alter predictions once the model reaches a certain confidence. By adaptively generating tokens at intermediate layers when confidence is high, AdaDecode enables the next token’s computation to begin immediately. The remaining layer computations for early-predicted tokens are deferred and executed in parallel with subsequent tokens when needed, maximizing hardware utilization and reducing decoding latency. A final verification step ensures that early predictions match the results of standard autoregressive decoding, preserving output parity. Experiments across diverse generation tasks show that AdaDecode consistently achieves higher decoding throughput than baselines, with up to 1.73$\times$ speedup, while guaranteeing output parity with standard autoregressive decoding.
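To make the decoding loop described above concrete, here is a minimal, self-contained toy sketch (not the authors' implementation, which is linked below): a token is emitted from an intermediate layer when the intermediate prediction is confident, the remaining layer computations for that token are deferred, and a verification step keeps the output identical to standard autoregressive decoding. The random numpy "model", the confidence threshold, and all function names are illustrative assumptions; in the actual method the deferred computation runs in parallel with subsequent tokens rather than eagerly.

```python
# Toy sketch of confidence-based early prediction with deferred layers and
# verification, based only on the abstract. The random numpy "model" and all
# names here are illustrative placeholders, not the AdaDecode implementation.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM, NUM_LAYERS = 50, 16, 8

embed = rng.normal(size=(VOCAB, DIM))                 # token embeddings
layers = [rng.normal(size=(DIM, DIM)) / np.sqrt(DIM)  # stand-in "transformer" layers
          for _ in range(NUM_LAYERS)]
head = rng.normal(size=(DIM, VOCAB))                  # shared output head

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def run_layers(h, start, stop):
    """Apply layers[start:stop] to the hidden state h."""
    for W in layers[start:stop]:
        h = np.tanh(h @ W)
    return h

def predict(h):
    """Return (token, confidence) from the output head at the current depth."""
    p = softmax(h @ head)
    return int(p.argmax()), float(p.max())

def ada_decode_toy(prompt, steps=10, threshold=0.9):
    out = list(prompt)
    while len(out) - len(prompt) < steps:
        h = embed[out[-1]]                    # toy "context": last token only
        early_tok, exit_layer = None, NUM_LAYERS
        for l in range(NUM_LAYERS):
            h = run_layers(h, l, l + 1)
            tok, conf = predict(h)
            if conf >= threshold:
                # Confident intermediate prediction: emit the token early so the
                # next token's computation could start immediately.
                early_tok, exit_layer = tok, l + 1
                break
        # Deferred computation of the remaining layers for the early token.
        # In the described method this runs in parallel with subsequent tokens;
        # here it runs eagerly for clarity.
        full_tok, _ = predict(run_layers(h, exit_layer, NUM_LAYERS))
        if early_tok is not None and early_tok == full_tok:
            out.append(early_tok)             # verified: early prediction accepted
        else:
            out.append(full_tok)              # no early exit, or rollback to the
                                              # full-depth token to preserve parity
    return out

print(ada_decode_toy(prompt=[1, 2, 3]))
```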
Lay Summary: Large language models (LLMs) are increasingly used to produce long, detailed texts such as Chain-of-Thought reasoning. However, generating this kind of content can be slow because LLMs typically produce one word at a time, and each word must be completed before the next begins. This sequential process restricts the ability to fully leverage modern computer hardware’s parallel processing capabilities. In this work, we present AdaDecode, a new method that speeds up text generation without changing the original model parameters or introducing extra auxiliary models. The idea is simple: if the model is confident about a word early on, we predict it using only part of the model and start working on the next word immediately. Any unfinished computation is done in parallel later, followed by a verification step to ensure output quality. This approach makes better use of hardware and significantly reduces generation time. Our experiments show that AdaDecode can make generation up to 1.73x faster while keeping the output exactly the same as standard generation.
Link To Code: https://github.com/weizhepei/AdaDecode
Primary Area: Deep Learning->Large Language Models
Keywords: Autoregressive model, efficient decoding, parallel token processing
Submission Number: 12478