Free Draft-and-Verification: Toward Lossless Parallel Decoding for Diffusion Large Language Models

Published: 16 Oct 2025, Last Modified: 10 Nov 2025 · NeurIPS 2025 ER Workshop · CC BY 4.0
Keywords: Diffusion Large Language Models, efficient inference, fast sampling
TL;DR: We propose a novel fast sampling algorithm for DLLMs that achieves lossless parallel decoding without extra cost.
Abstract: Diffusion Large Language Models (DLLMs) have emerged as a new paradigm of language modeling beyond autoregressive next-token prediction. Thanks to their bidirectional attention mechanism, DLLMs are better at capturing contextual dependencies, and thus show unique advantages on challenges such as the well-known "reversal curse" and learning in data-constrained scenarios. Moreover, their modeling formulation naturally supports parallel decoding algorithms that predict multiple tokens per step, promising further inference acceleration. However, high generation quality typically requires a number of decoding steps on the order of the sequence length, and existing parallel decoding methods trade speed for non-negligible performance degradation. To overcome this challenge, we introduce **Free** **D**raft-**a**nd-**Ve**rification (**FreeDave**), a novel fast sampling algorithm tailored for DLLMs that achieves lossless parallel decoding. Specifically, we propose a pipeline that generates candidate drafts via parallel decoding and verifies them, guaranteed to reproduce the sequence produced by static decoding, with no external modules, no extra model forward calls, and no post-training stage. Extensive evaluations on math-reasoning and code-generation benchmarks across different DLLMs show that FreeDave boosts inference throughput by up to $3.78\times$ without performance degradation. Code is available at https://github.com/cychomatica/FreeDave.
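To make the draft-and-verify idea concrete, below is a minimal sketch of one way such a loop could look for a masked DLLM. It is not the authors' implementation: the interface `model(tokens) -> logits`, the `MASK_ID` constant, and the `draft_k` parameter are all assumptions, and the acceptance rule shown (re-mask drafted tokens the model no longer predicts) is a simplification of FreeDave's lossless verification.

```python
import torch

MASK_ID = 0  # hypothetical [MASK] token id; real DLLMs define their own


@torch.no_grad()
def draft_and_verify(model, tokens, draft_k=4):
    """Illustrative draft-and-verify loop for a masked diffusion LM.

    `tokens` is a 1-D LongTensor holding MASK_ID at undecoded positions;
    `model(tokens)` is assumed to return (seq_len, vocab_size) logits.
    """
    tokens = tokens.clone()
    prev_pos, prev_val = None, None  # draft committed on the previous pass
    while (tokens == MASK_ID).any():
        logits = model(tokens)                   # one forward per iteration
        conf, pred = logits.softmax(-1).max(-1)  # per-position confidence / argmax

        # Verify: re-mask previously drafted tokens that the model no longer
        # predicts, so only draft tokens consistent with the model survive.
        if prev_pos is not None:
            rejected = prev_pos[pred[prev_pos] != prev_val]
            tokens[rejected] = MASK_ID

        # Draft: speculatively commit the draft_k most confident masked
        # positions in parallel; they will be checked on the next pass.
        conf = conf.masked_fill(tokens != MASK_ID, float("-inf"))
        k = min(draft_k, int((tokens == MASK_ID).sum()))
        prev_pos = conf.topk(k).indices
        prev_val = pred[prev_pos]
        tokens[prev_pos] = prev_val
    # A faithful implementation would also verify the final draft here.
    return tokens
```

Note that each loop iteration issues exactly one model forward, which both verifies the previous draft and produces the next one; this mirrors the abstract's claim of no extra forward calls, though a faithful implementation would need a stricter acceptance rule to match static decoding exactly.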
Submission Number: 221