CreditDecoding: Accelerating Parallel Decoding in Diffusion Large Language Models with Trace Credits
Keywords: Diffusion Large Language Models, Parallel Decoding, Inference Acceleration, Training-Free Optimization
Abstract: Diffusion large language models (dLLMs) generate text through iterative denoising. In commonly adopted parallel decoding schemes, each step confirms only high-confidence positions while remasking the others.
By analyzing dLLM denoising traces, we uncover a key inefficiency: models often predict the correct target token several steps before its confidence becomes high enough to be decoded.
This gap between early prediction and late decoding forces repeated remasking of already-correct tokens, causing redundant iterations and limiting acceleration.
To exploit this temporal redundancy, we introduce Trace Credit to quantify a token's decoding potential by accumulating historical evidence.
Building on this, we propose CreditDecoding, a training-free parallel decoding method that fuses Trace Credit with current logits to boost the confidence of correct but underconfident tokens, thereby accelerating denoising and improving robustness.
On eight benchmarks, CreditDecoding achieves up to a 5.48× speedup with a +0.48 accuracy gain on LLaDA-8B, and consistently improves performance across diverse dLLM architectures and parameter scales.
It further scales to long contexts and remains orthogonal to mainstream inference optimizations, making it a practical and widely applicable solution.
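The core idea of the abstract can be sketched in code. The snippet below is an illustrative, simplified reconstruction (not the authors' implementation): it assumes trace credit is an exponentially decayed accumulation of per-position token probabilities, fused additively with the current logits before thresholding confidence; the names `decay`, `alpha`, and `threshold` are hypothetical parameters introduced for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def credit_decoding_step(logits, credit, decoded,
                         decay=0.9, alpha=1.0, threshold=0.9):
    """One parallel-decoding step with trace credits (illustrative sketch).

    logits:  (seq_len, vocab) current denoising logits
    credit:  (seq_len, vocab) accumulated historical evidence
    decoded: (seq_len,) boolean mask of already-confirmed positions
    """
    # Accumulate evidence across steps: decay old credit, add current probs.
    credit = decay * credit + softmax(logits)
    # Fuse credit with current logits so tokens predicted consistently over
    # past steps get a confidence boost even if the current step is unsure.
    fused = logits + alpha * np.log(credit + 1e-9)
    probs = softmax(fused)
    tokens = probs.argmax(-1)
    conf = probs.max(-1)
    # Confirm positions whose fused confidence clears the threshold;
    # the rest remain masked for the next denoising iteration.
    decoded = decoded | ((~decoded) & (conf >= threshold))
    return tokens, decoded, credit
```

Under this sketch, a position whose correct token is predicted early but with modest confidence accrues credit over successive steps, crossing the decoding threshold sooner than it would from the current-step logits alone.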
Paper Type: Long
Research Area: LLM Efficiency
Research Area Keywords: LLM Efficiency, inference methods, efficient models
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches low compute settings-efficiency
Languages Studied: English
Submission Number: 9780