Keywords: diffusion language models, early termination, adaptive inference, training metadata, AdamW-trajectory parameter importance, LoRA, reasoning benchmarks
TL;DR: EDIT uses training-time metadata to enable early termination during inference in diffusion language models, reducing cost while maintaining or improving accuracy.
Abstract: Diffusion-based large language models (dLLMs) refine token generations through iterative denoising, but answers often stabilize before all denoising steps complete. We propose EDIT (Early Diffusion Inference Termination), an inference-time criterion that adaptively stops denoising once the model's reasoning has stabilized relative to the reasoning patterns learned during training. EDIT monitors the alignment between token activations and a reasoning map derived from AdamW-aggregated LoRA updates captured during supervised fine-tuning (SFT). Training-time optimization dynamics generate rich metadata about parameter importance, which prior methods typically discard upon model release; we preserve this information as a compact representation of learned reasoning pathways. During inference, alignment scores are converted into a distribution over the tokens already unmasked at the current denoising step, and convergence is detected when the KL divergence between the distributions at consecutive steps, computed over the matched unmasked (visible) tokens, falls below a threshold.
Across reasoning benchmarks, EDIT reduces diffusion steps by 11.8% to 68.3% while preserving or improving accuracy in most settings, with roughly 0.02% storage overhead (about 1.5-2 MB for all QKV modules across 32 blocks of an 8 GB model).
By leveraging training-time gradient dynamics, our work opens a new research direction for reducing dLLM inference time and cost.
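To make the stopping rule concrete, the following is a minimal Python/PyTorch sketch of the inference-time check described in the abstract: alignment scores against a stored reasoning map are normalized into a distribution over the currently unmasked tokens, and denoising stops when the KL divergence between consecutive steps, restricted to the matched visible tokens, drops below a threshold. The function names, tensor shapes, softmax temperature, and KL direction are illustrative assumptions, not the paper's implementation.

```python
import torch

def kl_divergence(p: torch.Tensor, q: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """KL(p || q) for two discrete distributions over the same support."""
    p = p.clamp_min(eps)
    q = q.clamp_min(eps)
    return torch.sum(p * (p / q).log())

def alignment_distribution(activations: torch.Tensor,
                           reasoning_map: torch.Tensor,
                           unmasked_idx: torch.Tensor,
                           temperature: float = 1.0) -> torch.Tensor:
    """Score the currently unmasked tokens against the stored reasoning map
    (here a single hidden-size vector standing in for the AdamW/LoRA-derived
    metadata) and normalize the scores into a distribution."""
    scores = activations[unmasked_idx] @ reasoning_map  # shape: (n_unmasked,)
    return torch.softmax(scores / temperature, dim=0)

def should_stop(prev_dist: torch.Tensor, prev_idx: torch.Tensor,
                curr_dist: torch.Tensor, curr_idx: torch.Tensor,
                threshold: float = 1e-3) -> bool:
    """Compare the distributions of two consecutive denoising steps over the
    tokens that are unmasked (visible) at both steps, renormalize on that
    matched support, and stop when the KL divergence falls below `threshold`.
    The KL direction (current vs. previous) is an illustrative choice."""
    prev_pos = {int(i): j for j, i in enumerate(prev_idx)}
    matched = [j for j, i in enumerate(curr_idx) if int(i) in prev_pos]
    if not matched:
        return False
    p = torch.stack([prev_dist[prev_pos[int(curr_idx[j])]] for j in matched])
    q = torch.stack([curr_dist[j] for j in matched])
    p, q = p / p.sum(), q / q.sum()
    return kl_divergence(q, p).item() < threshold

if __name__ == "__main__":
    # Synthetic demo: a denoising step that barely changes the activations
    # should yield a small KL divergence and trigger early termination.
    torch.manual_seed(0)
    hidden, seq = 16, 10
    reasoning_map = torch.randn(hidden)                       # stand-in metadata
    acts_prev = torch.randn(seq, hidden)
    acts_curr = acts_prev + 1e-3 * torch.randn(seq, hidden)   # nearly converged
    idx_prev = torch.arange(6)                                # visible at step t-1
    idx_curr = torch.arange(7)                                # one token newly unmasked at step t
    d_prev = alignment_distribution(acts_prev, reasoning_map, idx_prev)
    d_curr = alignment_distribution(acts_curr, reasoning_map, idx_curr)
    print("terminate early:", should_stop(d_prev, idx_prev, d_curr, idx_curr))
```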
Submission Number: 77