Keywords: Diffusion Language Models, Efficiency, Sampling, Masked Diffusion Models, Discrete Flow Models, Reasoning
TL;DR: We propose an adaptive multi-token unmasking sampler for masked diffusion language models, achieving a 2-3x speedup on code generation and math reasoning benchmarks without loss in accuracy.
Abstract: Recent masked diffusion models (MDMs) have shown competitive performance with autoregressive models (ARMs) for language modeling. While most of the literature has focused on performance-enhancing sampling procedures, efficient sampling from MDMs has been scarcely explored. We observe that a given sequence of partially masked tokens often determines the values of multiple unknown tokens deterministically, meaning that a single prediction of a masked diffusion model holds additional information left unused by standard sampling procedures.
Based on this observation, we introduce *EB-Sampler*, a simple drop-in replacement for existing samplers that uses an **E**ntropy-**B**ounded unmasking procedure to dynamically unmask multiple tokens in a single function evaluation, within a predefined approximate error tolerance. We formulate the EB-Sampler as part of a broad family of adaptive samplers for which we provide an error analysis that motivates our algorithmic choices. EB-Sampler accelerates sampling from current state-of-the-art MDMs by roughly 2-3x on standard coding and math reasoning benchmarks without loss in performance. We also validate that the same procedure works well on smaller reasoning tasks, including maze navigation and Sudoku, which ARMs often struggle with.
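To make the entropy-bounded idea in the abstract concrete, here is a minimal sketch of one adaptive multi-token unmasking step, assuming PyTorch. The function name, the greedy argmax decode, and the exact budget semantics are illustrative assumptions, not the authors' EB-Sampler implementation (see the paper and supplementary material for the actual procedure).

```python
import torch
import torch.nn.functional as F

def entropy_bounded_unmask_step(logits, masked_positions, entropy_budget=1.0):
    """Illustrative entropy-bounded unmasking step (not the official EB-Sampler).

    Given per-position logits from one forward pass of a masked diffusion model,
    greedily unmask the most confident masked positions while their cumulative
    predictive entropy stays within `entropy_budget`. Returns the positions to
    unmask and their decoded tokens.
    """
    probs = F.softmax(logits[masked_positions], dim=-1)                    # (M, V)
    entropies = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1)   # (M,)

    # Sort masked positions from most to least confident (lowest entropy first).
    order = torch.argsort(entropies)
    cumulative = torch.cumsum(entropies[order], dim=0)

    # Always unmask at least one token; keep adding tokens while the
    # cumulative entropy stays within the user-specified budget.
    k = max(1, int((cumulative <= entropy_budget).sum().item()))
    chosen = order[:k]

    positions = masked_positions[chosen]
    tokens = probs[chosen].argmax(dim=-1)  # greedy decode; sampling also possible
    return positions, tokens
```

With a small budget this reduces to one token per model call (standard sampling); a larger budget unmasks many near-deterministic tokens at once, which is where the speedup comes from.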
Supplementary Material: zip
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 13577