Keywords: Masked diffusion language models, continuous feedback, code generation
TL;DR: We present soft-masking, a new method that improves masked diffusion language models by blending mask tokens with predictions from previous iterations to better preserve context.
Abstract: Diffusion models have demonstrated strong potential in language modeling, offering various advantages over traditional autoregressive approaches.
Their ability to generate and revise entire responses in parallel enables faster generation and built-in self-correction mechanisms.
Most modern diffusion-based language models employ masked diffusion, where decoding proceeds by iteratively making a binary decision at each masked position: either retain the mask or replace it with the predicted token.
However, this binary choice discards valuable predictive information when the mask is retained.
To address this limitation, we introduce \textit{soft-masking (SM)}, a novel method that dynamically blends the embedding of the mask token with the embeddings of the top-$k$ predicted tokens from the previous decoding step, for each retained mask.
This provides the model with a more informative prior, preserving context from earlier computations and allowing partial information about masked tokens to propagate beyond a single step.
We propose a training methodology that adapts a pretrained masked diffusion language model to incorporate SM.
We demonstrate that continued pretraining of a 169M-parameter model with SM improves perplexity and MAUVE scores.
Furthermore, we finetune two state-of-the-art diffusion models, Dream-7B and Dream-Coder-7B, with SM.
SM consistently improves performance across multiple coding benchmarks, particularly in high-throughput settings.
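The abstract describes soft-masking as blending the mask token's embedding with the embeddings of the top-$k$ tokens predicted at the previous decoding step. The exact blending rule is not specified here, so the following is a minimal NumPy sketch under assumed details: a fixed blend weight `alpha` on the mask embedding (the paper says the blend is dynamic) and renormalized softmax mass over the top-$k$ candidates.

```python
import numpy as np

def soft_mask_embedding(mask_emb, logits, emb_table, k=8, alpha=0.5):
    """Blend the mask embedding with top-k predicted token embeddings.

    mask_emb:  (d,) embedding of the [MASK] token
    logits:    (V,) vocabulary logits from the previous decoding step
    emb_table: (V, d) token embedding table
    alpha:     weight kept on the mask embedding (hypothetical fixed
               parameter; the paper's blend is dynamic)
    """
    # Softmax over the previous step's logits.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Keep only the top-k candidates and renormalize their mass.
    topk = np.argsort(probs)[-k:]
    w = probs[topk] / probs[topk].sum()
    # Convex combination of mask embedding and weighted candidate embeddings.
    blended = alpha * mask_emb + (1 - alpha) * (w[:, None] * emb_table[topk]).sum(axis=0)
    return blended
```

The blended vector replaces the plain mask embedding as input at the next step, so partial information about a still-masked position survives across iterations instead of being discarded.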
Supplementary Material: zip
Primary Area: generative models
Submission Number: 18632