Keywords: text diffusion models, non-autoregressive text generation
TL;DR: We develop a discrete diffusion language model that, for the first time, matches or outperforms autoregressive counterparts in generative quality as well as on reasoning and planning/search tasks
Abstract: We present a discrete diffusion-based generative model for text generation using Glauber dynamics from statistical physics. Our main insight is that instead of training a discrete state-space diffusion model with a uniform transition kernel as the forward process, one can define an "energy function" based on pretrained causal/masked language models; treating this energy as inducing the stationary distribution significantly improves the quality of the generated text. Using UL2 as our pretrained model, which we modify and incorporate into our diffusion pipeline, we obtain significantly better perplexities than prior diffusion-based text generative models and are competitive with the perplexities of GPT-2-medium and GPT-2-large at comparable model sizes. Furthermore, our models outperform prior diffusion models and GPT-2-style autoregressive models on some zero-shot common-sense reasoning tasks as well as some planning/search tasks.
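To illustrate the sampling mechanism the abstract refers to, here is a minimal, self-contained sketch of single-site Glauber dynamics over discrete sequences. The `toy_energy` function is a hypothetical stand-in for the paper's LM-derived energy; each step resamples one position from the conditional distribution the energy induces, so the chain's stationary distribution is proportional to exp(-energy).

```python
import math
import random

def glauber_step(seq, vocab, energy, rng):
    """One Glauber update: pick a random position and resample its token
    from the conditional distribution induced by the energy function."""
    i = rng.randrange(len(seq))
    # Score each candidate token at position i (lower energy = more likely).
    logits = []
    for tok in vocab:
        cand = seq[:i] + [tok] + seq[i + 1:]
        logits.append(-energy(cand))
    m = max(logits)  # subtract max for numerical stability
    weights = [math.exp(l - m) for l in logits]
    new_tok = rng.choices(vocab, weights=weights, k=1)[0]
    return seq[:i] + [new_tok] + seq[i + 1:]

# Toy energy (illustrative only, NOT the paper's LM-based energy):
# penalize adjacent repeated tokens.
def toy_energy(seq):
    return sum(1.0 for a, b in zip(seq, seq[1:]) if a == b)

rng = random.Random(0)
seq = [0, 0, 0, 0]          # high-energy initial sequence
for _ in range(200):
    seq = glauber_step(seq, [0, 1, 2], toy_energy, rng)
print(seq, toy_energy(seq))  # low-energy samples dominate at stationarity
```

In the paper's setting, the energy would instead be computed from pretrained causal/masked language model scores, and the same one-site resampling loop drives generation.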
Supplementary Material: zip
Primary Area: generative models
Submission Number: 22619