Train for the Worst, Plan for the Best: Understanding Token Ordering in Masked Diffusions

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 Oral · CC BY 4.0
Abstract: In recent years, masked diffusion models (MDMs) have emerged as a promising alternative approach for generative modeling over discrete domains. Compared to autoregressive models (ARMs), MDMs trade off complexity at training time with flexibility at inference time. At training time, they must learn to solve an exponentially large number of infilling problems, but at inference time, they can decode tokens in essentially arbitrary order. In this work we closely examine these two competing effects. On the training front, we theoretically and empirically demonstrate that MDMs indeed train on computationally intractable subproblems compared to their autoregressive counterparts. On the inference front, we show that a suitable strategy for adaptively choosing the token decoding order significantly enhances the capabilities of MDMs, allowing them to sidestep hard subproblems. On logic puzzles like Sudoku, we show that adaptive inference can boost solving accuracy in pretrained MDMs from $<7$\% to $\approx 90$\%, even outperforming ARMs that were explicitly trained via teacher forcing to learn the right order of decoding.
Lay Summary: Standard language models write strictly left-to-right, while newer “masked diffusion” models can fill in blanks in any order; so far, though, they have lagged behind. We pinpoint the bottleneck: during training, they face an exponential number of fill-in-the-mask subproblems, many of which are computationally intractable, so learning is hard. We found that the flaw isn’t in the model itself, but in how we let it answer. At test time, we can choose which blank to reveal first, so we use a simple rule: pick the spot where the model is most confident. This one-line tweak boosts Sudoku accuracy from under 7% to nearly 90% and brings similar gains on Zebra puzzles and text-generation quality. Bottom line: training these models is hard, but smart decoding turns them into powerful, order-agnostic reasoners, with no extra training required.
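The decoding rule described above ("pick the spot where the model is most confident") can be sketched in a few lines. The following is a minimal illustration in PyTorch, not the authors' implementation: the function name adaptive_decode, the mask_id convention, and the assumption that model(tokens) returns per-position logits of shape (seq_len, vocab_size) are all illustrative.

import torch
import torch.nn.functional as F

# Minimal sketch (assumed interface): greedily reveal the masked position
# whose top prediction has the highest probability, one token at a time.
@torch.no_grad()
def adaptive_decode(model, tokens, mask_id):
    tokens = tokens.clone()
    while (tokens == mask_id).any():
        logits = model(tokens)                 # (seq_len, vocab_size), assumed interface
        logits[:, mask_id] = float("-inf")     # never predict the mask token itself
        probs = F.softmax(logits, dim=-1)
        conf, pred = probs.max(dim=-1)         # per-position confidence and argmax token
        conf[tokens != mask_id] = -1.0         # ignore positions already revealed
        pos = int(conf.argmax())               # most confident still-masked position
        tokens[pos] = pred[pos]                # reveal exactly that token, then repeat
    return tokens

# Toy usage with a stand-in network (embedding plus linear head), purely illustrative.
vocab, seq_len, mask_id = 10, 16, 0
net = torch.nn.Sequential(torch.nn.Embedding(vocab, 32), torch.nn.Linear(32, vocab))
decoded = adaptive_decode(net, torch.full((seq_len,), mask_id), mask_id)

The key design choice is that the order of decoding is decided adaptively at inference time, so the same pretrained MDM can sidestep hard subproblems without any retraining.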
Primary Area: Deep Learning->Generative Models and Autoencoders
Keywords: Discrete Diffusion models, Masked Diffusion Models, Diffusion Models, Learning Theory, Inference-Time Strategy
Submission Number: 14095