Improving Discrete Diffusion Unmasking Policies Beyond Explicit Reference Policies

ICLR 2026 Conference Submission 5629 Authors

15 Sept 2025 (modified: 22 Nov 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: discrete diffusion models, masked diffusion models, reinforcement learning
TL;DR: We learn an unmasking policy for masked diffusion models via a KL-regularized MDP (optimized with GRPO) that comes with convergence and KL-tightening guarantees.
Abstract: Masked diffusion models (MDMs) have recently emerged as a novel framework for language modeling. MDMs generate text by iteratively denoising masked sequences, filling in [MASK] tokens step by step. Although MDMs support any-order sampling, performance is highly sensitive to the choice of which position to unmask next. Prior work typically relies on rule-based schedules (e.g., max-confidence, max-margin), which provide only ad hoc improvements. In contrast, we replace these heuristics with a learned scheduler. Specifically, we cast denoising as a KL-regularized Markov decision process (MDP) with an explicit reference policy and optimize a regularized objective that admits policy-improvement and convergence guarantees under standard assumptions. We prove that the policy optimized under this framework generates samples that match the data distribution more closely than heuristic schedules do. Empirically, across four benchmarks, our learned policy consistently outperforms max-confidence: for example, on SUDOKU, where unmasking order is critical, it yields a 22% gain over random order and a 12% gain over max-confidence.
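
To make the training objective concrete, here is a minimal sketch of one plausible instantiation of the KL-regularized MDP described above; the notation (state $s_t$, terminal reward $r$, regularization weight $\beta$) is assumed for illustration and is not taken from the paper. The state $s_t$ is the partially unmasked sequence after $t$ denoising steps, the action is which masked position to reveal next, and $\pi_{\mathrm{ref}}$ is a fixed reference scheduler such as max-confidence:

\[
J(\pi_\theta)
= \mathbb{E}_{\tau \sim \pi_\theta}\!\left[
    r(s_T)
    \;-\; \beta \sum_{t=0}^{T-1}
    \mathrm{KL}\!\big(\pi_\theta(\cdot \mid s_t)\,\big\|\,\pi_{\mathrm{ref}}(\cdot \mid s_t)\big)
  \right],
\qquad \beta > 0.
\]

In this form, $\beta$ controls how closely the learned scheduler must track the reference: larger values keep it near the heuristic schedule, while smaller values let it trade KL proximity against higher reward on the finished sample.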
Primary Area: generative models
Submission Number: 5629