Test-Time Anchoring for Discrete Diffusion Posterior Sampling

02 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: inverse problems, generative modeling, discrete diffusion, masked diffusion, image editing
TL;DR: We introduce Anchored Posterior Sampling (APS) for masked diffusion foundation models, built on two key innovations: (1) quantized expectation for gradient-like guidance in discrete embedding space, and (2) anchored remasking for adaptive decoding.
Abstract: We study posterior sampling with pretrained discrete diffusion foundation models, aiming to recover images from noisy measurements without retraining task-specific models. While diffusion models have achieved remarkable success in generative modeling, most advances rely on continuous Gaussian diffusion. In contrast, discrete diffusion offers a unified framework for jointly modeling categorical data such as text and images. Beyond unification, discrete diffusion provides faster inference, finer control, and principled training-free Bayesian inference, making it particularly well suited for posterior sampling. However, existing approaches to discrete diffusion posterior sampling face severe challenges: derivative-free guidance yields sparse signals, continuous relaxations limit applicability, and split Gibbs samplers suffer from the curse of dimensionality. To overcome these limitations, we introduce **Anchored Posterior Sampling (APS)** for *masked diffusion* foundation models, built on two key innovations: *quantized expectation* for gradient-like guidance in discrete embedding space, and *anchored remasking* for adaptive decoding. Our approach achieves state-of-the-art performance among discrete diffusion samplers across linear and nonlinear inverse problems on standard benchmarks.
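To make the two components concrete, here is a minimal PyTorch sketch of one APS-style update. Everything in it is illustrative: the `model`, `embed`, `decode`, `A`, `mask_id`, and the confidence-based anchoring schedule are assumptions standing in for the paper's actual architecture and algorithm, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def aps_step(tokens, model, embed, decode, A, y, mask_id, step_size=1.0, n_anchor=32):
    """One guided decoding step: quantized-expectation guidance + anchored remasking.

    Assumed (hypothetical) interfaces: model(tokens) -> (L, V) logits for a masked
    diffusion model; embed is a (V, D) token-embedding table; decode maps (L, D)
    embeddings to pixel space differentiably; A is the measurement operator; y is
    the noisy measurement; mask_id marks still-masked positions.
    """
    masked = tokens == mask_id                        # bool (L,), positions still masked
    with torch.no_grad():
        probs = F.softmax(model(tokens), dim=-1)      # (L, V) denoiser token posteriors

    # Quantized expectation: the expected embedding is differentiable even though
    # the tokens are not, so a data-consistency loss can be backpropagated through it.
    p = probs.clone().requires_grad_(True)
    exp_emb = p @ embed                               # (L, D) expected embeddings
    loss = ((A(decode(exp_emb)) - y) ** 2).sum()      # measurement-consistency loss
    (grad,) = torch.autograd.grad(loss, p)

    guided = (probs + 1e-9).log() - step_size * grad  # gradient-like guidance on logits
    proposals = guided.argmax(dim=-1)                 # quantize back to discrete tokens

    # Anchored remasking: commit only the most confident masked positions this
    # step; the rest stay masked and are decoded adaptively in later steps.
    conf = guided.softmax(dim=-1).max(dim=-1).values
    conf = torch.where(masked, conf, torch.full_like(conf, float("-inf")))
    anchors = conf.topk(min(n_anchor, int(masked.sum()))).indices
    out = tokens.clone()
    out[anchors] = proposals[anchors]
    return out
```

The point the sketch tries to capture is that the expectation over token embeddings is differentiable even though individual tokens are not, so measurement gradients can steer the logits before quantizing back to the vocabulary; anchoring then commits only the most confident positions per step, leaving the rest masked for later refinement.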
Primary Area: generative models
Submission Number: 1087