Discrete Interpolants: Unifying the Masked Generative and Discrete Diffusion Models

TMLR Paper6506 Authors

14 Nov 2025 (modified: 22 Nov 2025) · Under review for TMLR · CC BY 4.0
Abstract: In generative modeling, two paradigms have gained traction across a range of applications: next-set-prediction-based Masked Generative Models and next-noise-prediction-based Non-Autoregressive Models, e.g., Diffusion Models. In this work, we propose discrete-state models as a bridge between the two and explore their scalability in the vision domain. First, we conduct an in-depth analysis of both model types in a unified design space, covering timestep independence, noise schedule, temperature, and guidance strength, in a scalable manner. Second, through the lens of generative models, we re-cast typical discriminative tasks, e.g., image segmentation, as an unmasking process from [MASK] tokens on a discrete-state model. This enables various sampling processes, including flexible conditional sampling, by training only once to model the joint distribution. These explorations lead to our framework, Discrete Interpolants, which achieves state-of-the-art or competitive performance relative to previous discrete-state methods on various benchmarks, including ImageNet256, MS COCO, and CC12M, as well as the video datasets FaceForensics and DMLab. In summary, by leveraging [MASK] tokens in discrete-state models, we bridge Masked Generative and Non-Autoregressive Diffusion models, as well as generative and discriminative tasks. Our code will be released.
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Mauricio_Delbracio1
Submission Number: 6506