TL;DR: We propose Target Concrete Score Matching as a holistic framework for discrete diffusion modeling.
Abstract: Discrete diffusion is a promising framework for modeling and generating discrete data.
In this work, we present Target Concrete Score Matching (TCSM), a novel and versatile objective for training and fine-tuning discrete diffusion models.
TCSM is broadly applicable. It supports pre-training discrete diffusion models directly from data samples, and many existing discrete diffusion approaches emerge naturally as special cases of the more general TCSM framework.
Furthermore, the same TCSM objective extends to post-training of discrete diffusion models: fine-tuning with reward functions or preference data, and distilling knowledge from pre-trained autoregressive models.
These new capabilities stem from the core idea of TCSM: estimating the concrete score of the target distribution, which is defined over the original (clean) data space. This enables seamless integration with reward functions and pre-trained models, which inherently operate in the clean data space rather than in the noisy intermediate spaces of the diffusion process.
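As an illustrative sketch (our notation, not taken from the submission): writing $p_0$ for the clean-data distribution and $\mathcal{N}(x)$ for the neighbors of a sequence $x$ (e.g., all single-token edits), the concrete score of the target distribution collects the probability ratios
\[
  c_{p_0}(x)_y \;=\; \frac{p_0(y)}{p_0(x)}, \qquad y \in \mathcal{N}(x),
\]
and, under this reading, TCSM trains a network $s_\theta$ so that $s_\theta(x)_y \approx c_{p_0}(x)_y$. Because these ratios involve only $p_0$ and not the noisy marginals $p_t$ of the diffusion process, quantities that live in the clean data space, such as rewards or pre-trained autoregressive likelihoods, can be plugged in directly.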
Our experiments on language modeling tasks demonstrate that TCSM matches or surpasses current methods, and its applicability to both pre-training and post-training offers greater flexibility and sample efficiency.
Lay Summary: Discrete diffusion models are a promising approach for generating structured data like text. Our research introduces a new, comprehensive framework called Target Concrete Score Matching (TCSM) for training and adapting these models. The core idea is to train the model by focusing directly on the properties of the final, high-quality data, rather than on the noisy, intermediate steps inherent to the diffusion process. This perspective provides a single, principled objective that unifies various existing methods and, more importantly, can be used for both initial pre-training and subsequent post-training and distillation. Because our method operates in the "clean" data space, it allows for the seamless integration of external information, such as reward functions for task optimization or user preferences for alignment—a critical capability that was challenging for previous discrete diffusion methods. Our work establishes a more versatile and efficient foundation for developing these powerful discrete diffusion models.
Primary Area: Deep Learning->Generative Models and Autoencoders
Keywords: discrete diffusion, concrete score, discrete flow matching
Submission Number: 1646