Accelerating Discrete Diffusion Models with Parallel Sampling

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Generative Models, Discrete Diffusion models, Parallel Computing, Sampling
TL;DR: Parallel-in-time sampling acceleration for the $\tau$-leaping algorithm in discrete diffusion models
Abstract: Discrete diffusion models are widely used for learning and generating discrete distributions. Because the generation process is inherently sequential, accelerating sampling is of significant importance. In this work, we parallelize the mainstream $\tau$-leaping algorithm for absorbing discrete diffusion in a Continuous-Time Markov Chain (CTMC) framework. By leveraging the continuous-time stochastic integral form of the $\tau$-leaping algorithm and the Picard iteration method, we achieve parallel-in-time sampling acceleration. We implement a predictor-corrector structure based on the Markov chain Monte Carlo (MCMC) method to control the additional error, and we prove exponential convergence of our algorithm. We improve the overall time complexity of $\tau$-leaping from ${\mathcal{O}}(d^{2}\log^{2}d)$ to at most ${\mathcal{O}}(d^{3/2}\log^{5/2}d)$. In practice, our accelerated algorithm achieves up to a 5- to 8-fold sampling speedup over traditional time-sequential $\tau$-leaping in wall-clock time on convex, non-convex, and high-dimensional discrete distributions. Our research broadens the scope for applying discrete diffusion models to challenging areas such as molecular structure generation and Large Language Models (LLMs).
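The core idea of the abstract, replacing a sequential sampling loop with a fixed-point iteration that updates all timesteps simultaneously, can be illustrated on a toy deterministic problem. The sketch below is a hypothetical illustration of parallel-in-time Picard iteration on an Euler discretization, not the paper's stochastic $\tau$-leaping algorithm: each Picard sweep evaluates the drift at every timestep at once (vectorized, i.e. parallelizable), and the iterates converge to the sequential solution.

```python
import numpy as np

# Toy drift for dx/dt = -x; the exact flow is x0 * exp(-t).
def f(x):
    return -x

def sequential_euler(x0, h, T):
    """Standard time-sequential Euler loop: T dependent steps."""
    xs = [x0]
    for _ in range(T):
        xs.append(xs[-1] + h * f(xs[-1]))
    return np.array(xs)

def picard_parallel(x0, h, T, sweeps):
    """Parallel-in-time Picard iteration (hypothetical sketch).

    Iterates the fixed-point map
        x_t^{k+1} = x_0 + h * sum_{s<t} f(x_s^k)
    over the whole trajectory at once. Each sweep touches every
    timestep independently, so it parallelizes across time.
    """
    xs = np.full(T + 1, x0, dtype=float)   # initial guess: constant path
    for _ in range(sweeps):
        drift = h * f(xs[:-1])             # all timesteps evaluated at once
        xs = x0 + np.concatenate(([0.0], np.cumsum(drift)))
    return xs

x0, h, T = 1.0, 0.01, 100
seq = sequential_euler(x0, h, T)
par = picard_parallel(x0, h, T, sweeps=20)
print(np.max(np.abs(seq - par)))  # tiny: Picard iterates match the sequential path
```

For a contractive drift, the Picard error after $k$ sweeps is bounded by $(hLT)^k/k!$ times the initial error, so a number of sweeps far smaller than $T$ recovers the sequential trajectory; the paper's predictor-corrector MCMC structure addresses the extra difficulty that $\tau$-leaping involves Poisson jump randomness rather than a deterministic drift.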
Supplementary Material: zip
Primary Area: generative models
Submission Number: 11425