Parallel-in-Time Diffusion Model Sampling

22 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: diffusion, sampling, speed, inference
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We introduce parallel-in-time sampling for diffusion models.
Abstract: Diffusion models have emerged as a powerful new class of generative models that produce high-quality data by reversing a stochastic differential equation. However, their inference is slow because sampling proceeds one timestep at a time. To speed up inference, various samplers have been proposed in the past two years that use higher-order differential equation solvers, such as Heun's method and Runge-Kutta methods. These methods, however, still perform sampling sequentially, which limits their efficiency. In this paper, we propose a new method for parallel-in-time sampling of diffusion models, inspired by classical parallel-in-time integration techniques. Our method can be applied to any pre-trained diffusion model without modifying its architecture or finetuning it. We show that it achieves significant speedups over sequential sampling across a range of diffusion models and datasets, while maintaining comparable or better sample quality. The method also extends to other differential-equation-based generative models, such as continuous normalizing flows.
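To make the idea concrete, below is a minimal sketch of one classical parallel-in-time scheme, Picard iteration, applied to a toy probability-flow-style ODE. This is an illustration of the general technique the abstract alludes to, not the authors' algorithm: the `drift` function is a hypothetical stand-in for a pretrained score network, and in practice each sweep's drift evaluations would be one batched network call across all timesteps.

```python
import numpy as np

def drift(x, t):
    # Hypothetical stand-in for the learned drift of a probability-flow ODE
    # dx/dt = f(x, t); a real sampler would call the score network here.
    return -0.5 * x

def sequential_euler(x0, ts):
    # Baseline: one Euler step at a time, strictly sequential.
    x = x0
    for i in range(len(ts) - 1):
        x = x + (ts[i + 1] - ts[i]) * drift(x, ts[i])
    return x

def picard_parallel(x0, ts, iters=20):
    # Parallel-in-time sketch: keep a guess for the WHOLE trajectory and
    # refine it with Picard sweeps. Within each sweep, the drift is
    # evaluated at every timestep independently (parallelizable), then the
    # trajectory is updated via a cumulative-sum quadrature of x = x0 + ∫f.
    n = len(ts)
    xs = np.repeat(x0[None, :], n, axis=0)      # initial guess: constant path
    hs = np.diff(ts)[:, None]
    for _ in range(iters):
        fs = drift(xs[:-1], ts[:-1, None])      # all evaluations at once
        xs = np.concatenate([x0[None, :], x0 + np.cumsum(hs * fs, axis=0)])
    return xs[-1]

x0 = np.ones(4)
ts = np.linspace(0.0, 1.0, 33)
seq = sequential_euler(x0, ts)
par = picard_parallel(x0, ts)
print(np.max(np.abs(seq - par)))  # sweeps converge to the sequential solution
```

Each sweep is cheap in wall-clock time when the drift evaluations run in parallel, and for a Lipschitz drift the iteration converges to exactly the sequential Euler trajectory, often in far fewer sweeps than there are timesteps.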
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6134