Keywords: Diffusion Probabilistic Models; Decoupled Diffusion Models
Abstract: This paper proposes decoupled diffusion models (DDMs), featuring a new diffusion paradigm that allows for high-quality unconditional and conditional image generation in fewer than 10 function evaluations. In a nutshell, DDMs decouple the forward image-to-noise mapping into an image-to-zero mapping and a zero-to-noise mapping. Under this framework, we mathematically derive 1) the training objectives, which enable DDMs to learn the noise and image components separately and thereby simplify learning, and 2) the reverse-time sampling formula, based on an analytic transition probability that models the image-to-zero transition. Importantly, because the zero-to-image sampling function is analytic, DDMs can avoid ordinary-differential-equation-based accelerators and instead naturally perform sampling with an arbitrary step size. Under the few-function-evaluation setup, DDMs experimentally yield very competitive performance compared with the state of the art on 1) unconditional image generation, e.g., CIFAR-10 and CelebA-HQ-256, and 2) image-conditioned downstream tasks such as super-resolution, saliency detection, and image inpainting.
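To make the decoupling concrete, the following is a minimal sketch of one possible forward process consistent with the abstract. It assumes the simplest attenuation choice, in which the image component decays linearly to zero over t ∈ [0, 1] while an independent Gaussian component grows; the specific form `x_t = (1 - t) x_0 + sqrt(t) ε` is an illustrative assumption, not necessarily the authors' exact formulation.

```python
import numpy as np

def forward_sample(x0, t, rng):
    """Sketch of a decoupled forward step (assumed form, for illustration):
    - image-to-zero path: (1 - t) * x0, analytic in t, reaching zero at t = 1;
    - zero-to-noise path: sqrt(t) * eps with eps ~ N(0, I), independent of x0.
    """
    eps = rng.standard_normal(x0.shape)
    return (1.0 - t) * x0 + np.sqrt(t) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 4))  # stand-in for a clean image

x_start = forward_sample(x0, 0.0, rng)  # t = 0: exactly the clean image
x_mid = forward_sample(x0, 0.5, rng)    # intermediate t mixes both components
x_end = forward_sample(x0, 1.0, rng)    # t = 1: image term has decayed; pure noise
```

Because the image-to-zero path is a closed-form function of t, a reverse-time sampler built on it can in principle jump between arbitrary time points rather than integrating an ODE step by step, which is the property the abstract credits for few-evaluation sampling.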
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 678