Keywords: Consistency Models, Diffusion Models, Image Generation
TL;DR: We train a Bidirectional Consistency Model for fast sampling (i.e., generating an image from noise) and its inversion (i.e., generating noise from an image).
Abstract: Diffusion models (DMs) are capable of generating remarkably high-quality samples by iteratively denoising a random vector, a process that corresponds to moving along the probability flow ordinary differential equation (PF ODE).
Interestingly, DMs can also invert an input image to noise by moving backward along the PF ODE, a key operation for downstream tasks such as interpolation and image editing.
However, the iterative nature of this process restricts its speed, hindering its broader application.
Recently, Consistency Models (CMs) have emerged to address this challenge by approximating the integral of the PF ODE, drastically reducing the number of iterations required.
Yet, the absence of an explicit ODE solver complicates the inversion process.
To resolve this, we introduce Bidirectional Consistency Model (BCM), which learns a *single* neural network that enables both *forward and backward* traversal along the PF ODE, efficiently unifying generation and inversion tasks within one framework.
We can train BCM from scratch or tune it using a pre-trained consistency model, which reduces the training cost and increases scalability.
We demonstrate that BCM enables one-step generation and inversion while also allowing the use of additional steps to enhance generation quality or reduce reconstruction error.
We further showcase BCM's capability in downstream tasks, such as interpolation, inpainting, and blind restoration of compressed images.
Notably, when the number of function evaluations (NFE) is constrained, BCM surpasses domain-specific restoration methods, such as I$^2$SB and Palette, in a fully zero-shot manner, offering an efficient alternative for inversion problems.
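The core idea above — a single network that jumps along the PF ODE in either direction — can be illustrated with a toy, closed-form example. This is not the paper's model: we use a simple linear ODE dx/dt = x/t, whose trajectories satisfy x(t_end) = x(t_start) · t_end/t_start, so one map handles both one-step generation (noise → data) and one-step inversion (data → noise). All names and the noise range are illustrative assumptions.

```python
import math

# Toy stand-in for a bidirectional consistency map (NOT the paper's
# trained network): for the linear ODE dx/dt = x/t, the point on the
# same trajectory at any other noise level has a closed form, so a
# single function traverses the ODE forward or backward in one step.
def f(x, t_start, t_end):
    """Jump along the toy ODE trajectory from t_start to t_end."""
    return x * (t_end / t_start)

T_MAX, T_MIN = 80.0, 0.002          # assumed EDM-style noise range

x_noise = 5.0                       # a "noise" sample at t = T_MAX
x_image = f(x_noise, T_MAX, T_MIN)  # one-step generation: noise -> data
x_back = f(x_image, T_MIN, T_MAX)   # one-step inversion, same map

# Round-tripping recovers the starting noise exactly on this toy ODE.
assert math.isclose(x_back, x_noise)
```

In BCM the analogous map is learned by a neural network and the round trip is only approximate, which is why the abstract notes that extra steps can reduce reconstruction error.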
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5016