Keywords: Diffusion Model, Diffusion Distillation, One-step Generation
TL;DR: This paper proposes an efficient and fast distillation method for diffusion models by introducing the convergence trajectory.
Abstract: Accelerating the sampling of diffusion models remains a significant challenge. Recent score distillation methods distill a heavy teacher model into a one-step student generator, which is optimized by computing the difference between two score functions evaluated on samples generated by the student.
However, a score mismatch arises in the early stage of score distillation, because existing methods use only the converged endpoint of the pre-trained diffusion model as the teacher, overlooking the convergence trajectory between the student generator and the teacher model.
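For context, score distillation of this kind is typically driven by a gradient of roughly the following form (the notation here is a standard formulation added for illustration, not taken from the paper):
$$\nabla_\theta \mathcal{L}(\theta) \approx \mathbb{E}_{z,\,t,\,\epsilon}\!\left[ w(t)\,\big(s_{\phi}(x_t, t) - s_{\psi}(x_t, t)\big)\,\frac{\partial x_t}{\partial \theta} \right], \qquad x_t = \alpha_t\, G_\theta(z) + \sigma_t\,\epsilon,$$
where $G_\theta$ is the one-step student generator, $s_\phi$ is the frozen teacher score, $s_\psi$ is an auxiliary score network fitted to the student's current samples, and $w(t)$ is a time-dependent weight. The mismatch above arises because $s_\phi$ is reliable only near the teacher's converged distribution, whereas early student samples lie far from it.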
To address this issue, we extend the score distillation process by introducing the entire convergence trajectory of the teacher model and propose $\textbf{Dis}$tribution $\textbf{Back}$tracking Distillation ($\textbf{DisBack}$). DisBack is composed of two stages: $\textit{Degradation Recording}$ and $\textit{Distribution Backtracking}$.
$\textit{Degradation Recording}$ is designed to obtain the convergence trajectory by recording the degradation path from the pre-trained teacher model to the untrained student generator.
The degradation path implicitly represents the intermediate distributions between the teacher and the student, and its reverse can be viewed as the convergence trajectory from the student generator to the teacher model.
Then $\textit{Distribution Backtracking}$ trains the student generator to backtrack the intermediate distributions along the path to approximate the convergence trajectory of the teacher model.
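A minimal structural sketch of these two stages in PyTorch-style Python follows; the function names, signatures, and checkpointing scheme are illustrative assumptions, not the authors' implementation.
```python
# Illustrative sketch of the two DisBack stages as described above.
# All names and signatures are assumptions for exposition only.
import copy
from typing import Callable, List

import torch.nn as nn


def degradation_recording(
    teacher: nn.Module,
    degrade_step: Callable[[nn.Module], None],  # one update moving the copy toward the untrained student's output distribution
    num_steps: int,
    record_every: int,
) -> List[dict]:
    """Stage 1: fine-tune a copy of the teacher toward the untrained student
    generator and record intermediate checkpoints (the degradation path)."""
    degrading = copy.deepcopy(teacher)
    path = [copy.deepcopy(teacher.state_dict())]
    for step in range(num_steps):
        degrade_step(degrading)
        if (step + 1) % record_every == 0:
            path.append(copy.deepcopy(degrading.state_dict()))
    # Reversing the degradation path gives the convergence trajectory
    # from the student generator back to the teacher model.
    path.reverse()
    return path


def distribution_backtracking(
    student: nn.Module,
    trajectory: List[dict],
    make_teacher: Callable[[], nn.Module],  # builds a model with the teacher's architecture
    distill_step: Callable[[nn.Module, nn.Module], None],  # one score-distillation update of the student against a given teacher
    steps_per_node: int,
) -> nn.Module:
    """Stage 2: distill the student against each intermediate distribution in turn,
    so it follows the convergence trajectory instead of only the teacher endpoint."""
    for state in trajectory:
        node_teacher = make_teacher()
        node_teacher.load_state_dict(state)
        node_teacher.eval()
        for _ in range(steps_per_node):
            distill_step(student, node_teacher)
    return student
```
The heavy lifting (the degradation update and the score-distillation update) is passed in as callables, since the abstract only specifies the two-stage structure, not the concrete losses.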
Extensive experiments show that DisBack converges faster and better than existing distillation methods while achieving comparable or better generation performance, with an FID score of 1.38 on the ImageNet 64$\times$64 dataset.
DisBack is easy to implement and can be generalized to existing distillation methods to boost performance.
Supplementary Material: zip
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 886