Keywords: diffusion distillation, distribution matching distillation, optimal transport, image-to-image translation
TL;DR: We combine the Distribution Matching Distillation loss with the transport cost between the generator's input and output to solve the unpaired image-to-image translation problem.
Abstract: Diffusion-based generative models achieve state-of-the-art results in mode coverage and generation quality but suffer from inefficient sampling. Recently introduced diffusion distillation techniques address this issue by transforming the original multi-step model into a one-step generator with approximately the same output distribution. Among these methods, Distribution Matching Distillation (DMD) offers a suitable framework for training general-form one-step generators, applicable beyond unconditional generation. In this paper, we propose a modification of DMD, called Regularized Distribution Matching Distillation (RDMD), which applies to the unpaired image-to-image (I2I) translation problem. To achieve this, we regularize the generator objective from DMD with the transport cost between its input and output. We theoretically validate the method's applicability by establishing its connection with optimal transport. Moreover, we demonstrate its empirical performance on several translation tasks, including 2D examples and I2I between different image datasets, where it performs on par with or better than multi-step diffusion baselines.
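For illustration only, a plausible form of the regularized generator objective implied by the TL;DR and abstract: the DMD loss plus a weighted transport cost between the generator's input and output. Here $c$ (the transport cost), $\lambda$ (the regularization weight), and $p_{\text{src}}$ (the source domain distribution) are our notation for this sketch, not symbols taken from the paper.

% Sketch (our notation): DMD loss regularized by the input-output transport cost
\mathcal{L}_{\text{RDMD}}(G)
  \;=\; \mathcal{L}_{\text{DMD}}(G)
  \;+\; \lambda \, \mathbb{E}_{x \sim p_{\text{src}}}\!\left[ c\big(x,\, G(x)\big) \right]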
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 10914