ADJUSTING THE INDUCTIVE BIAS OF DIFFUSION MODELS

23 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Diffusion Models, Generative Models
TL;DR: Improving FID for image generation by adjusting the inductive bias of diffusion models
Abstract: It has been found empirically that diffusion-based generative models strongly benefit from weighting the score-matching objective in the training process and from redirecting trajectories in the sampling process to more closely match the training distribution. Here we show that a beneficial loss weight arises naturally when the training objective is derived from first principles by enforcing detailed balance between the forward and the reverse diffusion trajectories. We find that deterministic sampling by diffusion models induces a strong bias, favoring features of some training examples while ignoring others. To correct for this strong sampling bias, we introduce an efficient and controllable rejection sampling approach. We achieve a new state-of-the-art FID of 1.42 for CIFAR-10 in a class-conditional setting.
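The abstract names rejection sampling as the mechanism for correcting the sampling bias, but does not describe the paper's specific variant. As a point of reference, here is a minimal textbook rejection-sampling sketch: candidates are drawn from a proposal and each is kept with a caller-supplied acceptance probability. All names (`rejection_sample`, `propose`, `accept_prob`) are illustrative assumptions, not the authors' API.

```python
import random

def rejection_sample(propose, accept_prob, n, max_tries=100_000):
    """Generic rejection sampling (illustrative only, not the paper's method):
    draw candidates from `propose()` and keep each one with probability
    `accept_prob(x)`, a value in [0, 1], until `n` samples are collected."""
    kept = []
    for _ in range(max_tries):
        if len(kept) == n:
            break
        x = propose()
        # Accept the candidate with probability accept_prob(x).
        if random.random() < accept_prob(x):
            kept.append(x)
    return kept

# Toy usage: bias uniform draws on [0, 1] toward larger values
# by accepting x with probability x (target density ~ 2x).
random.seed(0)
samples = rejection_sample(lambda: random.random(), lambda x: x, n=100)
```

In a diffusion context, the acceptance probability would be derived from some measure of how well a generated sample matches the training distribution; the abstract does not specify how the paper constructs it.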
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7949