Diffusion Models without Classifier-free Guidance

ICLR 2026 Conference Submission 15888 Authors

Published: 19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Diffusion Models, Classifier-free guidance
TL;DR: This work proposes Model-guidance (MG), which removes the need for Classifier-free guidance in diffusion models and achieves state-of-the-art performance.
Abstract: We introduce Model-guidance (MG), a novel training objective for diffusion models that addresses the limitations of the widely used Classifier-free Guidance (CFG). Our approach directly incorporates the posterior probability of conditions into training, allowing the model itself to act as an implicit classifier. MG is conceptually inspired by CFG yet remains simple and effective, serving as a plug-and-play module compatible with existing architectures. Our method significantly accelerates training and doubles inference speed by requiring only a single forward pass per denoising step. MG achieves generation quality on par with, or surpassing, state-of-the-art CFG-based diffusion models. Extensive experiments across multiple models and datasets demonstrate both the efficiency and scalability of our approach. Notably, MG achieves a state-of-the-art FID of 1.34 on the ImageNet 256×256 benchmark.
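The abstract's claim of doubled inference speed follows from a simple accounting of forward passes: standard CFG evaluates the denoiser twice per step (once conditional, once unconditional) and combines the two predictions, whereas a model trained with guidance absorbed into its objective samples with a single conditional pass. The sketch below illustrates only this cost contrast; the `DummyDenoiser`, the guidance weight `w`, and the step functions are hypothetical stand-ins, not the paper's actual implementation.

```python
class DummyDenoiser:
    """Stand-in for a diffusion network eps(x, t, c); counts forward passes."""

    def __init__(self):
        self.calls = 0

    def __call__(self, x, t, c):
        self.calls += 1
        # Toy prediction: scale the input by a condition-dependent factor.
        scale = 1.0 if c is not None else 0.5
        return [v * scale for v in x]


def cfg_step(model, x, t, c, w=3.0):
    """Classifier-free guidance: two forward passes per denoising step,
    combined as eps_uncond + w * (eps_cond - eps_uncond)."""
    eps_c = model(x, t, c)
    eps_u = model(x, t, None)  # unconditional pass (null condition)
    return [u + w * (cv - u) for cv, u in zip(eps_c, eps_u)]


def single_pass_step(model, x, t, c):
    """Guidance-free sampling as described in the abstract: the trained
    model already accounts for guidance, so one forward pass suffices."""
    return model(x, t, c)


x, t, c = [0.1, -0.2, 0.3], 10, 1

m = DummyDenoiser()
cfg_step(m, x, t, c)
cfg_calls = m.calls  # → 2

m.calls = 0
single_pass_step(m, x, t, c)
mg_calls = m.calls  # → 1

print(cfg_calls, mg_calls)  # → 2 1
```

With an equal number of denoising steps, halving the forward passes per step halves the network compute at sampling time, which is the source of the "doubles inference speed" claim.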
Supplementary Material: zip
Primary Area: generative models
Submission Number: 15888