Abstract: We investigate the theoretical foundations of classifier-free guidance (CFG). CFG is the dominant method of conditional sampling for text-to-image diffusion models, yet unlike other aspects of diffusion, it remains on shaky theoretical footing. In this paper, we show that CFG interacts differently with DDPM and DDIM, and that neither sampler combined with CFG generates the gamma-powered distribution $p(x|c)^\gamma p(x)^{1-\gamma}$. We then clarify the behavior of CFG by showing that it is a kind of predictor-corrector method (Song et al. 2020) that alternates between denoising and sharpening, which we call predictor-corrector guidance (PCG). We prove that in the SDE limit, CFG is actually equivalent to combining a DDIM predictor for the conditional distribution with a Langevin dynamics corrector for a gamma-powered distribution (with a carefully chosen gamma). Our work thus provides a lens for theoretically understanding CFG by embedding it in a broader design space of principled sampling methods.
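The CFG update the abstract refers to is, at each denoising step, a linear blend of the model's conditional and unconditional noise predictions. Below is a minimal illustrative sketch of that blend with scalar toy values standing in for network outputs; the function name `cfg_eps` and the toy numbers are hypothetical, not from the paper.

```python
def cfg_eps(eps_cond, eps_uncond, gamma):
    """Classifier-free guidance blend of conditional and unconditional
    noise predictions. gamma = 1 recovers pure conditional sampling;
    gamma > 1 sharpens toward the condition."""
    return eps_uncond + gamma * (eps_cond - eps_uncond)

# Toy scalars standing in for the two network predictions at one step.
eps_c, eps_u = 1.0, 0.2
print(cfg_eps(eps_c, eps_u, 1.0))  # 1.0 (reduces to the conditional prediction)
print(cfg_eps(eps_c, eps_u, 3.0))  # 2.6 (extrapolated past the conditional)
```

The paper's point is that plugging this blended prediction into DDPM or DDIM does not sample $p(x|c)^\gamma p(x)^{1-\gamma}$, even though the blend looks like the score of that distribution at each noise level.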
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Changes highlighted in blue in the revision.
Second Revision (response to Reviewer AUGV):
* Clarified use of Bayes' Rule below Eq. 2
* Clarified notation in Eqs. 5,6,7
* Fixed typos $x_0$ vs $x$ in Eqs. 11 and 14
* Fixed typo "twice" vs "half" below Eqs. 12,13
First Revision (response to Reviewers Zore and tzBY):
* Edited wording about "misconceptions"
* Edited Figure 1 caption
* Removed section "Understanding CFG: The Big Picture"
* Fixed notation
* Edited Remark 4.1
Assigned Action Editor: ~Kangwook_Lee1
Submission Number: 4779