Keywords: diffusion model, initial noise, uncertainty quantification
TL;DR: We find that antithetic initial noise yields negatively correlated samples, enabling improved sample diversity and more accurate estimators.
Abstract: We initiate a systematic study of antithetic initial noise in diffusion models, discovering that pairing each noise sample with its negation consistently produces strongly negatively correlated sample pairs. This phenomenon holds universally across datasets, model architectures, conditional and unconditional sampling, and even other generative models such as VAEs and Normalizing Flows. To explain it, we propose a *symmetry conjecture*: the learned score function is approximately affine antisymmetric (odd symmetry up to a constant shift), which we support with both theoretical analysis and empirical evidence.
This negative correlation leads to substantially more reliable uncertainty quantification with up to 90% narrower confidence intervals. We demonstrate these gains on tasks including estimating downstream statistics and evaluating diffusion inverse solvers. We also provide extensions with randomized quasi-Monte Carlo noise designs for uncertainty quantification, and explore additional applications of the antithetic noise design to improve image editing and diversity. Our framework is training-free, model-agnostic, and adds no runtime overhead. Code is available at \url{https://anonymous.4open.science/r/Antithetic-Noise-in-Diffusion-Models-8B54}.
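A minimal sketch of the paired construction described above, not the released code: `sample_from_noise` and `downstream_statistic` are hypothetical stand-ins for a diffusion sampler and a scalar statistic of interest, and the toy generator is chosen to be nearly odd-symmetric, mirroring the symmetry conjecture. It illustrates how averaging each statistic over a noise pair $(z, -z)$ yields a narrower confidence interval than i.i.d. Monte Carlo at the same sampling budget.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_pairs = 64, 256  # noise dimension, number of antithetic pairs

def sample_from_noise(z):
    # Hypothetical generator standing in for a diffusion sampler run from initial
    # noise z: a dominant odd part plus a small even perturbation.
    return np.tanh(z) + 0.05 * z ** 2

def downstream_statistic(x):
    # Any scalar statistic of a generated sample (e.g. a classifier score).
    return float(x.mean())

# Plain Monte Carlo baseline: 2 * n_pairs independent noise draws.
plain = np.array([
    downstream_statistic(sample_from_noise(rng.standard_normal(d)))
    for _ in range(2 * n_pairs)
])

# Antithetic design: same budget, but noises come in (z, -z) pairs and the two
# statistics are averaged; their negative correlation shrinks the variance.
pair_means = []
for _ in range(n_pairs):
    z = rng.standard_normal(d)
    f_pos = downstream_statistic(sample_from_noise(z))
    f_neg = downstream_statistic(sample_from_noise(-z))
    pair_means.append(0.5 * (f_pos + f_neg))
pair_means = np.array(pair_means)

for name, vals in [("i.i.d. noise", plain), ("antithetic pairs", pair_means)]:
    half_width = 1.96 * vals.std(ddof=1) / np.sqrt(len(vals))
    print(f"{name:17s} mean={vals.mean():.4f}  95% CI half-width={half_width:.4f}")
```

Because the toy generator is close to odd, the odd component of the statistic cancels within each pair, so the antithetic confidence interval is far narrower; with a real diffusion sampler the cancellation is partial, which is the variance reduction the abstract quantifies.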
Primary Area: generative models
Submission Number: 14501