Improved Sample Complexity Bounds For Diffusion Model Training Without Empirical Risk Minimizer Access
Keywords: Diffusion Models
TL;DR: We obtain state-of-the-art sample complexity bounds for diffusion model training.
Abstract: Diffusion models have demonstrated state-of-the-art performance across vision, language, and scientific domains. Despite their empirical success, prior theoretical analyses of their sample complexity either scale poorly with the input data dimension or rely on unrealistic assumptions such as access to exact empirical risk minimizers. In this work, we provide a principled analysis of score estimation, establishing a sample complexity bound of ${\mathcal{O}}(\epsilon^{-4})$. Our approach leverages a structured decomposition of the score estimation error into statistical, approximation, and optimization errors, enabling us to eliminate the exponential dependence on neural network parameters that arises in prior analyses. To our knowledge, this is the first result to achieve such sample complexity bounds without assuming access to the empirical risk minimizer of the score estimation loss.
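As a rough illustration of the kind of decomposition the abstract describes (the notation $\hat{s}_\theta$, $p_t$, and the three error terms below are illustrative assumptions, not the submission's exact statement), one can bound the score estimation error as

$$
\mathbb{E}_{t,\,x_t}\big\|\hat{s}_\theta(x_t, t) - \nabla_{x_t}\log p_t(x_t)\big\|^2 \;\lesssim\; \underbrace{\varepsilon_{\mathrm{stat}}}_{\text{finite samples}} \;+\; \underbrace{\varepsilon_{\mathrm{approx}}}_{\text{network class}} \;+\; \underbrace{\varepsilon_{\mathrm{opt}}}_{\text{training error}},
$$

where $\hat{s}_\theta$ is the learned score network and $\nabla_{x_t}\log p_t$ is the true score of the noised data distribution at time $t$. Controlling the optimization term directly, rather than assuming it is zero, is what removes the need for an exact empirical risk minimizer.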
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 19128