Beyond Log-Concavity and Score Regularity: Improved Convergence Bounds for Score-Based Generative Models in W2-distance
TL;DR: This work presents a new framework for analyzing Score-based Generative Models (SGMs) in the W2-distance, relaxing strict assumptions like log-concavity and score regularity.
Abstract: Score-based Generative Models (SGMs) aim to sample from a target distribution by learning score functions from samples perturbed by Gaussian noise. Existing convergence bounds for SGMs in the $\mathcal{W}_2$-distance rely on stringent assumptions about the data distribution. In this work, we present a novel framework for analyzing $\mathcal{W}_2$-convergence of SGMs that significantly relaxes traditional assumptions such as log-concavity and score regularity. Leveraging the regularizing properties of the Ornstein–Uhlenbeck (OU) process, we show that weak log-concavity of the data distribution evolves into log-concavity over time. This transition is rigorously quantified through a PDE-based analysis of the Hamilton–Jacobi–Bellman equation governing the log-density of the forward process. Moreover, we establish that the drift of the time-reversed OU process alternates between contractive and non-contractive regimes, mirroring the evolving concavity of the log-density.
Our approach circumvents the need for stringent regularity conditions on the score function and its estimators, relying instead on milder, more practical assumptions. We demonstrate the wide applicability of this framework through explicit computations on Gaussian mixture models, illustrating its potential for broader classes of data distributions.
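For reference, a minimal sketch of the objects named above, assuming the common OU normalization $dX_t = -X_t\,dt + \sqrt{2}\,dB_t$ (the paper's conventions may differ): the forward noising process, its time reversal driven by the score $\nabla \log p_{T-s}$, and the HJB-type equation satisfied by the negative log-density $u_t = -\log p_t$,

$$dX_t = -X_t\,dt + \sqrt{2}\,dB_t, \qquad X_0 \sim p_0 ,$$
$$d\bar{X}_s = \bigl(\bar{X}_s + 2\,\nabla \log p_{T-s}(\bar{X}_s)\bigr)\,ds + \sqrt{2}\,d\bar{B}_s, \qquad \bar{X}_0 \sim p_T ,$$
$$\partial_t u_t = \Delta u_t - |\nabla u_t|^2 + x\cdot\nabla u_t - d, \qquad u_t = -\log p_t ,$$

where $d$ is the ambient dimension. In this convention the reverse drift has Jacobian $I - 2\nabla^2 u_{T-s}$, so it is contractive precisely where $\nabla^2 u_{T-s} \succ \tfrac{1}{2} I$; this is the sense in which the contractive and non-contractive regimes track the concavity of the log-density.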
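To accompany the Gaussian-mixture illustration, here is a minimal numerical sketch (not the paper's computation). It assumes the same normalization $dX_t = -X_t\,dt + \sqrt{2}\,dB_t$, under which $\mathcal{N}(\mu,\sigma^2)$ is pushed to $\mathcal{N}(\mu e^{-t},\, \sigma^2 e^{-2t} + 1 - e^{-2t})$, and uses a hypothetical symmetric two-mode mixture; the minimum curvature of $-\log p_t$ turns positive once the forward process has run long enough.

```python
# Illustrative only: track the minimum curvature of -log p_t under the OU flow
# dX_t = -X_t dt + sqrt(2) dB_t for a 1D two-mode Gaussian mixture.
# Mixture parameters and grid are hypothetical choices, not taken from the paper.
import numpy as np
from scipy.stats import norm

weights = np.array([0.5, 0.5])
means   = np.array([-3.0, 3.0])
stds    = np.array([1.0, 1.0])

def neg_log_density(x, t):
    """-log p_t(x): OU pushes N(mu, s^2) to N(mu e^{-t}, s^2 e^{-2t} + 1 - e^{-2t})."""
    m_t = means * np.exp(-t)
    v_t = stds**2 * np.exp(-2.0 * t) + 1.0 - np.exp(-2.0 * t)
    p = sum(w * norm.pdf(x, loc=m, scale=np.sqrt(v))
            for w, m, v in zip(weights, m_t, v_t))
    return -np.log(p)

x = np.linspace(-8.0, 8.0, 4001)
h = x[1] - x[0]
for t in [0.0, 0.5, 1.0, 1.5, 2.0]:
    u = neg_log_density(x, t)
    curvature = np.gradient(np.gradient(u, h), h)   # finite-difference u''
    print(f"t = {t:4.2f}   min curvature of -log p_t ≈ {curvature[100:-100].min():+.3f}")
```

With these (assumed) parameters, the printed minimum curvature is strongly negative at $t = 0$ (the mixture is only weakly log-concave) and becomes positive once the modes have contracted toward the origin, in line with the log-concavity-in-time picture described in the abstract.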
Lay Summary: This work improves how machines learn to generate realistic data—such as images or simulations—by studying a class of models known as score-based generative models. Previous approaches relied on strict assumptions about the data that often don’t hold in real-world scenarios. We show that these assumptions can be relaxed by leveraging the inherent properties of the algorithm, which naturally makes complex data easier to handle over time. This results in more flexible and applicable theoretical guarantees for these generative models.
Primary Area: Deep Learning->Generative Models and Autoencoders
Keywords: Score-Based Generative Models, Hamilton–Jacobi–Bellman equation, Log-concavity, Convergence Guarantees
Submission Number: 2633