Keywords: Generative models, Diffusion models, Proximal operators, Backward discretization
TL;DR: This paper introduces Proximal Diffusion Models (ProxDM), derived from a backward discretization of the reverse-time SDE and based on learned proximal operators, achieving provably faster sampling complexity and empirically much faster convergence.
Abstract: Diffusion models have quickly become some of the most popular and powerful generative models for high-dimensional data. The key insight that enabled their development was the realization that access to the score---the gradient of the log-density at different noise levels---allows for sampling from data distributions by solving a reverse-time stochastic differential equation (SDE) via forward discretization, and that popular denoisers provide unbiased estimators of this score. In this paper, we demonstrate that an alternative, backward discretization of these SDEs, using proximal maps in place of the score, leads to theoretical and practical benefits. We leverage recent results in _proximal matching_ to learn proximal operators of the log-density and, with them, develop Proximal Diffusion Models (`ProxDM`). Theoretically, we prove that $\widetilde{\mathcal O}(d/\sqrt{\varepsilon})$ steps suffice for the resulting discretization to generate an $\varepsilon$-accurate distribution w.r.t. the KL divergence.
Empirically, we show that two variants of `ProxDM` achieve significantly faster convergence within just a few sampling steps compared to conventional score-matching methods.
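To make the backward-discretization idea concrete, here is a minimal sketch of why an implicit (backward) step naturally replaces the score with a proximal map of the negative log-density. This is an illustration only, not the exact `ProxDM` update, which also accounts for the reverse-SDE drift, injected noise, and time-varying noise levels:

$$x_{k-1} = x_k + h\,\nabla \log p(x_k) \qquad \text{(explicit/forward step: score at the current iterate)}$$

$$x_{k-1} = x_k + h\,\nabla \log p(x_{k-1}) \qquad \text{(implicit/backward step: score at the next iterate)}$$

$$x_{k-1} = \operatorname{prox}_{h(-\log p)}(x_k), \qquad \operatorname{prox}_{\lambda f}(v) = \arg\min_x \Big\{ f(x) + \tfrac{1}{2\lambda}\|x-v\|_2^2 \Big\}$$

The implicit equation is exactly the stationarity condition $\nabla f(x_{k-1}) + \tfrac{1}{h}(x_{k-1}-x_k)=0$ with $f=-\log p$, hence the proximal form; this is the type of operator that proximal matching learns in place of the score.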
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 22362