Convergence in KL and Rényi Divergence of the Unadjusted Langevin Algorithm Using Estimated Score

Published: 29 Nov 2022, Last Modified: 05 May 2023
SBM 2022 Poster
Readers: Everyone
Keywords: Score-based Generative Modeling, Diffusion Model, Langevin Dynamics
TL;DR: We prove convergence of the Unadjusted Langevin Algorithm (ULA) for sampling using an estimated score under a minimal sufficient assumption on the error of the score estimator.
Abstract: We study the Inexact Langevin Algorithm (ILA) for sampling with an estimated score function when the target distribution satisfies a log-Sobolev inequality (LSI), motivated by Score-based Generative Modeling (SGM). We prove convergence in Kullback-Leibler (KL) divergence under a sufficient assumption on the error of the score estimator, which we call the bounded Moment Generating Function (MGF) assumption. Our assumption is weaker than the previous assumption, which requires the error to have finite $L^\infty$ norm everywhere. Under the $L^\infty$ error assumption, we also prove convergence in R\'enyi divergence, which is stronger than KL divergence. On the other hand, under an $L^p$ error assumption for any $1 \leq p < \infty$, which is weaker than the bounded MGF assumption, we show that the stationary distribution of Langevin dynamics with an $L^p$-accurate score estimator can be far from the desired distribution; thus an $L^p$-accurate score estimator cannot guarantee convergence. Our results suggest that controlling the mean squared error, which is the form of loss function commonly used when estimating the score with a neural network, is not enough to guarantee that the sampling algorithm converges; to obtain a theoretical guarantee, we need stronger control over the error in score matching. Although the bounded MGF assumption requires an exponentially decaying error probability, we give an example demonstrating that it is achievable when using a Kernel Density Estimation (KDE)-based score estimator.
Student Paper: Yes
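
The abstract studies the Inexact Langevin Algorithm: the standard ULA iteration in which the true score $\nabla \log \pi$ of the target is replaced by a learned estimate. The sketch below is illustrative only and not the paper's code; the function and parameter names (`score_estimate`, `step_size`, `n_steps`) are assumptions, and the KDE-based estimator and bounded MGF assumption discussed above are not reproduced here.

```python
import numpy as np

def inexact_langevin_algorithm(score_estimate, x0, step_size, n_steps, rng=None):
    """Run the ULA iteration driven by an estimated score s_hat ~ grad log pi.

    Illustrative sketch only; names and defaults are hypothetical, not from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        # ULA step with estimated score:
        # x_{k+1} = x_k + h * s_hat(x_k) + sqrt(2h) * N(0, I)
        x = x + step_size * score_estimate(x) + np.sqrt(2.0 * step_size) * noise
    return x

# Toy usage: the target is a standard Gaussian, whose true score is -x;
# a slightly biased estimator stands in for a learned score.
if __name__ == "__main__":
    approx_score = lambda x: -x + 0.01  # inaccurate score estimate (hypothetical)
    sample = inexact_langevin_algorithm(approx_score, x0=np.zeros(2),
                                        step_size=1e-2, n_steps=5000)
    print(sample)
```

In this toy setting the estimation error is bounded everywhere, so it satisfies the $L^\infty$ (and hence bounded MGF) assumption; the paper's point is that merely controlling the mean squared ($L^2$) error of the estimator would not suffice for such a guarantee.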