Constrained Langevin Algorithms with L-mixing External Random Variables

Published: 31 Oct 2022, Last Modified: 27 Dec 2022
NeurIPS 2022 Accept
Keywords: Langevin algorithms, L-mixing processes, Gradient descent methods, Non-convex optimization, Non-asymptotic analysis, Markov Chain Monte Carlo sampling
Abstract: Langevin algorithms are gradient descent methods augmented with additive noise, and are widely used in Markov Chain Monte Carlo (MCMC) sampling, optimization, and machine learning. In recent years, the non-asymptotic analysis of Langevin algorithms for non-convex learning has been extensively explored. For constrained problems with non-convex losses over a compact convex domain with IID data variables, the projected Langevin algorithm achieves a deviation of $O(T^{-1/4} (\log T)^{1/2})$ from its target distribution \cite{lamperski2021projected} in $1$-Wasserstein distance. In this paper, we obtain a deviation of $O(T^{-1/2} \log T)$ in $1$-Wasserstein distance for non-convex losses with $L$-mixing data variables and polyhedral constraints (which are not necessarily bounded). This improves on the previous bound for constrained problems and matches the best-known bound for unconstrained problems.
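For intuition, the update described in the abstract combines a stochastic gradient step, injected Gaussian noise, and a Euclidean projection onto the constraint set. The sketch below is illustrative only and not the paper's stated algorithm; the step size eta, inverse temperature beta, gradient grad_f, and the box constraint (a simple special case of a polyhedral set) are assumed placeholders.

```python
import numpy as np

# Illustrative sketch of a projected Langevin iteration on a box constraint
# (a simple polyhedral set). The names eta (step size), beta (inverse
# temperature), and grad_f are placeholders, not taken from the paper.

def project_box(x, lower, upper):
    """Euclidean projection onto the box {x : lower <= x <= upper}."""
    return np.clip(x, lower, upper)

def projected_langevin(grad_f, x0, data_stream, eta=1e-3, beta=1.0,
                       lower=-1.0, upper=1.0, rng=None):
    """Projected Langevin: gradient step + Gaussian noise + projection."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    for z in data_stream:                     # z: (possibly dependent) data variable
        noise = rng.standard_normal(x.shape)
        x = x - eta * grad_f(x, z) + np.sqrt(2.0 * eta / beta) * noise
        x = project_box(x, lower, upper)      # project back onto the constraint set
    return x

# Example: approximately sample from exp(-beta * ||x||^2 / 2) restricted to [-1, 1]^2
if __name__ == "__main__":
    grad = lambda x, z: x                     # gradient of f(x) = ||x||^2 / 2
    stream = (None for _ in range(10_000))    # IID placeholder "data"
    print(projected_langevin(grad, np.zeros(2), stream))
```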