Accelerating Convergence of Replica Exchange Stochastic Gradient MCMC via Variance Reduction

Published: 12 Jan 2021, Last Modified: 05 May 2023 · ICLR 2021 Poster · Readers: Everyone
Keywords: variance reduction, replica exchange, parallel tempering, stochastic gradient Langevin dynamics, uncertainty quantification, change of measure, generalized Girsanov theorem, Dirichlet form, Markov jump process
Abstract: Replica exchange stochastic gradient Langevin dynamics (reSGLD) has shown promise in accelerating convergence in non-convex learning; however, an excessively large correction for avoiding biases from noisy energy estimators has limited the potential of the acceleration. To address this issue, we study variance reduction for the noisy energy estimators, which promotes much more effective swaps. Theoretically, we provide a non-asymptotic analysis of the exponential convergence of the underlying continuous-time Markov jump process; moreover, we consider a generalized Girsanov theorem which includes a change of Poisson measure to overcome the crude discretization based on Gr\"{o}nwall's inequality and yields a much tighter error bound in the 2-Wasserstein ($\mathcal{W}_2$) distance. Numerically, we conduct extensive experiments and obtain state-of-the-art results in optimization and uncertainty estimation on synthetic experiments and image data.
One-sentence Summary: We propose a variance-reduced replica exchange stochastic gradient Langevin dynamics algorithm that reduces the variance of the noisy energy estimators and thereby accelerates convergence.
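To make the abstract's central idea concrete, below is a minimal, self-contained sketch of (1) a control-variate-style variance-reduced mini-batch energy estimator and (2) a bias-corrected swap test between two temperature replicas that consumes it. The toy Gaussian energy, the function names (`energy`, `vr_energy`, `swap_prob`), and the exact form of the correction term are illustrative assumptions for exposition; they are not taken from the paper or the linked repository.

```python
# Sketch only: control-variate variance reduction for a noisy energy
# estimator, plus a bias-corrected replica-swap probability.
# All model choices and names here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000                       # full data size
data = rng.normal(1.0, 2.0, N)   # toy data set

def energy(theta, batch):
    """Mini-batch estimate of the full-data energy (negative log-likelihood
    of a toy Gaussian model), rescaled by N / |batch|."""
    return (N / len(batch)) * np.sum(0.5 * (batch - theta) ** 2)

def vr_energy(theta, theta_snapshot, full_energy_snapshot, batch):
    """Control-variate estimator: noisy energy at `theta` minus the noisy
    energy at a fixed snapshot, plus the snapshot's exact full-data energy.
    The two noisy terms are highly correlated, so their difference has much
    lower variance than the raw mini-batch estimate."""
    return energy(theta, batch) - energy(theta_snapshot, batch) + full_energy_snapshot

def swap_prob(u_low, u_high, tau_low, tau_high, var_hat):
    """Swap probability between a low- and a high-temperature replica with a
    variance-dependent correction; `var_hat` estimates the variance of the
    energy-difference estimator, which variance reduction shrinks."""
    delta = 1.0 / tau_low - 1.0 / tau_high
    return min(1.0, np.exp(delta * (u_low - u_high - 0.5 * delta * var_hat)))

# Compare the raw and variance-reduced estimates at one parameter value.
theta, theta_snap = 0.8, 0.7
full_energy_snap = energy(theta_snap, data)       # exact energy at the snapshot
batch = rng.choice(data, size=128, replace=False)
print("raw estimate:", energy(theta, batch))
print("vr  estimate:", vr_energy(theta, theta_snap, full_energy_snap, batch))
```

In this reading, a smaller estimator variance means a smaller correction term in `swap_prob`, so swaps are accepted more often and the low-temperature chain escapes local modes faster, which is the acceleration the abstract refers to.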
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Code: [WayneDW/Variance_Reduced_Replica_Exchange_SGMCMC](https://github.com/WayneDW/Variance_Reduced_Replica_Exchange_SGMCMC)