Exact Langevin Dynamics with Stochastic Gradients

Published: 21 Dec 2020, Last Modified: 05 May 2023. Venue: AABI 2020
Keywords: stochastic gradient, MCMC, Langevin dynamics, Bayesian deep learning, approximate inference
TL;DR: We show that SGHMC has zero acceptance probability. We fix this, and review a scheme to calculate MH acceptance only after many steps.
Abstract: Stochastic gradient Markov chain Monte Carlo algorithms are popular samplers for approximate inference, but they are generally biased. We show that many recent versions of these methods (e.g. Chen et al. (2014)) cannot be corrected with a Metropolis-Hastings accept/reject step, because their acceptance probability is always zero. We can fix this by employing a sampler with realizable backward trajectories, such as Gradient-Guided Monte Carlo (Horowitz, 1991), which generalizes stochastic gradient Langevin dynamics (Welling and Teh, 2011) and Hamiltonian Monte Carlo. We show that this sampler can be used with stochastic gradients, yielding nonzero acceptance probabilities that can be computed even across multiple steps.
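
To make the idea concrete, here is a minimal sketch of a single Gradient-Guided Monte Carlo step: a partial momentum refreshment, one leapfrog step, an exact Metropolis-Hastings test on the leapfrog energy error, and a momentum flip on rejection. It uses a full-batch gradient of a toy Gaussian target with a unit mass matrix; the function names and target are illustrative assumptions, and the stochastic-gradient, multi-step acceptance scheme described in the abstract is not reproduced here.

```python
import numpy as np

def log_prob(q):
    # Toy target: standard Gaussian (stand-in for a Bayesian posterior).
    return -0.5 * np.sum(q ** 2)

def grad_log_prob(q):
    # Gradient of the toy log-density above.
    return -q

def ggmc_step(q, p, step_size, friction, rng):
    """One Gradient-Guided Monte Carlo step (illustrative sketch only).

    Full-batch gradients, unit mass matrix, hypothetical interface;
    not the paper's stochastic-gradient variant.
    """
    a = np.exp(-friction * step_size)  # momentum persistence over one step

    # O: first half of the partial momentum refreshment (exact OU step).
    p = np.sqrt(a) * p + np.sqrt(1 - a) * rng.standard_normal(p.shape)

    # Energy before the deterministic leapfrog part.
    h_old = -log_prob(q) + 0.5 * np.sum(p ** 2)

    # B-A-B: one leapfrog step.
    p_new = p + 0.5 * step_size * grad_log_prob(q)
    q_new = q + step_size * p_new
    p_new = p_new + 0.5 * step_size * grad_log_prob(q_new)
    h_new = -log_prob(q_new) + 0.5 * np.sum(p_new ** 2)

    # Metropolis-Hastings test on the leapfrog energy error; the acceptance
    # probability is nonzero because the reverse trajectory is realizable.
    if np.log(rng.uniform()) < h_old - h_new:
        q, p = q_new, p_new
    else:
        p = -p  # momentum flip on rejection

    # O: second half of the partial momentum refreshment.
    p = np.sqrt(a) * p + np.sqrt(1 - a) * rng.standard_normal(p.shape)
    return q, p

if __name__ == "__main__":
    # Short chain on the toy target.
    rng = np.random.default_rng(0)
    q, p = np.zeros(2), rng.standard_normal(2)
    samples = []
    for _ in range(1000):
        q, p = ggmc_step(q, p, step_size=0.1, friction=1.0, rng=rng)
        samples.append(q.copy())
    print(np.mean(samples, axis=0), np.var(samples, axis=0))
```

Setting the friction high recovers behavior close to (Metropolis-adjusted) Langevin dynamics, while low friction keeps momentum across steps as in Hamiltonian Monte Carlo; this is the sense in which the sampler interpolates between the two.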