Non-reversibly updating a uniform [0,1] value for accept/reject decisions

16 Oct 2019 (modified: 20 Oct 2024) · AABI 2019
Keywords: Markov chain Monte Carlo
TL;DR: A non-reversible way of making accept/reject decisions can be beneficial
Abstract: I show how it can be beneficial to express Metropolis accept/reject decisions in terms of comparison with a uniform [0,1] value, and to then update this uniform value non-reversibly, as part of the Markov chain state, rather than sampling it independently each iteration. This provides a small improvement for random walk Metropolis and Langevin updates in high dimensions. It produces a larger improvement when using Langevin updates with persistent momentum, giving performance comparable to that of Hamiltonian Monte Carlo (HMC) with long trajectories. This is of significance when some variables are updated by other methods, since if HMC is used, these updates can be done only between trajectories, whereas they can be done more often with Langevin updates. This is seen for a Bayesian neural network model, in which connection weights are updated by persistent Langevin or HMC, while hyperparameters are updated by Gibbs sampling.
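The core idea can be sketched in a few lines. Below is a minimal random-walk Metropolis illustration based on the abstract, not the paper's reference implementation: the target `log_pi`, the step sizes `sigma` and `delta`, and the small noise scale are placeholder choices, and the non-reversible update of the uniform value follows one natural reading of the method, with u represented as |v| for v swept back and forth on [-1,1).

```python
import numpy as np

rng = np.random.default_rng(1)

def log_pi(x):
    # Example target: standard Gaussian (an assumption for this sketch;
    # substitute any log density).
    return -0.5 * np.sum(x**2)

def nonrev_metropolis(x0, n_iter, sigma=0.5, delta=0.05):
    """Random-walk Metropolis where the uniform [0,1] value u used in the
    accept/reject test is kept as part of the Markov chain state and
    updated non-reversibly, rather than sampled fresh each iteration.

    u is stored as |v| with v in [-1,1); each iteration v is shifted by
    delta plus a little noise and wrapped, so u sweeps systematically up
    and down through [0,1].
    """
    x = np.asarray(x0, dtype=float)
    v = rng.uniform(-1, 1)            # u = |v| is uniform on [0,1]
    lp = log_pi(x)
    samples = np.empty((n_iter, x.size))
    for i in range(n_iter):
        # Non-reversible update of the persistent uniform value.
        v = v + delta + 0.01 * rng.standard_normal()
        v = (v + 1.0) % 2.0 - 1.0     # wrap back into [-1,1)
        u = abs(v)

        # Ordinary random-walk proposal.
        x_prop = x + sigma * rng.standard_normal(x.size)
        lp_prop = log_pi(x_prop)
        log_a = lp_prop - lp          # log of the acceptance ratio a

        # Accept if u < a, comparing in log space.
        if np.log(max(u, 1e-300)) < log_a:
            # On acceptance, rescale u to u/a (guaranteed to stay in
            # [0,1] since u < a), which preserves the joint distribution
            # of (x, u) when u persists across iterations.
            u = min(1.0, u * np.exp(-log_a))
            v = np.copysign(u, v)
            x, lp = x_prop, lp_prop
        samples[i] = x
    return samples
```

Setting `delta` to zero and replacing the persistent v with a fresh uniform draw each iteration recovers standard Metropolis; the slowly drifting u is what makes the accept/reject decisions non-reversible.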
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/non-reversibly-updating-a-uniform-value-for/code)