Generalizing Hamiltonian Monte Carlo with Neural Networks

15 Feb 2018 (modified: 14 Oct 2024) · ICLR 2018 Conference Blind Submission
Abstract: We present a general-purpose method to train Markov chain Monte Carlo kernels, parameterized by deep neural networks, that converge and mix quickly to their target distribution. Our method generalizes Hamiltonian Monte Carlo and is trained to maximize expected squared jumped distance, a proxy for mixing speed. We demonstrate large empirical gains on a collection of simple but challenging distributions, for instance achieving a 106x improvement in effective sample size in one case, and mixing when standard HMC makes no measurable progress in a second. Finally, we show quantitative and qualitative gains on a real-world task: latent-variable generative modeling. Python source code will be open-sourced with the camera-ready paper.
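To make the training objective concrete, here is a minimal sketch of the expected squared jumped distance (ESJD) mentioned in the abstract: the acceptance-weighted squared distance between current and proposed states, averaged over a batch of chains. The function and variable names are assumptions for this example, not the paper's implementation.

```python
# Illustrative sketch (not the L2HMC code): expected squared jumped distance
# as a proxy for mixing speed. Shapes and names are assumptions.
import numpy as np

def expected_squared_jumped_distance(x, x_proposed, accept_prob):
    """ESJD for a batch of chains.

    x            : (batch, dim) current states
    x_proposed   : (batch, dim) proposed states from the sampler
    accept_prob  : (batch,)     Metropolis-Hastings acceptance probabilities
    """
    sq_jump = np.sum((x_proposed - x) ** 2, axis=-1)   # ||x' - x||^2 per chain
    return np.mean(accept_prob * sq_jump)              # batch average

# Toy usage with random states and acceptance probabilities.
rng = np.random.default_rng(0)
x = rng.normal(size=(64, 2))
x_prop = x + rng.normal(scale=0.5, size=(64, 2))
a = rng.uniform(size=64)
print(expected_squared_jumped_distance(x, x_prop, a))
```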
TL;DR: General method to train expressive MCMC kernels parameterized with deep neural networks. Given a target distribution p, our method provides a fast-mixing sampler, able to efficiently explore the state space.
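For reference, a minimal sketch of one step of standard HMC, the sampler that the learned kernel generalizes. `log_p` and `grad_log_p` are hypothetical callables for the target distribution; this is a textbook leapfrog update, not the paper's code.

```python
# Minimal sketch (assumptions, not the paper's code): one standard HMC step.
import numpy as np

def hmc_step(x, log_p, grad_log_p, step_size=0.1, n_leapfrog=10, rng=None):
    """One HMC transition: leapfrog integration plus Metropolis accept/reject."""
    rng = rng or np.random.default_rng()
    v = rng.normal(size=x.shape)                  # resample momentum
    x_new, v_new = x.copy(), v.copy()
    v_new += 0.5 * step_size * grad_log_p(x_new)  # half-step for momentum
    for _ in range(n_leapfrog - 1):
        x_new += step_size * v_new                # full step for position
        v_new += step_size * grad_log_p(x_new)    # full step for momentum
    x_new += step_size * v_new
    v_new += 0.5 * step_size * grad_log_p(x_new)  # final half-step for momentum
    # Metropolis-Hastings correction on the joint (position, momentum) energy.
    log_accept = (log_p(x_new) - 0.5 * v_new @ v_new) - (log_p(x) - 0.5 * v @ v)
    if np.log(rng.uniform()) < log_accept:
        return x_new
    return x
```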
Keywords: Markov chain Monte Carlo, sampling, posterior, deep learning, Hamiltonian, MCMC
Code: [brain-research/l2hmc](https://github.com/brain-research/l2hmc) · [1 community implementation on Papers with Code](https://paperswithcode.com/paper/?openreview=B1n8LexRZ)
Community Implementations: [2 code implementations on CatalyzeX](https://www.catalyzex.com/paper/generalizing-hamiltonian-monte-carlo-with/code)