Generalizing Hamiltonian Monte Carlo with Neural Networks

Anonymous

Nov 03, 2017 (modified: Nov 03, 2017) · ICLR 2018 Conference Blind Submission
  • Abstract: We present a general-purpose method to train Markov chain Monte Carlo kernels (parameterized by deep neural networks) that converge and mix quickly to their target distribution. Our method generalizes Hamiltonian Monte Carlo and is trained to maximize expected squared jumped distance, a proxy for mixing speed (see the sketch after this list). We demonstrate significant empirical gains (up to 50 times greater effective sample size) on a collection of simple but challenging distributions. Finally, we show quantitative and qualitative gains on a real-world task: latent-variable generative modeling. Python source code will be open-sourced with the camera-ready paper.
  • TL;DR: General method to train expressive MCMC kernels parameterized with deep neural networks. Given a target distribution p, our method provides a fast-mixing sampler, able to efficiently explore the state space.
  • Keywords: Markov chain Monte Carlo, sampling, posterior, deep learning, Hamiltonian
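
To make the training objective concrete, here is a minimal sketch of how expected squared jumped distance (ESJD) can be estimated from a batch of proposed chain transitions. The array names and function are hypothetical illustrations, not the paper's released code; the actual loss used to train the kernel may shape this quantity differently.

```python
import numpy as np

def expected_squared_jumped_distance(x, x_proposed, accept_prob):
    """Monte Carlo estimate of expected squared jumped distance (ESJD).

    x           : (batch, dim) array of current chain states
    x_proposed  : (batch, dim) array of proposed next states
    accept_prob : (batch,) Metropolis-Hastings acceptance probabilities

    ESJD weights each proposed move's squared length by its probability
    of being accepted; larger values indicate faster mixing, which is
    why it serves as a trainable proxy objective.
    """
    sq_dist = np.sum((x_proposed - x) ** 2, axis=1)
    return np.mean(accept_prob * sq_dist)
```

In the setting the abstract describes, the proposal and acceptance probability are produced by a differentiable, neural-network-parameterized kernel, so this estimate can be maximized over the kernel's parameters by gradient ascent.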
