Ergodic Measure Preserving Flows

Yichuan Zhang, José Miguel Hernández-Lobato, Zoubin Ghahramani

Sep 27, 2018 — ICLR 2019 Conference Blind Submission
  • Abstract: Training probabilistic models with neural network components is intractable in most cases and requires the use of approximations such as Markov chain Monte Carlo (MCMC), which is not scalable and requires significant hyper-parameter tuning, or mean-field variational inference (VI), which is biased. While there have been attempts at combining both approaches, the resulting methods have important limitations in theory and in practice. As an alternative, we propose a novel method that is scalable, like mean-field VI, and, due to its theoretical foundation in ergodic theory, is also asymptotically accurate, like MCMC. We test our method on popular benchmark problems with deep generative models and Bayesian neural networks. Our results show that we can outperform existing approximate inference methods.
  • Keywords: Markov chain Monte Carlo, variational inference, deep generative models
  • TL;DR: A novel, computationally scalable inference framework for training deep generative models and for general statistical inference.