STOCHASTIC GRADIENT LANGEVIN DYNAMICS THAT EXPLOIT NEURAL NETWORK STRUCTURE

Zachary Nado, Jasper Snoek, Roger Grosse, David Duvenaud, Bowen Xu, James Martens

Feb 12, 2018 (modified: Jun 04, 2018) · ICLR 2018 Workshop Submission
  • Abstract: Tractable approximate Bayesian inference for deep neural networks remains challenging. Stochastic Gradient Langevin Dynamics (SGLD) offers a tractable approximation to the gold standard of Hamiltonian Monte Carlo. We improve on existing SGLD methods by incorporating a recently developed tractable approximation of the Fisher information, known as K-FAC, as a preconditioner (see the update-rule sketch below).
  • Keywords: Monte Carlo, Bayesian deep networks
  • TL;DR: We use a recent approximation for the Fisher information to improve approximate Bayesian inference for deep neural networks with Langevin Dynamics.
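
For context, the update described in the abstract can be sketched in the standard preconditioned-SGLD form. This is a minimal sketch, not the paper's exact formulation; the symbols below are assumptions introduced here for illustration: \hat{F}_t denotes the K-FAC approximation to the Fisher information at step t, \epsilon_t the step size, N the dataset size, and n the minibatch size.

```latex
% Sketch of preconditioned SGLD (notation assumed here, not taken from the paper):
%   \hat{F}_t  -- K-FAC approximation to the Fisher information at step t
%   \epsilon_t -- step size;  N -- dataset size;  n -- minibatch size
\theta_{t+1} = \theta_t
  + \frac{\epsilon_t}{2}\, \hat{F}_t^{-1}
    \Big( \nabla_\theta \log p(\theta_t)
        + \frac{N}{n} \sum_{i=1}^{n} \nabla_\theta \log p(x_i \mid \theta_t) \Big)
  + \eta_t,
\qquad \eta_t \sim \mathcal{N}\!\big(0,\; \epsilon_t\, \hat{F}_t^{-1}\big)
```

Note that the injected Gaussian noise has covariance matched to the preconditioner rather than being isotropic; in preconditioned SGLD this matching is what keeps the chain sampling from the posterior as the step size is annealed.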