STOCHASTIC GRADIENT LANGEVIN DYNAMICS THAT EXPLOIT NEURAL NETWORK STRUCTURE

Feb 12, 2018 (edited Jun 04, 2018) · ICLR 2018 Workshop Submission
  • Keywords: Monte Carlo, Bayesian deep networks
  • TL;DR: We use a recent approximation to the Fisher information to improve approximate Bayesian inference with Langevin dynamics for deep neural networks.
  • Abstract: Tractable approximate Bayesian inference for deep neural networks remains challenging. Stochastic Gradient Langevin Dynamics (SGLD) offers a tractable approximation to the gold standard of Hamiltonian Monte Carlo. We improve on existing methods for SGLD by incorporating a recently developed tractable approximation of the Fisher information, known as K-FAC, as a preconditioner.
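
Below is a minimal NumPy sketch of the kind of update the abstract describes: SGLD on a single linear layer (Bayesian linear regression) preconditioned by a Kronecker-factored Fisher approximation F ≈ A ⊗ S. This is an illustrative reconstruction, not the paper's algorithm; the toy model, step size, damping, and exponential-moving-average factor statistics are all assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: Y = X W_true^T + noise, with an isotropic Gaussian prior on W.
n, d_in, d_out = 500, 5, 3
X = rng.normal(size=(n, d_in))
W_true = rng.normal(size=(d_out, d_in))
Y = X @ W_true.T + 0.1 * rng.normal(size=(n, d_out))

prior_prec = 1.0    # prior precision on W (assumed)
noise_prec = 100.0  # observation precision, 1 / 0.1**2
step = 1e-3         # SGLD step size epsilon (assumed fixed)
damping = 1e-2      # Tikhonov damping on each Kronecker factor (assumed)
decay = 0.95        # EMA decay for the factor statistics (assumed)
batch = 32

W = np.zeros((d_out, d_in))  # current sample in the chain
A = np.eye(d_in)             # K-FAC input factor,  approximates E[a a^T]
S = np.eye(d_out)            # K-FAC output factor, approximates E[g g^T]

def inv_sqrtm(M):
    """Inverse symmetric square root via eigendecomposition."""
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

for t in range(2000):
    idx = rng.choice(n, size=batch, replace=False)
    a, y = X[idx], Y[idx]

    # Stochastic gradient of the log posterior, rescaled to full-data size.
    g = noise_prec * (y - a @ W.T)            # per-example output gradients
    grad = (n / batch) * (g.T @ a) - prior_prec * W

    # Running Kronecker factors, as in K-FAC: F ~= A (x) S.
    A = decay * A + (1 - decay) * (a.T @ a) / batch
    S = decay * S + (1 - decay) * (g.T @ g) / batch
    A_reg = A + damping * np.eye(d_in)
    S_reg = S + damping * np.eye(d_out)

    # Preconditioned SGLD update:
    #   W <- W + (step/2) F^{-1} grad + N(0, step F^{-1}),
    # where F^{-1} vec(grad) = vec(S^{-1} grad A^{-1}).
    drift = 0.5 * step * np.linalg.inv(S_reg) @ grad @ np.linalg.inv(A_reg)
    Z = rng.normal(size=W.shape)
    noise = np.sqrt(step) * inv_sqrtm(S_reg) @ Z @ inv_sqrtm(A_reg)
    W = W + drift + noise

print("sampled W:\n", np.round(W, 2))
print("true W:\n", np.round(W_true, 2))
```

The noise covariance is set to step · F⁻¹ so that it matches the preconditioned drift, which is what keeps the chain targeting the posterior. A fully correct treatment of a position-dependent preconditioner also adds a correction drift term (as in preconditioned SGLD, Li et al., 2016); this sketch omits it.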