Posterior Refinement Improves Sample Efficiency in Bayesian Neural Networks

Published: 31 Oct 2022, Last Modified: 10 Oct 2022, NeurIPS 2022 Accept, Readers: Everyone
Keywords: Bayesian neural networks, predictive calibration, normalizing flows
Abstract: Monte Carlo (MC) integration is the _de facto_ method for approximating the predictive distribution of Bayesian neural networks (BNNs). However, even with many MC samples, Gaussian-based BNNs can still yield poor predictive performance due to error in the posterior approximation, while alternatives to MC integration are expensive. In this work, we experimentally show that the key to good MC-approximated predictive distributions is the quality of the approximate posterior itself. However, previous methods for obtaining accurate posterior approximations are expensive and non-trivial to implement. We therefore propose to refine Gaussian approximate posteriors with normalizing flows. When applied to last-layer BNNs, this yields a simple, cost-efficient, _post hoc_ method for improving pre-existing parametric approximations. We show that the resulting posterior approximation is competitive with even the gold-standard full-batch Hamiltonian Monte Carlo.
TL;DR: We study common problems with crude BNN approximate posteriors and propose a simple technique for refining them.
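To make the high-level idea in the abstract concrete, here is a minimal PyTorch sketch of last-layer posterior refinement: a pre-existing Gaussian posterior over the last-layer weights is used as the base distribution, a stack of planar normalizing flows is trained _post hoc_ by maximizing the ELBO, and the predictive distribution is then obtained by plain MC integration over the refined posterior. The planar-flow choice, the toy data, and all variable names are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of last-layer posterior refinement with normalizing flows.
# Assumptions: a fixed feature extractor, a given diagonal-Gaussian
# last-layer posterior (e.g. from Laplace/VI), planar flows for refinement.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class PlanarFlow(nn.Module):
    """One planar-flow layer: f(z) = z + u * tanh(w^T z + b)."""
    def __init__(self, dim):
        super().__init__()
        self.u = nn.Parameter(torch.randn(dim) * 0.01)
        self.w = nn.Parameter(torch.randn(dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z):
        # Enforce invertibility via the standard u_hat reparameterization.
        wu = self.w @ self.u
        u_hat = self.u + (F.softplus(wu) - 1.0 - wu) * self.w / (self.w @ self.w)
        lin = z @ self.w + self.b                              # (batch,)
        f_z = z + u_hat * torch.tanh(lin).unsqueeze(-1)
        psi = (1 - torch.tanh(lin) ** 2).unsqueeze(-1) * self.w
        log_det = torch.log(torch.abs(1 + psi @ u_hat) + 1e-8)  # change-of-variables term
        return f_z, log_det

# Toy setup: fixed features and a given Gaussian last-layer posterior.
# (In practice, mu and log_sigma would come from a pre-existing Laplace/VI fit.)
torch.manual_seed(0)
n, d_feat, n_cls = 256, 16, 3
features = torch.randn(n, d_feat)                  # phi(x): fixed feature extractor outputs
labels = torch.randint(0, n_cls, (n,))
dim = d_feat * n_cls                               # flattened last-layer weights

mu = torch.randn(dim) * 0.1                        # base Gaussian mean (given)
log_sigma = torch.full((dim,), -2.0)               # base Gaussian log-std (given)

flows = nn.ModuleList([PlanarFlow(dim) for _ in range(10)])
opt = torch.optim.Adam(flows.parameters(), lr=1e-2)
prior_var = 1.0                                    # isotropic Gaussian weight prior

# Refinement: maximize the ELBO w.r.t. the flow parameters only (post hoc).
for step in range(2000):
    eps = torch.randn(32, dim)
    z = mu + eps * log_sigma.exp()                 # samples from the base Gaussian
    log_q = (-0.5 * eps.pow(2) - log_sigma - 0.5 * math.log(2 * math.pi)).sum(-1)
    for flow in flows:
        z, log_det = flow(z)
        log_q = log_q - log_det                    # density of the refined posterior
    W = z.view(-1, n_cls, d_feat)                  # refined last-layer weight samples
    logits = torch.einsum('nd,skd->snk', features, W)
    log_lik = -F.cross_entropy(
        logits.reshape(-1, n_cls), labels.repeat(len(z)), reduction='none'
    ).view(len(z), n).sum(-1)
    log_prior = -0.5 * (z.pow(2) / prior_var).sum(-1)
    elbo = (log_lik + log_prior - log_q).mean()
    opt.zero_grad(); (-elbo).backward(); opt.step()

# Prediction: MC integration with samples from the refined posterior.
with torch.no_grad():
    z = mu + torch.randn(100, dim) * log_sigma.exp()
    for flow in flows:
        z, _ = flow(z)
    W = z.view(-1, n_cls, d_feat)
    probs = torch.softmax(torch.einsum('nd,skd->snk', features, W), dim=-1).mean(0)
```

Because only the flow parameters are optimized while the base Gaussian and the feature extractor stay fixed, the refinement is cheap and can be bolted onto an existing last-layer approximation, which is the sense in which the abstract calls the method simple and _post hoc_.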
Supplementary Material: pdf