Hessian-Free Laplace in Bayesian Deep Learning

Published: 27 Oct 2023, Last Modified: 22 Dec 2023
Venue: RealML-2023
Keywords: Bayesian Neural Networks, Laplace Approximation, Epistemic Uncertainty
TL;DR: The Laplace approximation in Bayesian deep learning can be formulated as a finite difference between the network predictions under two point estimates, avoiding the need to calculate and invert Hessians.
Abstract: The Laplace approximation (LA) of the Bayesian posterior is a Gaussian distribution centered at the maximum a posteriori estimate. Its appeal in Bayesian deep learning stems from the ability to quantify uncertainty post-hoc (i.e., after standard network parameter optimization), the ease of sampling from the approximate posterior, and the analytic form of the model evidence. This uncertainty can in turn direct experimentation. However, an important computational bottleneck of LA is the need to calculate and invert the Hessian matrix of the log posterior. The Hessian may be approximated in a variety of ways, with quality varying across networks, datasets, and inference tasks. In this paper, we propose an alternative algorithm that sidesteps Hessian calculation and inversion. The Hessian-free Laplace (HFL) approximation uses the curvature of both the log posterior and the network prediction to estimate the variance of the prediction. Two point estimates are required: the standard maximum a posteriori parameters and the optimal parameters under a loss regularized by the network prediction. We show that, under the standard assumptions of LA in Bayesian deep learning, HFL targets the same variance as LA, and we explore this empirically in small-scale simulated experiments against the exact Hessian.
Submission Number: 45
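
The finite-difference view in the TL;DR can be illustrated on a toy problem: if theta_hat minimizes the negative log posterior L(theta) and theta_eps minimizes L(theta) - eps * f_theta(x*), then implicit differentiation gives (f_{theta_eps}(x*) - f_{theta_hat}(x*)) / eps ≈ grad_f^T H^{-1} grad_f, the linearized Laplace predictive variance. The sketch below is not the authors' code; the linear-Gaussian toy model, the sign convention on the regularizer, the helper names, and the step size `eps` are illustrative assumptions, chosen so that the exact Laplace variance has a closed form to compare against.

```python
# Hedged sketch: finite-difference (Hessian-free) estimate of the Laplace
# predictive variance on a toy linear-Gaussian regression, where the exact
# Laplace variance grad_f^T H^{-1} grad_f is available in closed form.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d = 50, 3
X = rng.normal(size=(n, d))
theta_true = rng.normal(size=d)
sigma2, tau2 = 0.25, 1.0                      # observation noise and prior variances
y = X @ theta_true + rng.normal(scale=np.sqrt(sigma2), size=n)

def f(theta, x):                              # "network" prediction (linear model here)
    return x @ theta

def neg_log_post(theta):                      # Gaussian likelihood + Gaussian prior
    return ((y - X @ theta) ** 2).sum() / (2 * sigma2) + (theta ** 2).sum() / (2 * tau2)

x_star = rng.normal(size=d)                   # test input

# Point estimate 1: standard MAP parameters.
theta_map = minimize(neg_log_post, np.zeros(d), tol=1e-10).x

# Reference: exact Laplace variance using the closed-form Hessian of the
# negative log posterior for this toy model.
H = X.T @ X / sigma2 + np.eye(d) / tau2
grad_f = x_star                               # gradient of the linear prediction wrt theta
var_laplace = grad_f @ np.linalg.solve(H, grad_f)

# Point estimate 2: optimum of the prediction-regularized loss, then a finite
# difference of the two predictions in place of Hessian inversion.
eps = 1e-2
theta_reg = minimize(lambda t: neg_log_post(t) - eps * f(t, x_star),
                     theta_map, tol=1e-10).x
var_hfl = (f(theta_reg, x_star) - f(theta_map, x_star)) / eps

print(f"Exact Laplace variance:         {var_laplace:.6f}")
print(f"Finite-difference estimate:     {var_hfl:.6f}")
```

Because the toy objective is quadratic, the finite difference matches the exact Laplace variance up to optimizer tolerance; for an actual network, the point made in the abstract is that the second optimization replaces forming and inverting the Hessian.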