Keywords: Laplace approximation, uncertainty quantification, greedy subset selection, posterior predictive distributions
TL;DR: In this paper, we build upon work on uncertainty quantification for neural networks via sparse Laplace approximation by proposing two novel methods: (1) greedy subset selection, and (2) gradient-based thresholding.
Abstract: Laplace approximation is arguably the simplest approach to uncertainty quantification for the intractable posteriors associated with deep neural networks. While Laplace approximation based methods are widely studied, they are computationally infeasible for large networks due to the cost of inverting a (large) Hessian matrix. This has led to an emerging line of work that develops lower-dimensional or sparse approximations of the Hessian. In this paper, we build upon this work by proposing two novel sparse approximations of the Hessian: (1) greedy subset selection, and (2) gradient-based thresholding. We show via simulations that these methods perform well compared to current benchmarks across a broad range of experimental settings.
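The paper's implementation details are not given on this page, but the idea behind the second method can be sketched as follows: keep only parameters whose loss-gradient magnitude exceeds a threshold, and build the Laplace (Gaussian) posterior from the Hessian restricted to that sparse subset. The diagonal Hessian, the quantile-based threshold, and all variable names below are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_params = 10

# Stand-ins for a trained network's per-parameter loss gradient and a
# positive-definite diagonal Hessian approximation (both hypothetical).
grad = rng.normal(size=n_params)
hess_diag = np.abs(rng.normal(size=n_params)) + 1.0

# Gradient-based thresholding: select parameters with large |gradient|.
# Here the threshold tau keeps roughly the top 30% (an arbitrary choice).
tau = np.quantile(np.abs(grad), 0.7)
mask = np.abs(grad) >= tau

# Sparse diagonal Laplace approximation: posterior variance = 1 / Hessian
# on the selected subset; the remaining parameters are treated as
# deterministic (zero posterior variance), so no large inverse is needed.
post_var = np.where(mask, 1.0 / hess_diag, 0.0)

print("selected indices:", np.flatnonzero(mask))
print("posterior variances:", np.round(post_var, 3))
```

A greedy variant would instead add parameters to the subset one at a time according to some scoring criterion; both approaches avoid inverting the full Hessian by restricting inference to a small subset of parameters.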
Submission Number: 93