Variational Linearized Laplace Approximation for Bayesian Deep Learning

Published: 27 May 2024, Last Modified: 02 Jul 2024 · AABI 2024 · CC BY 4.0
Keywords: Linearized Laplace Approximation, Uncertainty Estimation, Variational Sparse Gaussian Process
TL;DR: A Linearized Laplace Approximation built on a variational sparse Gaussian Process with a fixed mean, for uncertainty estimation in deep learning.
Abstract: The Linearized Laplace Approximation (LLA) has recently been used to perform uncertainty estimation on the predictions of pre-trained deep neural networks (DNNs). However, its widespread application is hindered by significant computational costs, particularly in scenarios with a large number of training points or DNN parameters. Consequently, additional approximations of LLA, such as Kronecker-factored or diagonal approximations of the generalized Gauss-Newton (GGN) matrix, are employed, potentially compromising the model's performance. To address these challenges, we propose a new method for approximating LLA using a variational sparse Gaussian Process (GP). Our method is based on the dual RKHS formulation of GPs and retains the output of the original DNN as the predictive mean. Furthermore, it allows for efficient stochastic optimization, which results in a training time that is sub-linear in the size of the training dataset; in fact, its training cost is independent of the number of training points. We compare our proposed method against accelerated LLA (ELLA), which relies on the Nyström approximation, as well as other LLA variants employing the sample-then-optimize principle. Experimental results show that our method outperforms these existing efficient variants of LLA, both in the quality of the predictive distribution and in total computational time.
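To make the core idea concrete, below is a minimal, illustrative sketch (not the authors' code) of the two ingredients the abstract describes: the predictive mean is fixed to the pre-trained DNN's output, while a sparse variational GP, defined over the kernel induced by the network's Jacobians (an NTK-style kernel), supplies the predictive variance. The toy "Jacobian features" `phi`, the inducing inputs `Z`, and the variational covariance `S` are assumptions made purely for illustration; in the real method these would come from the network's Jacobians at the MAP estimate and from the variational optimization.

```python
# Minimal sketch of a sparse-GP predictive variance over a
# linearized-network kernel, with the mean fixed to the DNN output.
import numpy as np

rng = np.random.default_rng(0)

D = 16   # number of Jacobian features (stands in for DNN parameters)
M = 8    # number of inducing points
N = 5    # number of test points

W = rng.normal(size=(1, D))

def phi(x):
    """Toy stand-in for the Jacobian J_theta(x) of a pre-trained DNN
    at the MAP estimate (hypothetical features for illustration)."""
    return np.tanh(x @ W)  # shape (n, D)

def kernel(a, b):
    """Linearized-network (NTK-style) kernel: k(x, x') = J(x) J(x')^T."""
    return phi(a) @ phi(b).T

Z = rng.normal(size=(M, 1))            # inducing inputs (variational params)
Kzz = kernel(Z, Z) + 1e-6 * np.eye(M)  # jitter for numerical stability

# Variational posterior q(u) = N(m, S) over inducing outputs. Since the
# predictive mean is fixed to the DNN output, only S matters here; we use
# an arbitrary PSD matrix as a placeholder for an optimized S.
L = rng.normal(size=(M, M)) * 0.1
S = L @ L.T + 0.5 * np.eye(M)

X_test = rng.normal(size=(N, 1))
Kxz = kernel(X_test, Z)
Kxx_diag = np.einsum('nd,nd->n', phi(X_test), phi(X_test))

# Standard sparse-GP predictive variance:
#   var(x) = k(x,x) + k(x,Z) Kzz^{-1} (S - Kzz) Kzz^{-1} k(Z,x)
A = np.linalg.solve(Kzz, Kxz.T)  # Kzz^{-1} k(Z, x), shape (M, N)
var = Kxx_diag + np.einsum('mn,mk,kn->n', A, S - Kzz, A)

# The predictive mean would simply be the pre-trained network's output
# f(x); all uncertainty comes from `var`.
print(var)
```

Because the variance only involves the M inducing points and a minibatch of data, the variational objective can be optimized stochastically, which is what makes the per-step training cost independent of the number of training points.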
Submission Number: 2