Keywords: Bayesian Deep Learning, Uncertainty Quantification
TL;DR: Much work has been done on inferring the posterior in Bayesian deep learning, but little on how to make predictions from it. We propose a new way to do so.
Abstract: The rising interest in Bayesian deep learning (BDL) has led to a plethora of methods for estimating the posterior distribution. However, efficient computation of inferences, such as predictions, has been largely overlooked, with Monte Carlo integration remaining the standard. In this work we examine streamlining prediction in BDL through a single forward pass without sampling. To this end we use local linearisation of activation functions and local Gaussian approximations at linear layers, which allows us to analytically compute an approximation to the posterior predictive distribution. We showcase our approach for both MLP and transformer architectures and assess its performance on regression and classification tasks.
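The abstract's idea of a sampling-free forward pass can be illustrated with a minimal sketch (not the authors' code; all names and the factorised-Gaussian weight posterior are assumptions): a Gaussian over inputs is pushed through a linear layer with Gaussian weights in closed form, then through a tanh via local linearisation, yielding an analytic approximation to the predictive mean and variance.

```python
import numpy as np

def linear_gaussian(mu_x, var_x, M, V, b):
    """Propagate a diagonal Gaussian N(mu_x, diag(var_x)) through a
    linear layer whose weights have a factorised Gaussian posterior
    with mean M and elementwise variance V (hypothetical setup)."""
    mu_y = M @ mu_x + b
    # Diagonal output variance, assuming independent weights and inputs:
    # Var[y_i] = sum_j M_ij^2 var_x_j + V_ij mu_x_j^2 + V_ij var_x_j
    var_y = (M**2) @ var_x + V @ (mu_x**2) + V @ var_x
    return mu_y, var_y

def tanh_linearised(mu, var):
    """Local linearisation f(x) ~ f(mu) + f'(mu)(x - mu), so the output
    stays Gaussian with variance scaled by the squared derivative."""
    m = np.tanh(mu)
    grad = 1.0 - m**2  # derivative of tanh at the mean
    return m, (grad**2) * var

# Toy example: one hidden layer, single deterministic forward pass.
rng = np.random.default_rng(0)
mu_x, var_x = rng.normal(size=4), np.full(4, 0.1)
M, V, b = rng.normal(size=(3, 4)), np.full((3, 4), 0.01), np.zeros(3)
mu_h, var_h = tanh_linearised(*linear_gaussian(mu_x, var_x, M, V, b))
```

Stacking such steps layer by layer gives the predictive mean and variance in one pass, in contrast to averaging over many sampled networks.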
Submission Number: 69