Effective Bayesian Heteroscedastic Regression with Deep Neural Networks

Published: 21 Sept 2023, Last Modified: 12 Jan 2024 · NeurIPS 2023 poster
Keywords: Heteroscedastic Regression, Marginal Likelihood, Bayesian Neural Networks, Uncertainty Estimation, Model Selection, Laplace Approximation
TL;DR: We propose the first efficient Laplace approximation for heteroscedastic neural networks, show it scales to image data, and derive its posterior predictive.
Abstract: Flexibly quantifying both irreducible aleatoric and model-dependent epistemic uncertainties plays an important role for complex regression problems. While deep neural networks in principle can provide this flexibility and learn heteroscedastic aleatoric uncertainties through non-linear functions, recent works highlight that maximizing the log-likelihood objective parameterized by mean and variance can lead to compromised mean fits since the gradients are scaled by the predictive variance, and propose adjustments in line with this premise. We instead propose to use the natural parametrization of the Gaussian, which has been shown to be more stable for heteroscedastic regression based on non-linear feature maps and Gaussian processes. Further, we emphasize the significance of principled regularization of the network parameters and prediction. We therefore propose an efficient Laplace approximation for heteroscedastic neural networks that allows automatic regularization through empirical Bayes and provides epistemic uncertainties, both of which improve generalization. We showcase on a range of regression problems—including a new heteroscedastic image regression benchmark—that our methods are scalable, improve over previous approaches for heteroscedastic regression, and provide epistemic uncertainty without requiring hyperparameter tuning.
Submission Number: 12444
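
To illustrate the natural parametrization of the Gaussian mentioned in the abstract, here is a minimal sketch of a heteroscedastic likelihood in that parametrization, assuming PyTorch. The two-head layer, the softplus link enforcing η₂ < 0, and all names (`NaturalHeteroscedasticHead`, `natural_gaussian_nll`) are illustrative assumptions, not the authors' implementation, which additionally involves a Laplace approximation and empirical-Bayes regularization not shown here.

```python
# Sketch (not the paper's code): Gaussian natural parametrization
# eta1 = mu / sigma^2, eta2 = -1 / (2 sigma^2), with eta2 < 0.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class NaturalHeteroscedasticHead(nn.Module):
    """Hypothetical two-output head predicting the natural parameters."""
    def __init__(self, in_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, 2)

    def forward(self, h: torch.Tensor):
        f1, f2 = self.linear(h).unbind(dim=-1)
        eta1 = f1
        eta2 = -F.softplus(f2) - 1e-6  # keep eta2 strictly negative
        return eta1, eta2

def natural_gaussian_nll(eta1, eta2, y):
    """Negative log-likelihood in natural form:
    log p(y) = eta1*y + eta2*y^2 - A(eta), where the log-partition is
    A(eta) = -eta1^2/(4*eta2) + 0.5*log(pi) - 0.5*log(-eta2)."""
    log_partition = (-eta1**2 / (4 * eta2)
                     + 0.5 * math.log(math.pi)
                     - 0.5 * torch.log(-eta2))
    log_lik = eta1 * y + eta2 * y**2 - log_partition
    return -log_lik.mean()

# Usage: recover mean/variance for prediction from the natural parameters:
#   mu = -eta1 / (2 * eta2)    sigma^2 = -1 / (2 * eta2)
head = NaturalHeteroscedasticHead(in_features=16)
h, y = torch.randn(8, 16), torch.randn(8)
loss = natural_gaussian_nll(*head(h), y)
loss.backward()
```

One motivation for this form is that the loss is jointly concave in (η₁, η₂) for exponential families, which is the stability property the abstract alludes to for non-linear feature maps and Gaussian processes.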