Keywords: bayesian neural networks, functional priors
TL;DR: We train neural linear models with functional priors to demonstrate their benefit over weight-space priors and to motivate further research.
Abstract: Neural linear models (NLMs) and Gaussian processes (GPs) are both examples of Bayesian linear regression on rich feature spaces.
In contrast to the widespread use of nonparametric GPs for probabilistic nonlinear regression,
NLMs remain an underused parametric alternative because standard type II maximum likelihood (ML) training
leads to overconfidence outside the data distribution.
Therefore, we propose to augment this training procedure with the functional variational inference (fVI) scheme of Sun et al. (2019), which is particularly well suited to NLMs due to their closed-form predictive distribution.
Additionally, we investigate whether an appropriate functional prior can guide parametric NLMs to attain nonparametric GP performance, despite using fewer parameters.
Results show that functional priors do improve the performance of NLMs over ML training, and that NLMs perform on par with weight-space BNNs in this setting.
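
As a minimal sketch of the closed-form predictive distribution the abstract refers to, the snippet below performs Bayesian linear regression on fixed neural features. The `features` function, the prior precision `alpha`, and the noise variance `noise_var` are illustrative assumptions standing in for the network's penultimate layer and the paper's hyperparameters, not the authors' actual code:

```python
import numpy as np

def nlm_predictive(features, X_train, y_train, X_test, alpha=1.0, noise_var=0.1):
    """Closed-form posterior predictive of a neural linear model.

    `features` is a hypothetical stand-in for the network's penultimate layer.
    alpha:     precision of the Gaussian prior N(0, alpha^{-1} I) on last-layer weights
    noise_var: observation noise variance sigma^2
    """
    Phi = features(X_train)        # (N, D) training feature matrix
    Phi_star = features(X_test)    # (M, D) test feature matrix
    D = Phi.shape[1]

    # Gaussian posterior over last-layer weights: N(m, S)
    S_inv = alpha * np.eye(D) + Phi.T @ Phi / noise_var
    S = np.linalg.inv(S_inv)
    m = S @ Phi.T @ y_train / noise_var

    # Gaussian predictive at each test point: mean and variance in closed form
    mean = Phi_star @ m
    var = np.einsum("md,dk,mk->m", Phi_star, S, Phi_star) + noise_var
    return mean, var
```

Because the predictive is an analytic Gaussian, quantities such as the fVI objective or predictive log-likelihoods can be evaluated without Monte Carlo sampling over weights, which is why the abstract notes that fVI is a natural fit for NLMs.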