Keywords: Uncertainty, Bayesian, Distribution Shift, Neural Networks, Generative Models
TL;DR: We propose Generative Posterior Networks (GPNs), a generative model that uses unlabeled data to estimate epistemic uncertainty by regularizing the network towards samples from the prior.
Abstract: In many real-world problems, labeled training data is scarce while unlabeled data is abundant. We propose a new method, Generative Posterior Networks (GPNs), that uses unlabeled data to estimate epistemic uncertainty in high-dimensional problems. A GPN is a generative model that, given a prior distribution over functions, approximates the posterior distribution directly by regularizing the network towards samples from the prior. We prove theoretically that our method approximates the Bayesian posterior, and we show empirically that it improves epistemic uncertainty estimation and scalability over competing methods.
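The core idea in the abstract can be sketched numerically. This is a hedged illustration, not the authors' GPN implementation: each "posterior" model is fit to the small labeled set while being regularized, at unlabeled inputs, towards a function sampled from the prior. Here a linear-in-random-features model stands in for a neural network so the regularized fit has a closed form; all names, constants, and the toy data are assumptions.

```python
import numpy as np

# Sketch of the prior-regularization idea (assumed setup, not the paper's code).
rng = np.random.default_rng(0)

# Fixed random Fourier features stand in for a shared network body.
D = 50
W_feat = rng.normal(scale=2.0, size=(1, D))
b_feat = rng.uniform(0, 2 * np.pi, size=D)

def phi(x):
    """Map inputs of shape (n, 1) to features of shape (n, D)."""
    return np.cos(x @ W_feat + b_feat)

# Small labeled set on [-1, 1]; abundant unlabeled set on the wider [-3, 3].
x_lab = rng.uniform(-1, 1, size=(8, 1))
y_lab = np.sin(3 * x_lab[:, 0]) + 0.1 * rng.normal(size=8)
x_unl = rng.uniform(-3, 3, size=(200, 1))

A, B = phi(x_lab), phi(x_unl)
lam = 0.01   # strength of the prior regularizer (assumed value)
K = 20       # number of posterior samples

# Each sample minimizes ||A w - y||^2 + lam * ||B w - B w0||^2,
# i.e. fit the labels while matching a prior draw w0 at unlabeled points.
M = A.T @ A + lam * (B.T @ B)
samples = []
for _ in range(K):
    w0 = rng.normal(size=D)                   # draw a function from the prior
    rhs = A.T @ y_lab + lam * (B.T @ B) @ w0  # pull towards that prior sample
    samples.append(np.linalg.solve(M, rhs))
samples = np.array(samples)                   # (K, D)

def predict_std(x):
    """Epistemic uncertainty as the spread of posterior samples at x."""
    preds = phi(np.array([[x]])) @ samples.T  # (1, K)
    return preds.std()

std_near, std_far = predict_std(0.0), predict_std(2.5)
print(std_near, std_far)  # disagreement grows away from the labeled data
```

Because the prior term only binds where the data term is weak, the samples agree near the labeled inputs and revert towards diverse prior draws far from them, which is the behavior the abstract describes.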