Abstract: Approximate Bayesian inference typically revolves around computing the posterior parameter distribution. The main practical interest, however, often lies in a model’s predictions rather than its parameters. In this work, we propose to bypass the posterior, focusing directly on approximating the posterior predictive distribution. We achieve this by drawing inspiration from self-supervised and semi-supervised learning. Essentially, we quantify a Bayesian model’s predictive uncertainty by refitting on self-predicted data. The idea is strikingly simple: if a model assigns high likelihood to self-predicted data, these predictions are of low uncertainty, and vice versa. The modular structure of our Self-Supervised Laplace Approximation (SSLA) further allows plugging in different prior specifications, enabling classical Bayesian sensitivity analysis (w.r.t. prior choice). To avoid expensive refitting, we also introduce an approximate version of SSLA, called ASSLA. We study (A)SSLA both theoretically and empirically by employing it in models ranging from Bayesian linear models to Bayesian neural networks. Our approximations outperform classical Laplace approximations on a wide array of both simulated and real-world datasets.
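The following is a minimal, hypothetical sketch of the self-supervised refitting idea described in the abstract, not the authors' SSLA implementation: a Bayesian linear model is fit, queried at a test input, refit on the resulting self-predicted label, and the likelihood the refit model assigns to that self-prediction is read as a proxy for predictive certainty. All model choices (Gaussian prior, noise variance, the query point `x_star`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_bayes_linear(X, y, alpha=1.0, sigma2=0.25):
    """Posterior mean/covariance of weights under a Gaussian prior N(0, alpha^-1 I)."""
    d = X.shape[1]
    S_inv = alpha * np.eye(d) + X.T @ X / sigma2
    S = np.linalg.inv(S_inv)
    m = S @ X.T @ y / sigma2
    return m, S

def predictive(x, m, S, sigma2=0.25):
    """Gaussian posterior predictive mean and variance at a single input x."""
    return x @ m, sigma2 + x @ S @ x

# Toy data: y = 2x + noise (purely illustrative)
X = rng.normal(size=(50, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=50)

m, S = fit_bayes_linear(X, y)

x_star = np.array([1.5])              # hypothetical query point
y_hat, _ = predictive(x_star, m, S)   # self-predicted label

# "Self-supervised" step: refit on the data augmented with the pair (x*, y_hat)
X_aug = np.vstack([X, x_star[None, :]])
y_aug = np.append(y, y_hat)
m_aug, S_aug = fit_bayes_linear(X_aug, y_aug)

# Likelihood of the self-prediction under the refit model: high likelihood
# indicates low predictive uncertainty at x*, low likelihood the opposite.
mu, var = predictive(x_star, m_aug, S_aug)
log_lik = -0.5 * (np.log(2 * np.pi * var) + (y_hat - mu) ** 2 / var)
print(f"self-prediction log-likelihood at x*: {log_lik:.3f}")
```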
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Bruno_Loureiro1
Submission Number: 6712