Implicitly Bayesian Prediction Rules in Deep Learning

Published: 27 May 2024, Last Modified: 27 May 2024. AABI 2024 - Archival Track. License: CC BY 4.0
Keywords: Bayesian inference, exchangeability, conditionally identically distributed, deep learning, Dutch-book
TL;DR: This paper introduces a framework for thinking about and quantifying the Bayesian-ness of arbitrary, potentially black-box, predictive algorithms.
Abstract: The Bayesian approach leads to coherent updates of predictions under new data, which makes adhering to Bayesian principles appealing in decision-making contexts. Traditionally, integrating Bayesian principles into complex models such as deep learning involves setting priors and approximating posteriors, despite the lack of a direct interpretation for the parameters. In this paper, we rethink this approach and consider what characterises a Bayesian prediction rule. Algorithms meeting these criteria can be deemed implicitly Bayesian: they make the same predictions as some Bayesian model, without explicitly manifesting priors and posteriors. We propose how to evaluate a prediction rule's proximity to implicit Bayesianism, introduce results illustrating its benefits, and empirically test it across multiple prediction strategies.
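One way to make the idea concrete (an illustrative sketch, not the paper's actual metric, and all function names here are hypothetical): for exchangeable data, a Bayesian predictive rule's one-step-ahead prediction is invariant to the order of the observations, whereas an order-sensitive rule such as an exponentially weighted average is not. Measuring the spread of predictions across permutations of the data gives a crude proxy for how far a black-box rule is from being implicitly Bayesian.

```python
import itertools

def bayes_rule(xs, a=1.0, b=1.0):
    """Beta-Bernoulli posterior predictive P(x_{n+1}=1 | x_1..x_n).
    Depends on the data only through the count, so it is order-invariant."""
    return (a + sum(xs)) / (a + b + len(xs))

def ewma_rule(xs, decay=0.7):
    """Exponentially weighted rule: recent observations weigh more,
    so predictions depend on the order of the data."""
    p = 0.5
    for x in xs:
        p = decay * p + (1 - decay) * x
    return p

def order_sensitivity(rule, xs):
    """Spread of one-step-ahead predictions over all permutations of xs.
    Zero for an exchangeable (implicitly Bayesian) rule."""
    preds = [rule(list(perm)) for perm in itertools.permutations(xs)]
    return max(preds) - min(preds)

data = [1, 0, 1, 1, 0]
print(order_sensitivity(bayes_rule, data))  # 0.0: same prediction for every ordering
print(order_sensitivity(ewma_rule, data))   # positive: the rule is order-sensitive
```

The Beta-Bernoulli rule collapses every permutation to the same sufficient statistic (the count of ones), which is the kind of behaviour the paper's notion of implicit Bayesianism targets; the moving-average rule fails this check despite being a reasonable online predictor.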
Submission Number: 12