Learning Likelihood-Free Reference Priors

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We propose two new methods for learning reference priors that do not require access to the model's likelihood function.
Abstract: Simulation modeling offers a flexible approach to constructing high-fidelity synthetic representations of complex real-world systems. However, the increased complexity of such models introduces additional complications, for example when carrying out statistical inference procedures. This has motivated a large and growing literature on *likelihood-free* or *simulation-based* inference methods, which approximate (e.g., Bayesian) inference without assuming access to the simulator's intractable likelihood function. A hitherto neglected problem in the simulation-based Bayesian inference literature is the challenge of constructing minimally informative *reference priors* for complex simulation models. Such priors maximise an expected Kullback-Leibler divergence from the prior to the posterior, thereby influencing posterior inferences minimally and enabling an "objective" approach to Bayesian inference that does not necessitate the incorporation of strong subjective prior beliefs. In this paper, we propose and test a selection of likelihood-free methods for learning reference priors for simulation models, using variational approximations to these priors and a variety of mutual information estimators. Our experiments demonstrate that good approximations to reference priors for simulation models are in this way attainable, providing a first step towards the development of likelihood-free objective Bayesian inference procedures.
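The abstract's key observation is that the reference-prior objective (expected KL from prior to posterior) equals the mutual information between the parameters and the data, which can be lower-bounded using only simulator draws. The sketch below is an illustrative toy, not the paper's implementation: it estimates I(θ; x) with the InfoNCE lower bound on jointly sampled (θ, x) pairs. The Gaussian "simulator" and the closed-form quadratic critic are hypothetical stand-ins; in practice the critic would be a trained neural network, and the candidate prior's parameters would be updated by gradient ascent on the bound.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, rng, noise=0.1):
    """Toy black-box simulator: x ~ N(theta, noise^2). The MI estimator only
    samples from it; the likelihood function is never evaluated."""
    return theta + noise * rng.normal(size=theta.shape)

def critic(theta, x, scale=0.1):
    """Hypothetical critic f(theta, x) approximating the log density ratio;
    here a fixed quadratic function instead of a learned network."""
    return -((x[None, :] - theta[:, None]) ** 2) / (2.0 * scale ** 2)

def infonce_bound(theta, x):
    """InfoNCE lower bound on I(theta; x) from K jointly sampled pairs
    (theta_i, x_i). The bound is capped at log K by construction."""
    F = critic(theta, x)                     # F[i, j] = f(theta_i, x_j)
    m = F.max(axis=1, keepdims=True)         # stable log-mean-exp per row
    lme = m.squeeze(1) + np.log(np.exp(F - m).mean(axis=1))
    return float(np.mean(np.diag(F) - lme))

K = 256
theta = rng.normal(0.0, 1.0, size=K)         # draws from a candidate prior
x = simulator(theta, rng)
mi_estimate = infonce_bound(theta, x)
print(f"InfoNCE MI lower bound: {mi_estimate:.3f} (cap: log K = {np.log(K):.3f})")
```

A reference-prior learner would wrap this estimator in an outer loop: parametrise the prior (e.g., as a variational family), re-sample θ and x each iteration, and adjust the prior's parameters to increase the bound. Note that InfoNCE saturates at log K, so large batches (or alternative MI estimators, as explored in the paper) are needed when the true mutual information is high.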
Lay Summary: Scientists often use simulators to better understand large-scale complex systems such as epidemics and financial markets. However, setting up a simulator that perfectly mirrors the real world at the right level of abstraction is difficult. In Bayesian statistics, we attempt to resolve our uncertainty by using data to update our beliefs about the real world. Through the process of collecting data and adjusting our beliefs, we can tweak a simulator until it matches the real-world system we seek to emulate. However, to update our beliefs upon seeing new data, we must first possess beliefs to begin with. In Bayesian statistics, this is known as the problem of prior specification. In cases where we are in a state of relative ignorance about the behaviour of a real-world system, we would like our prior beliefs, before having seen any data, to capture our complete lack of knowledge. In particular, we don't want our initial beliefs to have a heavy influence on our stance in the future, once we have seen some data. However, forming such prior beliefs can be difficult, especially for complicated phenomena. For example, consider running a simulation of the US stock market. It is difficult to write down all the *plausible ways* in which the stock market could evolve, and eliminate all impossibilities from consideration. To address this problem we propose two methods that learn an appropriate mathematical representation of uninformed beliefs, known as the reference prior, by repeatedly running computer simulations. Through these methods, scientists and practitioners may obtain uninformed beliefs which they can update upon seeing data, without concerns of initial bias clouding their judgement.
Link To Code: https://github.com/joelnmdyer/lf_reference_priors
Primary Area: Probabilistic Methods->Bayesian Models and Methods
Keywords: simulation-based inference, objective Bayes, reference priors, variational approximations, mutual information
Submission Number: 7582