Keywords: Bayesian optimal experiment design, expected information gain, mutual information, model evidence, sequential Monte Carlo, Markov chain Monte Carlo, Bayesian methods
TL;DR: We propose a sequential Monte Carlo estimator for expected information gain that reduces computation in Bayesian optimal experiment design using backward tempering.
Abstract: Expected information gain (EIG) is a crucial quantity in Bayesian optimal experimental design (BOED), quantifying how useful an experiment is by how much we expect the posterior to differ from the prior.
However, evaluating the EIG can be computationally expensive since it generally requires estimating the posterior normalizing constant.
In this work, we leverage two idiosyncrasies of BOED to improve the efficiency of EIG estimation via sequential Monte Carlo (SMC).
First, in BOED we simulate the data and thus know the true underlying parameters.
Second, we ultimately care about the EIG, not the individual normalizing constants.
We often observe that the Monte Carlo variance of a standard SMC estimator for the normalizing constant of a single dataset is significantly lower than the variance of the normalizing constants across datasets; the latter therefore contributes most of the variance of EIG estimates.
This suggests the potential to slightly increase variance while drastically decreasing computation time by reducing the SMC population size, which leads us to an EIG-specific SMC estimator that starts with only a single sample from the posterior and tempers \textit{backwards} towards the prior. Using this single-sample estimator, which we call reverse-annealed SMC (RA-SMC), we show that EIG can be estimated with orders of magnitude fewer likelihood evaluations in three models: a four-dimensional spring-mass model, a six-dimensional Johnson-Cook model, and a four-dimensional source-finding problem.
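To make the idea concrete, here is a minimal sketch of a single-sample backward-tempering estimator on a toy conjugate-Gaussian model (this is an illustrative reverse annealed-importance-sampling pass with random-walk Metropolis moves, not the authors' full RA-SMC algorithm; the model, temperature ladder, and step size are all assumptions). Because the data are simulated, the true parameter drawn from the prior is an exact posterior sample, so each dataset's evidence can be estimated by tempering that single sample backwards from the posterior ($\beta = 1$) to the prior ($\beta = 0$):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0          # likelihood std (toy-model assumption)
n_datasets = 300     # outer Monte Carlo over simulated experiments
K = 50               # number of temperature steps
betas = np.linspace(0.0, 1.0, K + 1)

def loglik(theta, y):
    return -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * (y - theta) ** 2 / sigma**2

def log_target(theta, y, b):
    # tempered unnormalized posterior: prior(theta) * likelihood(y|theta)^b
    return -0.5 * theta**2 + b * loglik(theta, y)

eig_terms = []
for _ in range(n_datasets):
    theta_star = rng.normal()                 # true parameter: an exact posterior sample
    y = theta_star + sigma * rng.normal()     # simulated observation
    # backward-tempering pass: start at the posterior sample, anneal to the prior,
    # accumulating a thermodynamic-integration-style log-evidence estimate
    theta, log_Z = theta_star, 0.0
    for k in range(K, 0, -1):
        log_Z += (betas[k] - betas[k - 1]) * loglik(theta, y)
        # one random-walk Metropolis step targeting temperature betas[k-1]
        prop = theta + 0.8 * rng.normal()
        if np.log(rng.uniform()) < log_target(prop, y, betas[k - 1]) - log_target(theta, y, betas[k - 1]):
            theta = prop
    eig_terms.append(loglik(theta_star, y) - log_Z)

eig_hat = np.mean(eig_terms)
# compare with the analytic EIG, 0.5*log(1 + 1/sigma**2) = 0.5*log(2) ≈ 0.347
print(eig_hat)
```

Note that taking the negative log of a single-sample reverse-annealing weight overestimates the evidence in expectation (Jensen's inequality), so this sketch mildly underestimates the EIG; a finer temperature ladder and more MCMC steps per temperature shrink that bias.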
Primary Area: Probabilistic methods (e.g., variational inference, causal inference, Gaussian processes)
Submission Number: 25971