No-Regret Approximate Inference via Bayesian Optimisation

Published: 25 Jul 2021, Last Modified: 05 May 2023 (TPM 2021)
Keywords: approximate inference, Gaussian processes, Bayesian optimization, MCMC, Bayesian inference, kernel methods, RKHS
TL;DR: We provide an algorithm for approximate Bayesian inference with asymptotically vanishing theoretical upper bounds on the KL divergence with respect to the true posterior.
Abstract: We consider Bayesian inference problems where the likelihood function is either expensive to evaluate or only available via noisy estimates. This setting encompasses application scenarios involving, for example, large datasets or models whose likelihood evaluations require expensive simulations. We formulate this problem within a Bayesian optimisation framework over a space of probability distributions and derive an upper confidence bound (UCB) algorithm to propose non-parametric distribution candidates. The algorithm is designed to minimise regret, which in this setting is defined as the Kullback-Leibler divergence with respect to the true posterior. Equipped with a Gaussian process surrogate model, we show that the resulting UCB algorithm achieves asymptotically vanishing regret. The method can be easily implemented as a batch Bayesian optimisation algorithm whose point evaluations are selected via Markov chain Monte Carlo. Experimental results demonstrate the method's performance on inference problems.
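To make the UCB idea in the abstract concrete, the sketch below is a minimal, hypothetical illustration of a GP-UCB loop applied to a noisy negative-KL objective. It is not the authors' algorithm: the paper optimises over non-parametric distributions with batch evaluations selected via MCMC, whereas here the candidates are just the means of a one-dimensional Gaussian approximation, the target posterior is a known Gaussian (so the KL is available in closed form and only corrupted with artificial noise), and all names and parameter choices (`neg_kl_estimate`, the RBF length-scale, the beta schedule) are assumptions made for the example.

```python
import numpy as np

# Hypothetical toy setting (not from the paper): target posterior is N(true_mean, true_var),
# candidates are means of a Gaussian approximation with fixed variance approx_var.
# Objective: a noisy estimate of -KL(candidate || target), maximised by GP-UCB,
# so that regret corresponds to the KL divergence to the true posterior.
true_mean, true_var, approx_var = 2.0, 1.0, 1.0

def neg_kl_estimate(mu, noise=0.05, rng=np.random.default_rng()):
    # Closed-form KL(N(mu, approx_var) || N(true_mean, true_var)), plus observation noise.
    kl = 0.5 * (approx_var / true_var + (true_mean - mu) ** 2 / true_var
                - 1.0 + np.log(true_var / approx_var))
    return -kl + noise * rng.standard_normal()

def rbf(a, b, ls=1.0):
    # Squared-exponential kernel on 1-D inputs.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=0.05):
    # Standard GP regression: posterior mean and std at test points Xs.
    K = rbf(X, X) + noise ** 2 * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v ** 2, axis=0)
    return mean, np.sqrt(np.maximum(var, 1e-12))

rng = np.random.default_rng(0)
candidates = np.linspace(-5.0, 5.0, 200)    # discretised candidate means
X = np.array([candidates[0]])               # initial design point
y = np.array([neg_kl_estimate(X[0], rng=rng)])

for t in range(1, 26):
    mean, std = gp_posterior(X, y, candidates)
    beta = 2.0 * np.log(len(candidates) * t ** 2)   # a common UCB exploration schedule
    ucb = mean + np.sqrt(beta) * std
    x_next = candidates[np.argmax(ucb)]             # acquire the most optimistic candidate
    X = np.append(X, x_next)
    y = np.append(y, neg_kl_estimate(x_next, rng=rng))

print(f"best candidate mean = {X[np.argmax(y)]:.3f} (true posterior mean = {true_mean})")
```

Under these assumptions the acquired candidates concentrate around the true posterior mean as evaluations accumulate, which is the behaviour the paper's no-regret guarantee formalises in the much more general non-parametric setting.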