- Keywords: Bayesian optimisation, Gaussian processes, variational inference, kernel methods, Stein variational methods
- TL;DR: An approach that combines variational inference and Bayesian optimisation to solve complex inverse problems
- Abstract: Inverse problems are ubiquitous in the natural sciences and refer to the challenging task of inferring complex and potentially multi-modal posterior distributions over hidden parameters given a set of observations. Typically, a model of the physical process in the form of differential equations is available but leads to intractable inference over its parameters. While the forward propagation of parameters through the model simulates the evolution of the system, the inverse problem of recovering the parameters from a sequence of states generally admits no unique solution. In this work, we propose a generalisation of the Bayesian optimisation framework to approximate inference. The resulting method learns approximations to the posterior distribution by applying Stein variational gradient descent on top of estimates from a Gaussian process model. Preliminary results demonstrate the method's performance on likelihood-free inference for reinforcement learning environments.
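The core update mentioned in the abstract, Stein variational gradient descent, can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the score function of a toy two-Gaussian mixture stands in for the gradient of the Gaussian-process-based posterior estimate, and the `eps` step size, particle count, and median-heuristic bandwidth are arbitrary choices for the example.

```python
import numpy as np

def svgd_step(X, score, eps=0.1):
    """One Stein variational gradient descent update on particles X, shape (n, d)."""
    n = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]        # x_j - x_i, shape (n, n, d)
    sq = np.sum(diff ** 2, axis=-1)             # pairwise squared distances
    h = np.median(sq) / np.log(n + 1) + 1e-8    # median-heuristic bandwidth
    K = np.exp(-sq / h)                         # RBF kernel matrix, K[j, i] = k(x_j, x_i)
    gradK = -2.0 / h * diff * K[:, :, None]     # gradient of k(x_j, x_i) w.r.t. x_j
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) score(x_j) + grad_{x_j} k(x_j, x_i) ]
    phi = (K.T @ score(X) + gradK.sum(axis=0)) / n
    return X + eps * phi

def toy_score(X):
    """Gradient of the log density of a 1-D two-Gaussian mixture at -2 and +2
    (a stand-in for the gradient of the GP posterior estimate)."""
    d1 = np.exp(-0.5 * (X + 2.0) ** 2)          # unnormalised N(-2, 1)
    d2 = np.exp(-0.5 * (X - 2.0) ** 2)          # unnormalised N(+2, 1)
    return (d1 * (-2.0 - X) + d2 * (2.0 - X)) / (d1 + d2)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(50, 1))          # initial particle cloud near zero
for _ in range(500):
    X = svgd_step(X, toy_score)
```

The kernel's attractive term pulls particles toward high-density regions, while the repulsive term (`gradK`) spreads them apart, which is what lets the particle approximation cover both modes of a multi-modal posterior rather than collapsing onto one.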