Abstract: Variational Bayesian Monte Carlo (VBMC) is a sample-efficient method for approximate Bayesian inference with computationally expensive likelihoods. While VBMC’s local surrogate approach provides stable approximations, its conservative exploration strategy and limited evaluation budget can cause it to miss regions of complex posteriors. In this work, we introduce Stacking Variational Bayesian Monte Carlo (S-VBMC), a method that constructs global posterior approximations by merging independent VBMC runs through a principled and inexpensive post-processing step. Our approach leverages VBMC’s mixture posterior representation and per-component evidence estimates, requiring no additional likelihood evaluations while being naturally parallelizable. We demonstrate S-VBMC’s effectiveness on two synthetic problems designed to challenge VBMC’s exploration capabilities and two real-world applications from computational neuroscience, showing substantial improvements in posterior approximation quality across all cases.
Keywords: approximate Bayesian inference, Variational Bayesian Monte Carlo, sample-efficient inference, stacking posterior distributions, surrogate-based inference
TL;DR: In this work we extend a sample-efficient approximate inference method (Variational Bayesian Monte Carlo) to tackle complex target distributions by stacking individual variational posteriors and optimising a common evidence lower bound.
Submission Number: 27
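To make the stacking idea concrete, below is a minimal Python sketch under stated assumptions: each VBMC run is assumed to return a Gaussian-mixture posterior together with per-component estimates of the expected log joint. The run data and the field names (w, mu, cov, I) are fabricated for illustration and are not the actual PyVBMC or S-VBMC interface. The expected-log-joint term of the stacked ELBO is linear in the run-level weights, so it reuses the stored estimates with no new likelihood evaluations; only the entropy of the stacked mixture is re-estimated, here by Monte Carlo with fixed per-component samples.

import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Illustrative stand-ins for the outputs of K independent VBMC runs.
# Run k returns a Gaussian-mixture posterior with component weights w,
# means mu, covariances cov, and per-component estimates I of the
# expected log joint E[log p(D, theta)] under that component.
d = 2
runs = [
    dict(w=np.array([0.6, 0.4]),
         mu=[np.zeros(d), np.ones(d)],
         cov=[0.5 * np.eye(d), 0.3 * np.eye(d)],
         I=np.array([-1.2, -1.5])),
    dict(w=np.array([1.0]),
         mu=[np.full(d, -2.0)],
         cov=[0.4 * np.eye(d)],
         I=np.array([-1.0])),
]

# Flatten components across runs; the stacked mixture gives component j of
# run k the global weight alpha[k] * w[k][j].
comp_run = np.concatenate([[k] * len(r["w"]) for k, r in enumerate(runs)])
comp_w = np.concatenate([r["w"] for r in runs])
comp_mu = [m for r in runs for m in r["mu"]]
comp_cov = [c for r in runs for c in r["cov"]]
comp_I = np.concatenate([r["I"] for r in runs])

# Fixed Monte Carlo samples from each component: common random numbers make
# the entropy estimate a smooth, deterministic function of alpha.
n_mc = 200
samples = [rng.multivariate_normal(m, c, size=n_mc)
           for m, c in zip(comp_mu, comp_cov)]

def stacked_logq(x, log_cw):
    # Log-density of the stacked mixture at points x (n x d),
    # given log global component weights log_cw.
    lp = np.stack([multivariate_normal.logpdf(x, m, c)
                   for m, c in zip(comp_mu, comp_cov)], axis=-1)
    return logsumexp(lp + log_cw, axis=-1)

def neg_elbo(beta):
    alpha = np.exp(beta - logsumexp(beta))   # softmax over run-level weights
    cw = alpha[comp_run] * comp_w            # global component weights
    log_cw = np.log(cw)
    # Expected log joint: linear in alpha, reuses the stored per-component
    # estimates, so no new likelihood evaluations are needed.
    elogjoint = np.sum(cw * comp_I)
    # Entropy of the stacked mixture, decomposed over components and
    # estimated with the fixed per-component samples.
    entropy = -sum(cwj * np.mean(stacked_logq(s, log_cw))
                   for cwj, s in zip(cw, samples))
    return -(elogjoint + entropy)

res = minimize(neg_elbo, x0=np.zeros(len(runs)), method="Nelder-Mead")
alpha = np.exp(res.x - logsumexp(res.x))
print("stacking weights:", alpha, "stacked ELBO:", -res.fun)

Fixing the per-component samples up front keeps the entropy estimate a smooth, deterministic function of the stacking weights, which keeps the optimisation well behaved; in the paper's setting the per-component expected log-joint terms come directly from each VBMC run's stored estimates, which is what makes the merge a pure post-processing step.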