Discriminative Calibration: Check Bayesian Computation from Simulations and Flexible Classifier

Published: 21 Sept 2023 · Last Modified: 19 Dec 2023 · NeurIPS 2023 poster
Keywords: simulation-based calibration, simulation-based inference, Bayesian computation, diagnostics, classifier two-sample test, likelihood-free
TL;DR: We use a classifier-plus-statistical-features approach to diagnose Bayesian computations, applicable to MCMC, variational inference, and simulation-based inference. We provide a high-power test and a divergence estimate.
Abstract: To check the accuracy of Bayesian computations, it is common to use rank-based simulation-based calibration (SBC). However, SBC has drawbacks: The test statistic is somewhat ad hoc, interactions are difficult to examine, multiple testing is a challenge, and the resulting p-value is not a divergence metric. We propose to replace the marginal rank test with a flexible classification approach that learns test statistics from data. This measure typically has a higher statistical power than the SBC test and returns an interpretable divergence measure of miscalibration, computed from classification accuracy. This approach can be used with different data generating processes to address simulation-based inference or traditional inference methods like Markov chain Monte Carlo or variational inference. We illustrate an automated implementation using neural networks and statistically inspired features, and validate the method with numerical and real data experiments.
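The core idea described in the abstract, a classifier two-sample test on (parameter, data) pairs, can be sketched in a few lines. The following is a minimal illustrative sketch based on the abstract only, not the authors' implementation: the toy prior/likelihood, the inflated-scale "approximate posterior" standing in for the inference method being checked, and the use of scikit-learn's MLPClassifier are all assumptions made for demonstration.

```python
# Sketch (assumed, not the paper's code): classifier two-sample test for
# calibration. Class 0 = prior draw paired with its simulated data,
# class 1 = approximate-posterior draw paired with the same data.
# If the approximation is calibrated, the two joint distributions
# coincide and held-out accuracy should be near 0.5.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def prior_and_data(n):
    """Toy generator: theta ~ N(0, 1), y | theta ~ N(theta, 1)."""
    theta = rng.normal(size=n)
    y = theta + rng.normal(size=n)
    return theta, y

def approximate_posterior(y):
    """Stand-in for the inference method under test.
    The exact posterior here is N(y/2, 1/2); the inflated scale
    mimics a miscalibrated approximation."""
    return rng.normal(loc=y / 2, scale=np.sqrt(0.5) * 1.5)

n = 5000
theta, y = prior_and_data(n)
theta_tilde = approximate_posterior(y)

# Features are (theta, y) pairs; labels distinguish prior draws from
# approximate-posterior draws.
X = np.concatenate([np.column_stack([theta, y]),
                    np.column_stack([theta_tilde, y])])
labels = np.concatenate([np.zeros(n), np.ones(n)])

X_tr, X_te, z_tr, z_te = train_test_split(X, labels, test_size=0.5,
                                          random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500,
                    random_state=0).fit(X_tr, z_tr)
acc = clf.score(X_te, z_te)

# Accuracy near 0.5 is consistent with calibration; accuracy above 0.5
# indicates miscalibration and can be converted into a divergence-style
# estimate (e.g., via the held-out log loss).
print(f"held-out accuracy: {acc:.3f} (0.5 = indistinguishable)")
```

In this sketch, statistical power and the divergence estimate both come from how well the classifier separates the two labelled joints on held-out data; the paper's automated implementation additionally uses statistically inspired features and neural-network classifiers.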
Submission Number: 14735