Robust Amortized Bayesian Inference with Self-Consistency Losses on Unlabeled Data

Published: 06 Mar 2025 · Last Modified: 24 Apr 2025 · FPI-ICLR 2025 Poster · CC BY 4.0
Keywords: Bayesian models, amortized inference, robust inference, self-consistency, semi-supervised learning
TL;DR: We make amortized Bayesian inference more robust with only a small amount of unlabeled real data.
Abstract: Neural amortized Bayesian inference (ABI) can solve probabilistic inverse problems orders of magnitude faster than classical methods. However, neural ABI is not yet sufficiently robust for widespread and safe application. In particular, when performing inference on observations outside the scope of the simulated training data, for example, because of model misspecification, the posterior approximations are likely to become highly biased. Due to the poor pre-asymptotic behavior of current neural posterior estimators in the out-of-simulation regime, the resulting estimation biases cannot be fixed in acceptable time simply by simulating more training data. In this paper, we propose a semi-supervised approach that enables training not only on (labeled) simulated data generated from the model, but also on unlabeled data originating from any source, including real-world data. To achieve the latter, we exploit Bayesian self-consistency properties that can be transformed into strictly proper losses without requiring knowledge of the true parameter values, that is, without requiring data labels. Our initial experiments show remarkable improvements in the robustness of ABI on out-of-simulation data. Notably, inference remains accurate even when the observed data lies far outside both the labeled and unlabeled training distributions. If these findings generalize to other scenarios and model classes, our method could offer a significant step toward robust neural ABI.
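The self-consistency idea in the abstract can be made concrete. For any parameter value theta, Bayes' rule gives log p(y) = log p(theta) + log p(y|theta) - log p(theta|y); substituting the amortized posterior q(theta|y) for p(theta|y), the right-hand side should be constant across posterior draws for a fixed y, so any variation is a label-free error signal. Below is a minimal sketch of one such loss (the variance of the log marginal-likelihood estimates), assuming a PyTorch-style amortized posterior with hypothetical `sample` and `log_prob` methods; the strictly proper losses used in the paper may be instantiated differently.

```python
import torch

def self_consistency_loss(y, posterior, log_prior, log_lik, num_draws=8):
    """Variance-based self-consistency loss for one unlabeled observation y.

    Uses the identity log p(y) = log p(theta) + log p(y|theta) - log p(theta|y),
    which holds for every theta. If the amortized posterior q(theta|y) were
    exact, the right-hand side would be constant across posterior draws;
    its variance is therefore a training signal that needs no true theta label.

    `posterior.sample` / `posterior.log_prob`, `log_prior`, and `log_lik` are
    hypothetical interfaces assumed for this sketch; sampling must be
    reparameterized for gradients to flow into the posterior network.
    """
    theta = posterior.sample(y, num_draws)    # (num_draws, dim_theta)
    log_q = posterior.log_prob(theta, y)      # (num_draws,)
    # Per-draw estimates of log p(y) under the known prior and likelihood.
    log_marginal = log_prior(theta) + log_lik(y, theta) - log_q
    # Penalize disagreement among the marginal-likelihood estimates.
    return log_marginal.var()
```

In a semi-supervised setup of the kind the abstract describes, such a term would be added to the usual simulation-based loss on labeled batches, e.g. `total = npe_loss + lam * self_consistency_loss(y_unlabeled, ...)`, with `lam` a tuning weight (both names are illustrative).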
Submission Number: 10