Amortized Bayesian Inference with Hybrid Expert-in-the-Loop and Learnable Summary Statistics

23 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Bayesian inference, summary statistics, generative models, amortized inference, expert-in-the-loop
TL;DR: We propose an integrative approach to Bayesian inference with neural networks that combines hand-crafted (i.e., domain-expert) and end-to-end learned summary statistics.
Abstract: Amortized Bayesian inference (ABI), a subset of simulation-based inference (SBI) fueled by neural networks, has rapidly grown in popularity across diverse scientific fields. Summary statistics are an essential dimensionality reduction component of ABI workflows, and most methods to date rely either on hand-crafted (i.e., based on domain expertise) or end-to-end learned summary statistics. In this work, we explore three hybrid methods to harness the complementary strengths of both sources. The first method directly conditions a neural approximator on both summary types, thereby extending traditional end-to-end approaches in a straightforward way. The second method embeds both expert and learned summaries into a joint representation space which is explicitly optimized to encode decorrelated features. The third method employs an auxiliary generative model to learn a latent summary representation that is statistically independent of the expert summaries. We explore various aspects of our hybrid methodology across different experiments and model instances, including perfect domain expertise and imperfect artificial experts represented by pre-trained neural networks. Our empirical results suggest that hybrid representations can improve parameter estimation and model comparison in settings of scientific interest, supporting the viability of an "expert-in-the-loop" approach. The performance gains are especially promising in scenarios with low to medium simulation budgets.
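The abstract describes the hybrid conditioning idea only at a high level; the submission's actual implementation is not shown here. As a rough, non-authoritative sketch under assumed conventions (PyTorch, mean-pooled set encoder, simple concatenation), the first hybrid method and a decorrelation penalty in the spirit of the second might look like the following; all names and dimensions are illustrative, not taken from the paper:

```python
import torch
import torch.nn as nn

class HybridSummaryNet(nn.Module):
    """Learns summary statistics from raw simulated data and concatenates
    them with fixed, hand-crafted expert summaries (hybrid method 1).
    The combined vector would then condition a neural posterior approximator."""

    def __init__(self, data_dim: int, learned_dim: int):
        super().__init__()
        # Simple permutation-invariant encoder over i.i.d. observations.
        self.encoder = nn.Sequential(
            nn.Linear(data_dim, 64),
            nn.ReLU(),
            nn.Linear(64, learned_dim),
        )

    def forward(self, x: torch.Tensor, expert_stats: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_obs, data_dim); mean-pool over observations
        # for permutation invariance.
        learned = self.encoder(x).mean(dim=1)           # (batch, learned_dim)
        # Condition on both summary types by concatenation.
        return torch.cat([learned, expert_stats], dim=-1)

def decorrelation_penalty(z: torch.Tensor) -> torch.Tensor:
    """Off-diagonal covariance penalty encouraging decorrelated features
    in the joint summary space (in the spirit of hybrid method 2).
    z: (batch, summary_dim) batch of joint summary vectors."""
    z = z - z.mean(dim=0, keepdim=True)
    cov = (z.T @ z) / (z.shape[0] - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    return (off_diag ** 2).sum()
```

In such a setup, the penalty would be added to the main inference loss (e.g., a normalizing-flow negative log-likelihood) with a tunable weight, discouraging the learned summaries from merely duplicating information already captured by the expert statistics.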
Supplementary Material: zip
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8356