Scoring Rule Training for Simulation-Based Inference

TMLR Paper2060 Authors

17 Jan 2024 (modified: 17 Sept 2024) · Rejected by TMLR · CC BY 4.0
Abstract: Bayesian Simulation-Based Inference (SBI) yields posterior approximations for simulator models with intractable likelihood. Recent SBI methods employ normalizing flows, i.e., invertible neural networks parametrizing a flexible and tractable density approximation, typically trained via maximum likelihood on simulated parameter-observation pairs. In contrast, GATSBI (Ramesh et al., 2022) approximates the posterior with generative networks, which impose no constraints on the neural network architecture and thus scale better to high-dimensional and structured data, at the cost of losing access to the density. However, GATSBI relies on adversarial training, which is unstable and can lead to a learned distribution that underestimates the uncertainty. Here, we introduce Scoring Rule training for SBI (ScoRuTSBI), applying for the first time an overlooked adversarial-free training approach for generative networks to SBI. On our two high-dimensional examples, ScoRuTSBI performs better than GATSBI with shorter training time; moreover, ScoRuTSBI outperforms methods based on normalizing flows on one of the high-dimensional examples, while performing comparably on the other. Conversely, ScoRuTSBI and GATSBI are considerably outperformed by normalizing-flow methods on low-dimensional examples.
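The adversarial-free training the abstract refers to minimizes a strictly proper scoring rule over simulated parameter-observation pairs. As a hedged illustration (the paper's exact scoring rule and estimator are not stated here), the sketch below implements an unbiased Monte Carlo estimator of the energy score, a common choice for this kind of generative-network training:

```python
# Hedged sketch (assumption: the energy score is used; the paper may use a
# different scoring rule). Unbiased Monte Carlo estimator of
#   ES(P, y) = 2 E||X - y|| - E||X - X'||,  X, X' ~ P i.i.d.,
# which is minimized in expectation when P equals the distribution of y.
import numpy as np

def energy_score(samples, y):
    """Estimate the energy score of distribution P (given m samples) at y.

    Lower is better; averaging this loss over simulated (theta, x) pairs
    drives a conditional generative network toward the true posterior.
    """
    m = samples.shape[0]
    # first term: 2 * average distance from samples to the observation y
    term1 = 2.0 * np.mean(np.linalg.norm(samples - y, axis=1))
    # second term: average pairwise distance among samples, diagonal excluded
    diffs = samples[:, None, :] - samples[None, :, :]
    pairwise = np.linalg.norm(diffs, axis=-1)
    term2 = pairwise.sum() / (m * (m - 1))
    return term1 - term2

rng = np.random.default_rng(0)
y = np.zeros(3)
near = rng.normal(0.0, 0.1, size=(50, 3))  # samples concentrated near y
far = rng.normal(5.0, 0.1, size=(50, 3))   # samples far from y
print(energy_score(near, y) < energy_score(far, y))  # True
```

In a full training loop, `samples` would be draws from a conditional generative network given a simulated observation, and the score would be averaged over a minibatch of parameter-observation pairs before backpropagation.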
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~George_Papamakarios1
Submission Number: 2060