Amortizing Bayesian Posterior Inference in Tractable Likelihood Models

TMLR Paper 2815 Authors

06 Jun 2024 (modified: 21 Nov 2024) · Rejected by TMLR · CC BY 4.0
Abstract: Bayesian inference provides a natural way of incorporating prior beliefs and assigning a probability measure to the space of hypotheses. However, it is often infeasible in practice because it requires expensive iterative routines such as MCMC to approximate the posterior distribution. Not only are these methods computationally expensive, but they must also be re-run whenever new observations arrive, making them impractical or of limited use. To alleviate these difficulties, we amortize posterior parameter inference for probabilistic models using permutation-invariant architectures. While this paradigm has been explored briefly in Simulation-Based Inference (SBI), Neural Processes (NPs), and Gaussian Process (GP) kernel estimation, a more general treatment of amortized Bayesian inference in known-likelihood models remains largely unexplored. We rely on the reverse-KL-based amortized Variational Inference (VI) approach to train inference systems and compare them with forward-KL-based SBI approaches across different architectural setups. We additionally introduce a simple but strong approach to further amortize over the number of features in each observation, allowing a single system to infer parameters of variable dimensionality. Our thorough experiments demonstrate the effectiveness of our proposed approach, especially in real-world and model-misspecification settings.
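The recipe the abstract describes, a permutation-invariant set encoder trained with a reverse-KL (ELBO) objective against a known likelihood, can be made concrete. Below is a minimal, hypothetical PyTorch sketch, not the authors' code: a DeepSets-style encoder maps a dataset to a Gaussian q(theta | x), and maximizing a reparameterized ELBO for a toy Gaussian mean-estimation model (known likelihood x ~ N(theta, 1), prior theta ~ N(0, 1)) minimizes the reverse KL to the true posterior. All class and function names here are illustrative assumptions.

```python
# Hypothetical sketch of reverse-KL amortized VI with a permutation-
# invariant encoder; assumes likelihood x ~ N(theta, 1), prior N(0, 1).
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    """DeepSets-style encoder: per-point MLP, mean-pool, posterior head."""
    def __init__(self, dim=1, hidden=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.rho = nn.Linear(hidden, 2 * dim)  # mean and log-std of q(theta|x)

    def forward(self, x):                      # x: (batch, n_points, dim)
        h = self.phi(x).mean(dim=1)            # pooling => permutation invariance
        mu, log_std = self.rho(h).chunk(2, dim=-1)
        return mu, log_std

def elbo(x, mu, log_std, n_samples=8):
    """Monte Carlo ELBO; maximizing it minimizes KL(q || true posterior).
    Constant log(2*pi) terms are dropped; they do not affect gradients."""
    std = log_std.exp()
    eps = torch.randn(n_samples, *mu.shape)
    theta = mu + std * eps                     # reparameterized samples
    # Known Gaussian likelihood: sum of log N(x_i | theta, 1) over points.
    log_lik = (-0.5 * (x.unsqueeze(0) - theta.unsqueeze(2)) ** 2).sum(dim=(2, 3))
    log_prior = (-0.5 * theta ** 2).sum(-1)
    log_q = (-0.5 * eps ** 2 - log_std).sum(-1)
    return (log_lik + log_prior - log_q).mean()

encoder = SetEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for step in range(1000):
    theta_true = torch.randn(32, 1)            # sample tasks from the prior
    x = theta_true.unsqueeze(1) + torch.randn(32, 50, 1)
    mu, log_std = encoder(x)
    loss = -elbo(x, mu, log_std)
    opt.zero_grad(); loss.backward(); opt.step()
```

Mean pooling makes the encoder invariant to the order of observations, and amortization comes from training across tasks drawn from the prior: after training, a single forward pass yields an approximate posterior for a new dataset, with no per-dataset MCMC or VI refitting.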
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Alp_Kucukelbir1
Submission Number: 2815