Keywords: Variational inference, rejection sampling, implicit distribution
TL;DR: We introduce a novel method called Implicit Variational Rejection Sampling (IVRS), which integrates implicit distributions with rejection sampling to enhance the approximation of the posterior distribution.
Abstract: Variational Inference (VI) is a cornerstone technique in Bayesian machine learning, employed to approximate complex posterior distributions. However, traditional VI methods often rely on mean-field assumptions, which may inadequately capture the true posterior's complexity. To address this limitation, recent advancements have utilized neural networks to model implicit distributions, thereby offering increased flexibility. Despite this, the practical constraints of neural network architectures can still result in inaccuracies in posterior approximations. In this work, we introduce a novel method called Implicit Variational Rejection Sampling (IVRS), which integrates implicit distributions with rejection sampling to enhance the approximation of the posterior distribution. Our method employs neural networks to construct implicit proposal distributions and utilizes rejection sampling with a meticulously designed acceptance probability function. A discriminator network is employed to estimate the density ratio between the implicit proposal and the true posterior, thereby refining the approximation. We propose the Implicit Resampling Evidence Lower Bound (IR-ELBO) as a metric to characterize the quality of the resampled distribution, enabling the derivation of a tighter variational lower bound. Experimental results demonstrate that our method outperforms traditional variational inference techniques in terms of both accuracy and efficiency, leading to significant improvements in inference performance. This work not only showcases the effective combination of implicit distributions and rejection sampling but also offers a novel perspective and methodology for advancing variational inference.
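The core mechanism the abstract describes — drawing from an implicit proposal and accepting each sample with a probability driven by a density ratio — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the paper's proposal is an implicit neural sampler and its density ratio comes from a trained discriminator, whereas here both are stood in for by simple analytic functions (a wide Gaussian proposal, a standard-normal "posterior"), and the constant `M` bounding the ratio is a hypothetical choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_proposal(n):
    # Stand-in for the implicit proposal q(z); in IVRS this would be
    # samples pushed through a neural network. Here: N(0, 1.5^2).
    return rng.normal(0.0, 1.5, size=n)

def density_ratio(z):
    # Stand-in for the discriminator's density-ratio estimate
    # r(z) ~= p(z|x) / q(z); computed analytically here with a
    # standard-normal posterior p and the N(0, 1.5^2) proposal q.
    log_p = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)
    log_q = -0.5 * (z / 1.5) ** 2 - np.log(1.5) - 0.5 * np.log(2 * np.pi)
    return np.exp(log_p - log_q)

def rejection_resample(n, M=2.0):
    # Accept each proposal z with probability min(1, r(z) / M).
    # M must upper-bound the density ratio for exact resampling;
    # here r(z) peaks at r(0) = 1.5 <= M.
    accepted = []
    while len(accepted) < n:
        z = sample_proposal(n)
        u = rng.uniform(size=n)
        keep = u < np.minimum(1.0, density_ratio(z) / M)
        accepted.extend(z[keep])
    return np.asarray(accepted[:n])

samples = rejection_resample(5000)
```

Because the toy ratio is exact and bounded by `M`, the accepted samples follow the target standard normal; in the actual method the ratio is only an estimate, which is why the paper introduces the IR-ELBO to characterize the quality of the resampled distribution.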
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1505