ELBO-ing Stein Mixtures

Published: 01 Feb 2023, Last Modified: 13 Feb 2023. Submitted to ICLR 2023. Readers: Everyone
Keywords: Particle-based variational inference, alpha-indexed Stein mixtures, ELBO-within-Stein
TL;DR: Stein mixture inference can be viewed as matching the variational posterior to the target posterior under the Rényi divergence. Varying the divergence's order yields a whole class of inference methods.
Abstract: Stein variational gradient descent (SVGD) \citep{DBLP:conf/nips/LiuW16} is a particle-based technique for Bayesian inference. SVGD has recently gained popularity because it combines the ability of variational inference to handle tall data with the modeling power of non-parametric inference. Unfortunately, the number of particles required to represent a model adequately grows exponentially with the dimensionality of the model. Stein mixtures \citep{nalisnick2017variational} alleviate the exponential growth in particles by letting each particle parameterize a distribution. However, the inference algorithm proposed by \cite{nalisnick2017variational} can be numerically unstable. We show that their algorithm corresponds to inference with the R\'enyi $\alpha$-divergence for $\alpha=0$ and that using other values for $\alpha$ can lead to more stable inference. We empirically study the performance of Stein mixtures inferred with different $\alpha$ values on various real-world problems, demonstrating significantly improved results when using $\alpha=1$, which coincides with using the evidence lower bound (ELBO). We call this instance of our algorithm ELBO-within-Stein. A black-box version of the inference algorithm (for arbitrary $\alpha \in \mathbb{R}$) is available in the deep probabilistic programming language NumPyro \citep{phan2019}.
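The abstract's central relation, namely that the Rényi variational bound recovers the ELBO at $\alpha=1$, can be sketched with a simple Monte Carlo estimator. The sketch below follows the standard Rényi bound $\mathcal{L}_\alpha = \frac{1}{1-\alpha}\log \mathbb{E}_{q}\big[(p(x,z)/q(z))^{1-\alpha}\big]$; the function name and setup are illustrative, not the paper's NumPyro implementation:

```python
import numpy as np
from scipy.special import logsumexp

def renyi_bound(log_p, log_q, alpha):
    """Monte Carlo estimate of the Renyi variational bound
    L_alpha = 1/(1-alpha) * log E_q[(p(x,z)/q(z))^(1-alpha)],
    from per-sample log joint densities log_p and variational
    log densities log_q evaluated at samples z ~ q.
    As alpha -> 1 this reduces to the ELBO, E_q[log p(x,z) - log q(z)]."""
    log_w = np.asarray(log_p) - np.asarray(log_q)  # log importance weights
    if np.isclose(alpha, 1.0):
        # ELBO limit: plain average of log weights
        return float(np.mean(log_w))
    n = log_w.shape[0]
    # log-mean-exp of (1 - alpha) * log_w, computed stably
    return float((logsumexp((1.0 - alpha) * log_w) - np.log(n)) / (1.0 - alpha))
```

Sanity check: when the importance weights are constant, i.e. `log_p - log_q` equals some constant `c` for every sample, the bound equals `c` for any order `alpha`, matching the exactness of the bound when `q` equals the posterior up to normalization.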
Please Choose The Closest Area That Your Submission Falls Into: Probabilistic Methods (eg, variational inference, causal inference, Gaussian processes)