Keywords: black-box variational inference, stochastic gradient descent, Bayesian inference, variational inference, probabilistic machine learning, Bayesian machine learning, variational Bayes
TL;DR: This paper provides the first convergence proof for black-box variational inference as it is used in practice, from which practical insights are drawn.
Abstract: We provide the first convergence guarantee for black-box variational inference (BBVI) with the reparameterization gradient.
While previous investigations analyzed simplified variants of BBVI (e.g., bounded domain, bounded support, optimizing only the scale), our setup does not require any such algorithmic modifications.
Our results hold for log-smooth posterior densities, with and without strong log-concavity, and for the location-scale variational family.
Notably, our analysis reveals that certain algorithm design choices commonly employed in practice, such as nonlinear parameterizations of the scale matrix, can result in suboptimal convergence rates.
Fortunately, running BBVI with proximal stochastic gradient descent (proximal SGD) removes these limitations and thus achieves the strongest known convergence guarantees.
We empirically verify this theoretical insight by comparing proximal SGD against other standard implementations of BBVI on large-scale Bayesian inference problems.
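To make the algorithmic distinction concrete, below is a minimal sketch of one BBVI step with the reparameterization gradient, assuming a mean-field Gaussian (location-scale) family in which the scale is kept in its linear parameterization and the nonsmooth entropy term is handled by a proximal operator rather than a nonlinear (e.g., exp or softplus) transform. The placeholder target density, step size, and elementwise proximal operator for -log(sigma) are illustrative assumptions, not the paper's exact setup.

```python
# Illustrative sketch: BBVI with the reparameterization gradient and a
# proximal SGD update for a mean-field Gaussian q(z) = N(m, diag(sigma^2)).
# The ELBO splits into a smooth part E_q[log p(z)] and the entropy term
# sum(log sigma); the latter is handled via its proximal operator.
import numpy as np

def log_target_grad(z):
    # Placeholder: gradient of log p(z) for a standard Gaussian target.
    # Replace with the model's log joint density gradient in practice.
    return -z

def prox_neg_log(x, step):
    # Elementwise proximal operator of f(s) = -log(s):
    # argmin_s -step*log(s) + 0.5*(s - x)^2  =>  s = (x + sqrt(x^2 + 4*step)) / 2.
    # The positive root keeps the scale strictly positive without
    # reparameterizing it nonlinearly.
    return 0.5 * (x + np.sqrt(x ** 2 + 4.0 * step))

def bbvi_prox_sgd_step(m, sigma, step, rng):
    # Reparameterization: z = m + sigma * u with u ~ N(0, I).
    u = rng.standard_normal(m.shape)
    z = m + sigma * u
    g = log_target_grad(z)
    # Single-sample reparameterization gradients of E_q[log p(z)].
    grad_m = g
    grad_sigma = g * u
    # Gradient ascent on the smooth part, proximal step for the entropy term.
    m_new = m + step * grad_m
    sigma_new = prox_neg_log(sigma + step * grad_sigma, step)
    return m_new, sigma_new

rng = np.random.default_rng(0)
m, sigma = np.zeros(2), np.full(2, 2.0)
for _ in range(2000):
    m, sigma = bbvi_prox_sgd_step(m, sigma, 1e-2, rng)
print(m, sigma)  # should approach m = 0, sigma = 1 for the placeholder target
```

A nonlinear parameterization would instead optimize, say, log-sigma and exponentiate it, which is the design choice the abstract identifies as potentially yielding suboptimal convergence rates.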
Supplementary Material: zip
Submission Number: 759