Variational Inference via Rényi Upper-Lower Bound Optimization

Published: 01 Jan 2022, Last Modified: 19 May 2025 · ICMLA 2022 · CC BY-SA 4.0
Abstract: Variational inference provides a way to approximate probability densities. It does so by optimizing an upper or a lower bound on the likelihood of the observed data (the evidence). The classic variational inference approach suggests maximizing the Evidence Lower BOund (ELBO). Recent proposals suggest optimizing the variational Rényi (VR) bound and the χ upper bound. However, these estimates are either biased or difficult to approximate due to high variance. In this paper we introduce a new upper bound (termed VRLU) which is based on the existing variational Rényi bound. In contrast to the existing VR bound, the Monte Carlo (MC) approximation of the VRLU bound is unbiased. Furthermore, we devise a (sandwiched) upper-lower bound variational inference method (termed VRS) to jointly optimize the upper and lower bounds. We present a set of experiments designed to evaluate the new VRLU bound and to compare the VRS method with the classic VAE and VR methods over a set of digit recognition tasks. The experiments and results demonstrate the advantage of the VRLU bound and the wide applicability of the VRS method.
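To make the bounds mentioned in the abstract concrete, the sketch below contrasts a Monte Carlo ELBO estimate (unbiased) with a Monte Carlo estimate of the variational Rényi bound of Li and Turner (2016), whose log-of-sample-mean form is biased. This is an illustrative toy on a 1-D Gaussian model, not the paper's VRLU/VRS implementation (the abstract does not give those formulas); the model, variational family, and parameter values here are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (an assumption for illustration, not the paper's VAE setup):
#   prior p(z) = N(0, 1), likelihood p(x|z) = N(z, 1),
#   variational family q(z|x) = N(mu, sigma^2).
# The exact marginal is p(x) = N(x; 0, 2), so bounds can be checked.

def log_joint(x, z):
    # log p(x, z) = log N(z; 0, 1) + log N(x; z, 1)
    return (-0.5 * z**2 - 0.5 * np.log(2 * np.pi)) + \
           (-0.5 * (x - z)**2 - 0.5 * np.log(2 * np.pi))

def log_q(z, mu, sigma):
    # log N(z; mu, sigma^2)
    return -0.5 * ((z - mu) / sigma)**2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)

def _logmeanexp(a):
    # Numerically stable log(mean(exp(a))).
    m = np.max(a)
    return m + np.log(np.mean(np.exp(a - m)))

def elbo_mc(x, mu, sigma, n=10_000):
    # ELBO = E_q[log p(x,z) - log q(z|x)]; the sample mean of log-weights
    # is an unbiased estimate of this expectation.
    z = rng.normal(mu, sigma, size=n)
    return np.mean(log_joint(x, z) - log_q(z, mu, sigma))

def vr_bound_mc(x, mu, sigma, alpha, n=10_000):
    # Variational Renyi bound (Li & Turner, 2016):
    #   L_alpha = 1/(1-alpha) * log E_q[(p(x,z)/q(z|x))^(1-alpha)]
    # For alpha > 0 (alpha != 1) this lower-bounds log p(x); for alpha < 0
    # it upper-bounds it. The MC estimate below is biased, because the log
    # of a sample mean is not the log of the expectation.
    z = rng.normal(mu, sigma, size=n)
    log_w = log_joint(x, z) - log_q(z, mu, sigma)
    return _logmeanexp((1 - alpha) * log_w) / (1 - alpha)
```

With a slightly mismatched q (e.g. `mu=0.4, sigma=0.8` for `x=1.0`, where the exact posterior is N(0.5, 0.5)), `elbo_mc` falls below the exact log evidence while `vr_bound_mc` with a negative `alpha` lands above it, which is the sandwich structure the VRS method exploits.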