Abstract: The standard interpretation of importance-weighted autoencoders is that they maximize a tighter lower bound on the marginal likelihood. We give an alternate interpretation of this procedure: that it optimizes the standard variational lower bound, but using a more complex distribution. We formally derive this result, and visualize the implicit importance-weighted approximate posterior.
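As a minimal numerical sketch (not code from the paper), the two bounds being compared can be illustrated on a toy Gaussian model: the k-sample importance-weighted bound (a log of an average of importance weights) is always at least as large as the standard variational lower bound (an average of log-weights), by Jensen's inequality. The model, proposal, and sample count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def iwae_bound(log_w):
    """k-sample IWAE bound estimate: log (1/k) sum_i w_i,
    where w_i = p(x, z_i) / q(z_i | x), computed stably via log-sum-exp."""
    m = log_w.max()
    return m + np.log(np.exp(log_w - m).mean())

# Toy model: prior p(z) = N(0,1), likelihood p(x|z) = N(z,1),
# proposal q(z|x) = N(0,1). Then the true marginal is p(x) = N(0,2).
x = 1.0
k = 5000
z = rng.standard_normal(k)
log_p = -0.5 * (z**2 + (x - z)**2) - np.log(2 * np.pi)  # log p(x, z)
log_q = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)           # log q(z|x)
log_w = log_p - log_q                                   # log importance weights

elbo = log_w.mean()       # standard variational lower bound (average of logs)
iwae = iwae_bound(log_w)  # tighter k-sample bound (log of average)
# By Jensen's inequality: elbo <= iwae <= log p(x) (in expectation).
```

For this toy model the gap is visible directly: the ELBO estimate sits below the k-sample bound, which in turn approaches the true `log p(x)` as k grows, consistent with the tighter-bound interpretation the abstract reinterprets.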
TL;DR: IWAE optimizes the standard variational lower bound, but using a more complex variational distribution
Keywords: Unsupervised Learning
Conflicts: cs.toronto.edu, harvard.edu, cam.ac.uk