Revisiting Auxiliary Latent Variables in Generative Models

Published: 03 May 2019, Last Modified: 05 May 2023 · DeepGenStruct 2019
Keywords: variational inference, monte carlo objectives, VAE, IWAE, sampling, contrastive predictive coding, CPC, noise contrastive estimation, NCE, auxiliary variable variational inference, generative modeling, energy-based models
TL;DR: Monte Carlo Objectives are analyzed using auxiliary variable variational inference, yielding a new analysis of CPC and NCE as well as a new generative model.
Abstract: Extending models with auxiliary latent variables is a well-known technique to increase model expressivity. Bachman & Precup (2015), Naesseth et al. (2018), Cremer et al. (2017), and Domke & Sheldon (2018) show that Importance Weighted Autoencoders (IWAE) (Burda et al., 2015) can be viewed as extending the variational family with auxiliary latent variables. Similarly, we show that this view encompasses many of the recent developments in variational bounds (Maddison et al., 2017; Naesseth et al., 2018; Le et al., 2017; Yin & Zhou, 2018; Molchanov et al., 2018; Sobolev & Vetrov, 2018). The success of enriching the variational family with auxiliary latent variables motivates applying the same techniques to the generative model. We develop a generative model analogous to the IWAE bound and empirically show that it outperforms the recently proposed Learned Accept/Reject Sampling algorithm (Bauer & Mnih, 2018), while being substantially easier to implement. Furthermore, we show that this generative process provides new insights on ranking Noise Contrastive Estimation (Jozefowicz et al., 2016; Ma & Collins, 2018) and Contrastive Predictive Coding (Oord et al., 2018).
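
As a point of reference for the bound the abstract builds on, below is a minimal sketch of the K-sample IWAE objective (Burda et al., 2015) on a toy linear-Gaussian model. The model, the deliberately crude variational parameters mu and sigma, and all function names are hypothetical choices for this illustration, not the paper's implementation.

import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

# Toy model (a hypothetical stand-in for illustration only):
#   p(z) = N(0, 1),  p(x | z) = N(z, 1),  so the exact marginal is
#   p(x) = N(x; 0, 2). Variational family: q(z | x) = N(mu, sigma^2)
#   with deliberately crude parameters, so the ELBO is visibly loose.
mu, sigma = 0.3, 1.5
x = 1.0

def log_joint(x, z):
    # log p(x, z) = log p(z) + log p(x | z)
    return norm.logpdf(z, 0.0, 1.0) + norm.logpdf(x, z, 1.0)

def iwae_bound(x, K, n_mc=5000, rng=np.random.default_rng(0)):
    # Monte Carlo estimate of L_K = E[ log (1/K) sum_k p(x, z_k) / q(z_k | x) ]
    z = rng.normal(mu, sigma, size=(n_mc, K))             # z_k ~ q(z | x)
    log_w = log_joint(x, z) - norm.logpdf(z, mu, sigma)   # log importance weights
    return np.mean(logsumexp(log_w, axis=1) - np.log(K))  # log-mean-exp per draw

for K in (1, 5, 50):
    print(f"L_{K:>2} = {iwae_bound(x, K):.4f}")
print(f"log p(x) = {norm.logpdf(x, 0.0, np.sqrt(2.0)):.4f}")  # exact marginal

Setting K = 1 recovers the standard ELBO; because q is crude here, L_1 is noticeably looser than L_50, and the estimates tighten monotonically toward log p(x) as K grows, which is the auxiliary-variable enrichment of the variational family that the abstract refers to.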