The Information-Autoencoding Family: A Lagrangian Perspective on Latent Variable Generative Modeling
Abstract: A variety of learning objectives have recently been proposed for training generative models. We show that many of them, including InfoGAN, ALI/BiGAN, ALICE, CycleGAN, VAE, $\beta$-VAE, adversarial autoencoders, AVB, and InfoVAE, are Lagrangian duals of the same primal optimization problem. This generalization reveals the implicit trade-offs between modeling flexibility and computational requirements that these objectives make. Furthermore, we characterize the class of all objectives that can be optimized under certain computational constraints.
Finally, we show how this Lagrangian perspective can explain undesirable behavior of existing methods and suggest new, principled solutions.
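To make the duality claim concrete, here is a generic sketch of how a constrained primal problem gives rise to a family of weighted training objectives. The specific objective and constraints used in the paper are not reproduced here; the form below is a standard illustration, with the well-known observation that the $\beta$ weight in $\beta$-VAE can be read as such a multiplier.

```latex
% Generic constrained primal over model parameters \theta:
%   \min_{\theta} f(\theta) \quad \text{s.t.} \quad g_i(\theta) \le 0, \; i = 1, \dots, k
% Its Lagrangian introduces multipliers \lambda_i \ge 0:
\mathcal{L}(\theta, \lambda) \;=\; f(\theta) \;+\; \sum_{i=1}^{k} \lambda_i \, g_i(\theta)
% Fixing the \lambda_i to particular constants recovers particular
% training objectives; e.g. the KL weight \beta in the \beta-VAE loss
%   \mathbb{E}_{q}[-\log p_\theta(x \mid z)] + \beta \, D_{\mathrm{KL}}(q_\phi(z \mid x) \,\|\, p(z))
% plays the role of one such multiplier.
```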
Keywords: Generative Models, Variational Autoencoder, Generative Adversarial Network
Community Implementations: [2 code implementations on CatalyzeX](https://www.catalyzex.com/paper/the-information-autoencoding-family-a/code)