The Variational InfoMax AutoEncoder

Anonymous

Sep 25, 2019, ICLR 2020 Conference Blind Submission
  • Keywords: autoencoder, information theory, infomax, vae
  • TL;DR: We propose VIMAE, a variational autoencoder that learns both a good generative model and disentangled representations.
  • Abstract: We propose the Variational InfoMax AutoEncoder (VIMAE), an autoencoder based on a new learning principle for unsupervised models, the Capacity-Constrained InfoMax, which allows learning a disentangled representation while maintaining optimal generative performance. We define the variational capacity of an autoencoder and investigate its role. We associate the two main properties of a Variational AutoEncoder (VAE), generation quality and disentangled representation, with two different information concepts: mutual information and network capacity, respectively. We deduce that a small-capacity autoencoder tends to learn a more robust and disentangled representation than a high-capacity one. This observation is confirmed by computational experiments.
  • Code: https://drive.google.com/drive/folders/10DFddqa6THH9lavOzBVGu5iYAoOmQfKQ?usp=sharing
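The abstract associates generation quality with mutual information and disentanglement with a constrained network capacity. The paper's actual objective is not reproduced on this page; as a rough sketch of the general idea of constraining an autoencoder's variational capacity, the snippet below uses a KL-capacity penalty in the style of a capacity-controlled β-VAE. The function names, the penalty weight `gamma`, and the use of an absolute-value penalty are illustrative assumptions, not necessarily VIMAE's formulation.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior,
    # summed over the latent dimensions. This KL upper-bounds the
    # information the code can carry about the input.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def capacity_constrained_loss(recon_error, mu, logvar, capacity, gamma=10.0):
    # Illustrative capacity-constrained objective (assumption, not the
    # paper's exact loss): reconstruction error plus a penalty that keeps
    # the posterior KL close to a target capacity `capacity` (in nats).
    kl = gaussian_kl(mu, logvar)
    return recon_error + gamma * np.abs(kl - capacity)
```

With `capacity = 0` this reduces to a heavily weighted standard VAE KL term; raising `capacity` lets the code carry more information about the input, which, per the abstract's claim, trades disentanglement for representational richness.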