Isolating Latent Structure with Cross-population Variational Autoencoders

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission · Readers: Everyone
TL;DR: A variant of the VAE that models data from differing distributions, isolating the latent factors unique to each set as well as the structure they share
Abstract: A significant body of recent work has examined variational autoencoders as a powerful approach for modeling the distribution of complex data such as images and text. In this work, we present a framework for modeling multiple data sets that come from differing distributions but share some common latent structure. By incorporating architectural constraints and using a mutual-information-regularized form of the variational objective, our method successfully models differing data populations while explicitly encouraging the separation of shared and private latent factors. This enables our model to learn useful shared structure across similar tasks and to disentangle cross-population representations in a weakly supervised way. We demonstrate the utility of our method on several applications, including image denoising, sub-group discovery, and continual learning.
Keywords: variational autoencoder, latent variable model, probabilistic graphical model, machine learning, deep learning, continual learning
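The core idea in the abstract, partitioning each population's latent code into a shared part and a private part, can be illustrated with a small numpy sketch. This is not the paper's implementation: the dimensions `d_shared` and `d_private` and the single-population mini-batch are hypothetical, and the full method additionally trains decoders and a mutual-information regularizer not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar, rng):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dims
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

# Hypothetical sizes: each population's encoder emits a shared code z_s
# (structure common to all populations) and a private code z_p.
d_shared, d_private, batch = 4, 2, 8

# Stand-in encoder outputs for one mini-batch from a single population
mu = rng.standard_normal((batch, d_shared + d_private))
logvar = 0.1 * rng.standard_normal((batch, d_shared + d_private))

z = reparameterize(mu, logvar, rng)
z_shared, z_private = z[:, :d_shared], z[:, d_shared:]

# KL term of the ELBO for this mini-batch; in the full model each
# population's decoder would reconstruct its data from [z_shared, z_private]
kl = kl_to_standard_normal(mu, logvar)
```

The architectural constraint is simply that only `z_shared` is tied across populations, while each population keeps its own `z_private` and decoder; the mutual-information regularization described in the abstract would then discourage shared information from leaking into the private codes.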