Sampling from the latent space in Autoencoders: A simple way towards generative models?

TMLR Paper 960 Authors

17 Mar 2023 (modified: 18 Jun 2023). Rejected by TMLR.
Abstract: By sampling from the latent space of an autoencoder and decoding the latent samples back to the original data space, any autoencoder can be turned into a generative model. For this to work, the autoencoder's latent space must be modeled with a distribution from which samples can be drawn. Several simple possibilities (kernel density estimates, a Gaussian distribution) and more sophisticated ones (Gaussian mixture models, copula models, normalizing flows) have been proposed and tried recently. This study discusses, assesses, and compares various techniques for capturing the latent space so that an autoencoder becomes a generative model, while striving for simplicity. Among them, a new copula-based method, the Empirical Beta Copula Autoencoder, is considered. Furthermore, we provide insights into further aspects of these methods, such as targeted sampling and synthesizing new data with specific features.
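The abstract's recipe (fit a density to the latent codes of a trained autoencoder, sample from it, and decode) can be illustrated with a minimal sketch. The sketch below is not the authors' code: `encoder`, `decoder`, and `make_generator` are hypothetical names, and the choice of density estimators (scikit-learn's GaussianMixture, KernelDensity, or a single multivariate Gaussian) and their hyperparameters are illustrative assumptions standing in for the paper's candidate latent-space models.

```python
# Illustrative sketch: turning a trained autoencoder into a generative model
# by fitting a distribution to its latent codes and decoding new samples.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KernelDensity

def make_generator(encoder, decoder, data, method="gmm", n_components=10):
    # Latent codes of the training data (encoder/decoder are assumed callables).
    z = np.asarray([encoder(x) for x in data])

    if method == "gmm":
        model = GaussianMixture(n_components=n_components).fit(z)
        sample = lambda n: model.sample(n)[0]          # sample() returns (X, labels)
    elif method == "kde":
        model = KernelDensity(bandwidth=0.2).fit(z)    # bandwidth chosen arbitrarily here
        sample = lambda n: model.sample(n)
    else:
        # Simplest option: a single multivariate Gaussian fitted to the codes.
        mean, cov = z.mean(axis=0), np.cov(z, rowvar=False)
        sample = lambda n: np.random.multivariate_normal(mean, cov, size=n)

    def generate(n):
        z_new = sample(n)                              # draw latent samples from the fitted density
        return np.asarray([decoder(zi) for zi in z_new])  # map them back to data space

    return generate
```

Under these assumptions, swapping the latent-space model only changes the `sample` branch; the encoder and decoder stay untouched, which is the simplicity the paper argues for.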
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We incorporated most of the reviewers' suggestions in the current version. However, we are still working on some of the suggestions, which will be incorporated in a later version of the paper.
Assigned Action Editor: ~Ole_Winther1
Submission Number: 960