Keywords: Generative Networks, Adversarial Autoencoders
TL;DR: We propose a major improvement to adversarial autoencoders that produces more accurate output distributions, along with a rigorous way to measure this accuracy.
Abstract: In addition to perceptual quality, the usefulness of a generative model depends on how closely the generated distribution matches the training distribution. Previous efforts in adversarial generative models have focused on reducing "mode collapse", but this term, roughly meaning the inability to generate certain parts of the data distribution, is not clearly defined. Moreover, being able to generate every image in the data distribution does not imply reproducing the correct distribution, which additionally requires that each image occur at the same frequency in the generated images as in the training data. Due to the lack of a precise definition and measurement, it has been difficult to evaluate how successful these efforts are at producing the correct distribution. In this work we propose an autoencoder-based adversarial training framework which ensures that the density of the encoder's aggregate output distribution closely matches the prior latent distribution, which in turn ensures that the distribution of images generated from randomly sampled latent codes closely matches the training data. To evaluate our method, we introduce the 3DShapeHD dataset, whose complexity goes beyond simplistic toy datasets but whose generating process and feature distribution are exactly known, enabling precise measurements. Using the reduced chi-square statistic, we show a significant improvement in the accuracy of the distribution of generated samples. The results also demonstrate that the enhanced diversity of our model improves its ability to generate uncommon features in real-world datasets.
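Since the abstract evaluates distribution accuracy with the reduced chi-square statistic, the following is a minimal illustrative sketch of how generated-sample frequencies for one known feature might be compared against the ground-truth distribution. The function name, bin counts, and example numbers are hypothetical and are not the paper's actual evaluation code.

```python
import numpy as np

def reduced_chi_square(observed_counts, expected_probs, n_samples, n_fit_params=0):
    """Reduced chi-square between binned generated-sample counts and the
    known ground-truth bin probabilities (illustrative sketch only)."""
    observed = np.asarray(observed_counts, dtype=float)
    expected = np.asarray(expected_probs, dtype=float) * n_samples
    dof = len(observed) - 1 - n_fit_params          # degrees of freedom
    chi2 = np.sum((observed - expected) ** 2 / expected)
    return chi2 / dof

# Hypothetical example: a discrete feature (e.g., object count) that is
# uniform over four values in the training data.
expected_probs = np.array([0.25, 0.25, 0.25, 0.25])
generated_counts = np.array([260, 240, 255, 245])   # from 1000 generated samples
print(reduced_chi_square(generated_counts, expected_probs, n_samples=1000))
# A value near 1 indicates the generated frequencies are statistically
# consistent with the ground-truth distribution.
```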
Supplementary Material: zip
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 180