Intriguing Properties of Modern GANs

TMLR Paper 2185 Authors

12 Feb 2024 (modified: 17 Jun 2024) Rejected by TMLR
Abstract: Modern GANs achieve remarkable performance in generating realistic and diverse samples. This has led many to believe that "GANs capture the training data manifold". In this work, we show that this interpretation is wrong. We empirically show that the manifold learned by modern GANs does not fit the training distribution: specifically, the manifold does not pass through the training examples and passes closer to out-of-distribution images than to in-distribution images. We also investigate the distribution over images implied by the prior over the latent codes and study whether modern GANs learn a density that approximates the training distribution. Surprisingly, we find that the learned density is very far from the data distribution and that GANs tend to assign higher density to out-of-distribution images. Finally, we demonstrate that the images used to train modern GANs are often not part of the typical set described by the GANs' distribution.
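The abstract's claim that the manifold "does not pass through the training examples" suggests measuring the distance from an image to the generator's range. The sketch below shows one common way to do this (not necessarily the authors' exact protocol): optimize a latent code so the generator best reconstructs the image, then report the residual error. The generator `G`, the latent dimension, the L2 objective, and the hyperparameters are all illustrative assumptions.

```python
import torch

def distance_to_manifold(G, x, latent_dim=512, steps=1000, lr=0.01):
    """Approximate the distance from image x to the manifold of generator G.

    Optimizes a latent code z so that G(z) reconstructs x; the final
    reconstruction error is a proxy for how close the manifold passes to x.
    """
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    loss = None
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(G(z), x)
        loss.backward()
        opt.step()
    return loss.item()

# Usage (hypothetical): compare a training image against an
# out-of-distribution image. The paper's finding is that the
# OOD distance can be *smaller* than the in-distribution one.
# d_train = distance_to_manifold(G, x_train)
# d_ood   = distance_to_manifold(G, x_ood)
```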
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: A new version based on the reviewers' comments, including a more fleshed-out section on related work, a better visualization of the typicality results, and more experimental details.
Assigned Action Editor: ~Zhihui_Zhu1
Submission Number: 2185