Understanding the (Un)interpretability of Natural Image Distributions Using Generative Models

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Withdrawn Submission · Readers: Everyone
TL;DR: We examine the relationship between probability density values and image content in non-invertible GANs.
Abstract: Probability density estimation is a classical and well-studied problem, but standard density estimation methods have historically lacked the power to model complex, high-dimensional image distributions. More recent generative models leverage the power of neural networks to implicitly learn and represent probability models over complex images. We describe methods to extract explicit probability density estimates from GANs, and explore the properties of these image density functions. We perform sanity-check experiments to provide evidence that these probabilities are reasonable. However, we also show that density functions of natural images are difficult to interpret and thus of limited use. We study reasons for this lack of interpretability, and suggest that better interpretability can be achieved by performing density estimation on latent representations of images.
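The abstract does not spell out the extraction procedure, so the following is only a minimal illustrative sketch, not the paper's method: one generic way to obtain an explicit density from a non-invertible generator is a Monte Carlo Parzen-window estimate of the pushforward distribution, p(x) ≈ E_{z~p(z)}[N(x; G(z), σ²I)]. The names generator and latent_dim and the bandwidth sigma below are assumptions introduced for illustration.

import math
import torch

@torch.no_grad()
def log_density_estimate(x, generator, latent_dim, n_samples=10_000, sigma=0.1):
    # Hypothetical sketch (not the authors' method): estimate log p(x) for a
    # flattened image x under the GAN pushforward by averaging Gaussian
    # kernels centred on generated samples, with log-sum-exp for stability.
    z = torch.randn(n_samples, latent_dim)          # z ~ N(0, I)
    samples = generator(z).view(n_samples, -1)      # (n, d) generated images
    x = x.view(1, -1)                               # (1, d)
    d = x.shape[1]
    sq_dist = ((samples - x) ** 2).sum(dim=1)       # squared distances, shape (n,)
    log_kernel = -0.5 * sq_dist / sigma**2 - 0.5 * d * math.log(2 * math.pi * sigma**2)
    # log of the Monte Carlo average: log (1/n) * sum_i exp(log_kernel_i)
    return torch.logsumexp(log_kernel, dim=0) - math.log(n_samples)

With a small sigma this estimate is high-variance in high dimensions, which is consistent with the abstract's point that explicit image densities are hard to work with; the suggested alternative of estimating densities on latent representations sidesteps this by working in a lower-dimensional space.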
Keywords: GANs, Generative Models, Density Estimation