Analyzing GANs with Generative Scattering Networks

Anonymous

Nov 07, 2017 (modified: Nov 07, 2017) ICLR 2018 Conference Blind Submission
  • Abstract: GANs provide spectacular image generation from Gaussian white noise and interpolate between images through deformations, with little mathematical justification. We show that such generators do not need to learn a discriminator or an embedding space. Deformation and Gaussianization properties provide strong constraints that specify the embedding operator, which is implemented with a multiscale scattering transform. The resulting generator is computed by inverting this embedding operator with a deep convolutional network that implements a sparse inversion. This provides a statistical framework for understanding how generators are estimated. The resulting generative scattering networks produce images of good quality at a fraction of the computational cost and can learn from much smaller training sets. They define new classes of high-dimensional stochastic models for non-stationary and non-Gaussian processes. (An illustrative code sketch of this inversion scheme follows the keyword list below.)
  • TL;DR: We introduce Generative Convolutional Networks that do not require learning a discriminator and that compute the generator by inverting an embedding defined by a wavelet scattering transform.
  • Keywords: Unsupervised Learning, Inverse Problems, Convolutional Networks, Generative Models, Scattering Transform
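
The following is a minimal sketch of the inversion scheme described in the abstract, not the authors' implementation. It assumes PyTorch and the kymatio library for the 2D scattering transform; the grayscale 32x32 image size, decoder architecture, L1 reconstruction loss, and per-coefficient Gaussian sampling model are all illustrative assumptions.

```python
# Sketch: train a convolutional generator to invert a fixed scattering
# embedding (no discriminator), then sample by Gaussianizing the codes.
import torch
import torch.nn as nn
from kymatio.torch import Scattering2D

H = W = 32          # assumed image size (grayscale)
J = 3               # scattering scale: embedding has spatial size H / 2**J = 4

# Fixed (non-learned) embedding: a multiscale scattering transform.
scattering = Scattering2D(J=J, shape=(H, W))

# Probe the embedding with a dummy image to get the number of coefficients.
with torch.no_grad():
    n_coef = scattering(torch.zeros(1, 1, H, W)).shape[2]

# Convolutional generator that maps scattering codes back to images.
decoder = nn.Sequential(
    nn.ConvTranspose2d(n_coef, 128, 4, stride=2, padding=1),  # 4x4 -> 8x8
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),      # 8x8 -> 16x16
    nn.ReLU(inplace=True),
    nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),        # 16x16 -> 32x32
    nn.Sigmoid(),
)

opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)

def train_step(images):
    """One inversion step: reconstruct x from its scattering code S(x)."""
    z = scattering(images).squeeze(1)              # (B, n_coef, 4, 4)
    recon = decoder(z)
    loss = torch.mean(torch.abs(recon - images))   # L1 loss as an assumed
    opt.zero_grad()                                # stand-in for sparse inversion
    loss.backward()
    opt.step()
    return loss.item()

@torch.no_grad()
def sample(train_images, n_samples=16):
    """Illustrative sampling: fit a diagonal Gaussian to the scattering codes
    of the training set, draw new codes from it, and decode them."""
    codes = scattering(train_images).squeeze(1)
    mu, sigma = codes.mean(0), codes.std(0)
    z = mu + sigma * torch.randn(n_samples, *mu.shape)
    return decoder(z)
```

Under these assumptions, training reduces to a plain reconstruction objective over a fixed embedding, which is what removes the need for a learned discriminator or a learned embedding space; generation then amounts to sampling Gaussian codes in the scattering domain and decoding them.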
