Generative networks as inverse problems with Scattering transforms

15 Feb 2018 (modified: 07 Apr 2024) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Abstract: Generative Adversarial Nets (GANs) and Variational Auto-Encoders (VAEs) provide impressive image generation from Gaussian white noise, but the underlying mathematics is not well understood. We compute deep convolutional network generators by inverting a fixed embedding operator; consequently, they do not need to be optimized against a discriminator or an encoder. The embedding is Lipschitz-continuous to deformations, so the generators transform linear interpolations between input white-noise vectors into deformations between output images. This embedding is computed with a wavelet Scattering transform. Numerical experiments demonstrate that the resulting Scattering generators have properties similar to those of GANs and VAEs, without learning a discriminative network or an encoder.
TL;DR: We introduce generative networks that do not need to be trained with a discriminator or an encoder; they are obtained by inverting a fixed embedding operator defined by a wavelet Scattering transform.
Keywords: Unsupervised Learning, Inverse Problems, Convolutional Networks, Generative Models, Scattering Transform
Code: [tomas-angles/generative-scattering-networks](https://github.com/tomas-angles/generative-scattering-networks)
Data: [CelebA](https://paperswithcode.com/dataset/celeba), [LSUN](https://paperswithcode.com/dataset/lsun)
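
The sketch below illustrates the pipeline the abstract describes, assuming the `kymatio` package for the Scattering transform; it is not the authors' implementation (see the linked repository for that), and the small decoder architecture is a placeholder of my own. Images are embedded with a fixed, non-learned wavelet Scattering transform, and a convolutional generator is trained to invert the embedding with an L2 reconstruction loss, so no discriminator or encoder is involved.

```python
# Minimal sketch: invert a fixed Scattering embedding with a learned decoder.
# Assumes kymatio is installed; the Generator below is a toy stand-in for
# the network described in the paper.
import torch
import torch.nn as nn
from kymatio.torch import Scattering2D

J, H, W = 3, 32, 32                            # scattering scale, image size
scattering = Scattering2D(J=J, shape=(H, W))   # fixed (non-learned) embedding


class Generator(nn.Module):
    """Toy decoder mapping flattened scattering coefficients to images."""

    def __init__(self, in_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, 128 * 4 * 4)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, s):
        h = self.fc(s).view(-1, 128, 4, 4)
        return self.net(h)                     # upsample 4x4 -> 32x32


x = torch.randn(8, 3, H, W)                    # stand-in for an image batch
s = scattering(x).flatten(1)                   # embed; operator stays fixed
G = Generator(s.shape[1])
loss = nn.functional.mse_loss(G(s), x)         # L2 inversion loss, no GAN
loss.backward()
```

Once such a generator is trained, the abstract's setup produces new images by feeding Gaussian white-noise vectors (in the embedding's domain) to the generator in place of `s`.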