Demystifying MMD GANs

Anonymous

Nov 03, 2017 (modified: Nov 03, 2017) ICLR 2018 Conference Blind Submission readers: everyone
  • Abstract: We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we show that the natural estimator for maximum mean discrepancies yields unbiased gradient estimates, which is essential in providing a good signal to the generator (see the estimator sketch after this list). We discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance.
  • TL;DR: MMD GANs have theoretical advantages for SGD and work with smaller critic networks than WGAN-GPs.
  • Keywords: gans, mmd, ipms, wgan, gradient penalty, unbiased gradients
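
For concreteness, here is a minimal NumPy sketch of the unbiased U-statistic estimator of the squared MMD that the abstract refers to. The Gaussian (RBF) kernel and the `sigma` bandwidth are illustrative assumptions, not the paper's specific kernel choice; the point is that dropping the diagonal terms from the within-sample sums makes the estimator unbiased.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between the rows of a and b.
    # (Illustrative kernel choice; the paper also discusses other kernels.)
    sq_dists = (np.sum(a**2, axis=1)[:, None]
                + np.sum(b**2, axis=1)[None, :]
                - 2.0 * a @ b.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2_unbiased(x, y, sigma=1.0):
    # Unbiased U-statistic estimator of squared MMD between samples x and y:
    #   1/(m(m-1)) sum_{i != j} k(x_i, x_j)
    # + 1/(n(n-1)) sum_{i != j} k(y_i, y_j)
    # - 2/(mn)    sum_{i, j}   k(x_i, y_j)
    m, n = len(x), len(y)
    k_xx = gaussian_kernel(x, x, sigma)
    k_yy = gaussian_kernel(y, y, sigma)
    k_xy = gaussian_kernel(x, y, sigma)
    # Excluding the diagonal (i == j) terms is what removes the bias.
    term_xx = (k_xx.sum() - np.trace(k_xx)) / (m * (m - 1))
    term_yy = (k_yy.sum() - np.trace(k_yy)) / (n * (n - 1))
    term_xy = 2.0 * k_xy.mean()
    return term_xx + term_yy - term_xy

# Example: two samples from slightly different Gaussians.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(500, 2))
y = rng.normal(0.5, 1.0, size=(500, 2))
print(mmd2_unbiased(x, y))  # positive in expectation when distributions differ
```

Because this estimator is an unbiased function of the samples, its gradient with respect to generator parameters (when y is produced by a differentiable generator) is also unbiased, which is the property the abstract highlights as essential for SGD training.
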
