Consistency Regularization for Generative Adversarial Networks

Anonymous

Sep 25, 2019 · ICLR 2020 Conference Blind Submission
  • Abstract: Generative Adversarial Networks are plagued by training instability, despite considerable research effort. Progress has been made on this topic, but many of the proposed interventions are complicated, computationally expensive, or both. In this work, we propose a simple and effective training stabilizer based on the notion of Consistency Regularization, a popular technique in the semi-supervised learning literature. In particular, we augment the data passing into the GAN discriminator and penalize the sensitivity of the discriminator's final layer to these augmentations. This regularization reduces memorization of the training data and demonstrably increases the robustness of the discriminator to input perturbations. We conduct a series of ablation studies demonstrating that consistency regularization is compatible with various GAN architectures and loss functions. Moreover, this simple regularization consistently and significantly improves these GAN variants. Finally, we show that applying consistency regularization to GANs improves the state-of-the-art FID score from 14.73 to 11.67 on the CIFAR-10 dataset. (A minimal code sketch of the regularizer is given below, after the keywords.)
  • Keywords: Generative Adversarial Networks, Consistency Regularization, GAN
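The abstract describes the regularizer concretely: augment a batch of real images, run both the original and the augmented batch through the discriminator, and penalize the squared difference between the two outputs. Below is a minimal PyTorch sketch of that idea, under stated assumptions: the discriminator D, the augment function, and the weight lambda_cr are illustrative placeholders, not the authors' exact implementation.

    import torch
    import torch.nn.functional as F

    def augment(x):
        # Illustrative augmentation (an assumption, not necessarily the
        # paper's choice): random horizontal flip plus a small width shift.
        if torch.rand(()) < 0.5:
            x = torch.flip(x, dims=[3])               # flip along width
        shift = int(torch.randint(-2, 3, ()).item())  # shift in [-2, 2]
        return torch.roll(x, shifts=shift, dims=3)

    def consistency_loss(D, real_images, lambda_cr=10.0):
        # Penalize the sensitivity of the discriminator's final-layer
        # output to input augmentations: lambda * ||D(x) - D(T(x))||^2.
        d_real = D(real_images)
        d_aug = D(augment(real_images))
        return lambda_cr * F.mse_loss(d_aug, d_real)

    # Example with a toy discriminator on CIFAR-10-sized inputs.
    D = torch.nn.Sequential(
        torch.nn.Conv2d(3, 8, 3, padding=1),
        torch.nn.ReLU(),
        torch.nn.Flatten(),
        torch.nn.Linear(8 * 32 * 32, 1),
    )
    x = torch.randn(4, 3, 32, 32)
    loss = consistency_loss(D, x)

In training, a term of this form is simply added to the usual discriminator loss; the generator update is unchanged.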