On Self Modulation for Generative Adversarial Networks

Published: 21 Dec 2018, Last Modified: 29 Sept 2024 · ICLR 2019 Conference Blind Submission · Readers: Everyone
Abstract: Training Generative Adversarial Networks (GANs) is notoriously challenging. We propose and study an architectural modification, self-modulation, which improves GAN performance across different data sets, architectures, losses, regularizers, and hyperparameter settings. Intuitively, self-modulation allows the intermediate feature maps of a generator to change as a function of the input noise vector. While reminiscent of other conditioning techniques, it requires no labeled data. In a large-scale empirical study we observe a relative decrease of 5%-35% in FID. Furthermore, all else being equal, adding this modification to the generator leads to improved performance in 124/144 (86%) of the studied settings. Self-modulation is a simple architectural change that requires no additional parameter tuning, which suggests that it can be applied readily to any GAN.
Keywords: unsupervised learning, generative adversarial networks, deep generative modelling
TL;DR: A simple GAN modification that improves performance across many losses, architectures, regularization schemes, and datasets.
Code: [google/compare_gan](https://github.com/google/compare_gan) + [1 community implementation](https://paperswithcode.com/paper/?openreview=Hkl5aoR5tm)
Data: [CelebA-HQ](https://paperswithcode.com/dataset/celeba-hq), [LSUN](https://paperswithcode.com/dataset/lsun)
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/on-self-modulation-for-generative-adversarial/code)
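
The abstract describes self-modulation as letting the generator's intermediate feature maps change as a function of the input noise vector, without requiring labels. Below is a minimal PyTorch sketch of one natural instantiation of that idea: a batch-normalization layer whose per-channel scale and shift are predicted from z by small MLPs. The class name, hidden size, and MLP depth here are illustrative assumptions, not the authors' exact configuration; see the linked compare_gan repository for the official implementation.

```python
# Hedged sketch of self-modulation: normalize a feature map, then modulate it with
# a scale and shift computed from the generator's input noise vector z.
import torch
import torch.nn as nn


class SelfModulatedBatchNorm2d(nn.Module):
    """Batch norm whose affine parameters are functions of the noise vector z (illustrative)."""

    def __init__(self, num_features: int, z_dim: int, hidden_dim: int = 32):
        super().__init__()
        # Normalize without learned affine parameters; the modulation supplies them instead.
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        # Small MLPs map z to a per-channel scale (gamma) and shift (beta).
        self.gamma_mlp = nn.Sequential(
            nn.Linear(z_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, num_features)
        )
        self.beta_mlp = nn.Sequential(
            nn.Linear(z_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, num_features)
        )

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        h = self.bn(x)
        # Predict channel-wise modulation from z; offsetting gamma by 1 makes the
        # identity transform the default at initialization.
        gamma = 1.0 + self.gamma_mlp(z).unsqueeze(-1).unsqueeze(-1)
        beta = self.beta_mlp(z).unsqueeze(-1).unsqueeze(-1)
        return gamma * h + beta


# Usage: modulate an intermediate generator feature map with the same z that produced it.
if __name__ == "__main__":
    z = torch.randn(8, 128)              # input noise vectors
    feats = torch.randn(8, 64, 16, 16)   # intermediate generator feature maps
    layer = SelfModulatedBatchNorm2d(num_features=64, z_dim=128)
    out = layer(feats, z)
    print(out.shape)  # torch.Size([8, 64, 16, 16])
```

Because the modulation is conditioned only on z, this drop-in change needs no labeled data, in contrast to label-conditional normalization schemes.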
