Measuring GAN Training in Real Time

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Withdrawn Submission · Readers: Everyone
Keywords: Generative adversarial networks, Evaluation
Abstract: Generative Adversarial Networks (GANs) are popular generative models of images. Although researchers have proposed many GAN variants for different applications, evaluating and comparing GANs remains challenging because GANs exhibit many failure modes, such as low visual quality and mode collapse. To alleviate this issue, we propose a novel framework that simultaneously evaluates the training stability (S), visual quality (Q), and mode diversity (D) of a GAN. SQD requires only a moderate number of samples, allowing real-time monitoring of GAN training dynamics. We showcase the utility of the SQD framework on prevalent GANs and discover that gradient penalty regularization (Gulrajani et al., 2017) significantly improves GAN performance. We also compare gradient penalty regularization with other regularization methods and find that enforcing the 1-Lipschitz condition on the discriminator network stabilizes GAN training.
One-sentence Summary: We propose a new evaluation framework for GANs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Reviewed Version (pdf): https://openreview.net/references/pdf?id=O4_qH79rWl
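
For context, the gradient penalty referenced in the abstract is the regularizer of Gulrajani et al. (2017), which penalizes the discriminator's gradient norm at points interpolated between real and fake samples, softly enforcing the 1-Lipschitz condition. Below is a minimal PyTorch sketch of that term; the function name and the assumption of image-shaped (N, C, H, W) tensors are illustrative and not taken from the submission's code.

```python
import torch

def gradient_penalty(discriminator, real, fake, device="cpu"):
    """Gradient penalty (Gulrajani et al., 2017): penalize deviations of the
    discriminator's gradient norm from 1 at random interpolates between
    real and fake samples. Illustrative sketch, not the paper's code."""
    batch_size = real.size(0)
    # One random interpolation coefficient per sample (assumes NCHW images)
    eps = torch.rand(batch_size, 1, 1, 1, device=device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = discriminator(interp)
    # Gradients of discriminator scores w.r.t. the interpolated inputs;
    # create_graph=True keeps the penalty differentiable for the D update
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True)[0]
    grad_norm = grads.view(batch_size, -1).norm(2, dim=1)
    return ((grad_norm - 1.0) ** 2).mean()
```

In the original WGAN-GP formulation this term is added to the discriminator loss with a weight of λ = 10; the abstract's finding is that regularizers of this kind, which push the discriminator toward 1-Lipschitz behavior, stabilize GAN training.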