On the Discrimination-Generalization Tradeoff in GANs

15 Feb 2018 (modified: 23 Feb 2018) · ICLR 2018 Conference Blind Submission
Abstract: Generative adversarial training can be generally understood as minimizing a moment-matching loss defined by a set of discriminator functions, typically neural networks. The discriminator set should be large enough to uniquely identify the true distribution (discriminative), yet small enough to generalize beyond memorizing samples (generalizable). In this paper, we show that a discriminator set is guaranteed to be discriminative whenever its linear span is dense in the set of bounded continuous functions. This is a very mild condition, satisfied even by neural networks with a single neuron. Further, we develop generalization bounds between the learned distribution and the true distribution under different evaluation metrics. When evaluated with the neural distance, our bounds show that generalization is guaranteed as long as the discriminator set is small enough, regardless of the size of the generator or hypothesis set. When evaluated with the KL divergence, our bound provides an explanation for the counter-intuitive behavior of test likelihood in GAN training. Our analysis sheds light on the practical performance of GANs.
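For concreteness, one way to write the moment-matching objective referred to above is as an integral probability metric over the discriminator set; the notation below is illustrative and not taken from the paper. The discrepancy between two distributions $\mu$ and $\nu$ induced by a discriminator set $\mathcal{F}$ is

$$ d_{\mathcal{F}}(\mu, \nu) \;=\; \sup_{f \in \mathcal{F}} \; \mathbb{E}_{x \sim \mu}[f(x)] - \mathbb{E}_{x \sim \nu}[f(x)], $$

and adversarial training can be viewed as minimizing $d_{\mathcal{F}}(\hat{\mu}_n, \nu_\theta)$ over the generator parameters $\theta$, where $\hat{\mu}_n$ is the empirical data distribution and $\nu_\theta$ is the generator distribution. When $\mathcal{F}$ is a class of neural networks, $d_{\mathcal{F}}$ is the neural distance mentioned in the abstract.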
TL;DR: This paper studies the discrimination and generalization properties of GANs when the discriminator set is a restricted function class such as neural networks.
Keywords: generative adversarial network, discrimination, generalization