Stackelberg GAN: Towards Provable Minimax Equilibrium via Multi-Generator Architectures

27 Sept 2018 (modified: 05 May 2023) · ICLR 2019 Conference Blind Submission
Abstract: We study the problem of alleviating the instability issue in the GAN training procedure via new architecture design. The discrepancy between the minimax and maximin objective values can serve as a proxy for the difficulties that alternating gradient descent encounters in the optimization of GANs. In this work, we give new results on the benefits of multi-generator GAN architectures. We show that the minimax gap shrinks to $\epsilon$ once the number of generators reaches $O(1/\epsilon)$, improving over the best-known result of $O(1/\epsilon^2)$. At the core of our techniques is a novel application of the Shapley-Folkman lemma to the generic minimax problem; in the literature, this technique was known to work only when the objective function is the Lagrangian of a constrained optimization problem. Our proposed Stackelberg GAN performs well experimentally on both synthetic and real-world datasets, improving the Fréchet Inception Distance by 14.61% over previous multi-generator GANs on benchmark datasets.
Keywords: generative adversarial nets, minimax duality gap, equilibrium
TL;DR: We study the problem of alleviating the instability issue in the GAN training procedure via new architecture design, with theoretical guarantees.
Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10), [MNIST](https://paperswithcode.com/dataset/mnist)
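
For concreteness, one natural reading of the multi-generator architecture the abstract describes is the standard GAN loss averaged over $I$ generators $G_1, \dots, G_I$ that share a single discriminator $D$:

$$\min_{G_1,\dots,G_I} \; \max_{D} \; \frac{1}{I} \sum_{i=1}^{I} \Big[ \mathbb{E}_{x \sim P_{\mathrm{data}}} \log D(x) + \mathbb{E}_{z \sim P_z} \log\big(1 - D(G_i(z))\big) \Big].$$

Below is a minimal sketch of alternating gradient updates on this objective, assuming a PyTorch implementation. The network sizes, the toy `sample_real` data source, and all hyperparameters are illustrative assumptions, not the paper's experimental configuration.

```python
# Minimal sketch of a Stackelberg-style multi-generator GAN update (assumed
# PyTorch implementation; sizes, data, and hyperparameters are illustrative).
import torch
import torch.nn as nn

NUM_GEN, Z_DIM, X_DIM, BATCH = 4, 16, 2, 64  # assumed toy dimensions

# I small generators sharing one discriminator.
generators = nn.ModuleList(
    nn.Sequential(nn.Linear(Z_DIM, 64), nn.ReLU(), nn.Linear(64, X_DIM))
    for _ in range(NUM_GEN)
)
discriminator = nn.Sequential(nn.Linear(X_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(generators.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def sample_real(n):
    # Hypothetical stand-in for a real data loader (a shifted Gaussian).
    return torch.randn(n, X_DIM) + 2.0

for step in range(1000):
    real = sample_real(BATCH)
    fakes = [g(torch.randn(BATCH, Z_DIM)) for g in generators]
    ones, zeros = torch.ones(BATCH, 1), torch.zeros(BATCH, 1)

    # Discriminator ascent: real samples vs. fake samples, with the fake
    # term averaged over all generators.
    d_loss = bce(discriminator(real), ones)
    d_loss = d_loss + sum(
        bce(discriminator(f.detach()), zeros) for f in fakes
    ) / NUM_GEN
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator descent, again averaged over the generators. The
    # non-saturating loss (maximize log D(G(z))) is used here as the
    # standard practical variant of the minimax objective.
    g_loss = sum(bce(discriminator(f), ones) for f in fakes) / NUM_GEN
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Averaging the loss over generators keeps the discriminator's task unchanged while letting each generator cover a different part of the data distribution, which is the mechanism behind the multi-generator duality-gap result stated in the abstract.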