Hidden Convexity of Wasserstein GANs: Interpretable Generative Models with Closed-Form Solutions

29 Sept 2021, 00:35 (edited 15 Mar 2022) · ICLR 2022 Poster
  • Keywords: Wasserstein GAN, convex-concave game, saddle points, generative models, quadratic, polynomial activation, convex duality
  • Abstract: Generative Adversarial Networks (GANs) are commonly used for modeling complex distributions of data. Both the generators and discriminators of GANs are often modeled by neural networks, posing an optimization problem that is non-transparent: non-convex in the generator and non-concave in the discriminator. Such networks are often heuristically optimized with gradient descent-ascent (GDA), but it is unclear whether the optimization problem contains any saddle points, or whether heuristic methods can find them in practice. In this work, we analyze the training of Wasserstein GANs with two-layer neural network discriminators through the lens of convex duality, and for a variety of generators expose the conditions under which Wasserstein GANs can be solved exactly with convex optimization approaches, or can be represented as convex-concave games. Using this convex duality interpretation, we further demonstrate the impact of different activation functions of the discriminator. Our observations are verified with numerical results demonstrating the power of the convex interpretation, with an application in progressive training of convex architectures corresponding to linear generators and quadratic-activation discriminators for CelebA image generation. The code for our experiments is available at https://github.com/ardasahiner/ProCoGAN.
  • One-sentence Summary: We demonstrate that Wasserstein GANs with two-layer discriminators and a variety of generators are equivalent to convex optimization problems or convex-concave games, allowing for global optimization in polynomial time and improved interpretability.
  • Supplementary Material: zip
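A minimal sketch of the flavor of result the abstract describes (this is an illustration, not the authors' code): with a linear generator G(z) = Az, z ~ N(0, I_k), and a quadratic-activation discriminator, the Wasserstein GAN objective reduces to matching second moments of the data, so an optimal generator can be recovered in closed form from the top-k eigenpairs of the empirical covariance, a PCA-like solution. All variable names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 10, 3, 5000

# Synthetic data whose covariance is (at most) rank k, so the closed-form
# rank-k generator can match it exactly.
X = rng.standard_normal((n, k)) @ rng.standard_normal((k, d))
Sigma = X.T @ X / n                       # empirical covariance (d x d)

# Closed-form "generator" weights: top-k eigenvectors of Sigma, scaled by
# the square roots of their eigenvalues. Then Cov[A z] = A A^T.
eigvals, eigvecs = np.linalg.eigh(Sigma)  # eigenvalues in ascending order
A = eigvecs[:, -k:] * np.sqrt(eigvals[-k:])

# A A^T is the best rank-k approximation of Sigma, i.e. the second-moment
# match the quadratic-activation discriminator enforces.
Sigma_gen = A @ A.T
err = np.linalg.norm(Sigma - Sigma_gen) / np.linalg.norm(Sigma)
print(f"relative covariance mismatch: {err:.2e}")
```

Because the synthetic data here is exactly rank k, the mismatch is numerically zero; with full-rank data the generator would instead capture the top-k principal subspace.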