Optimal Discriminators for GANs with Higher-order Gradient Regularizers

TMLR Paper626 Authors

23 Nov 2022 (modified: 09 May 2023) · Rejected by TMLR
Abstract: We consider the problem of optimizing the discriminator in generative adversarial networks (GANs) subject to higher-order gradient regularization. We show analytically, via the least-squares (LSGAN) and Wasserstein (WGAN) GAN variants, that the discriminator optimization problem is one of high-dimensional interpolation. The optimal discriminator, derived using variational calculus, turns out to be the solution to a partial differential equation involving the iterated Laplacian, or polyharmonic operator. The solution is implementable in closed form via polyharmonic radial basis function (RBF) interpolation. In view of the polyharmonic connection, we refer to the corresponding GANs as Poly-LSGAN and Poly-WGAN. As a proof of concept, the analysis is supported by experimental validation on multivariate Gaussians. While the closed-form RBF does not scale favorably with the dimensionality of the data for image-space generation, we employ the Poly-WGAN discriminator to transform the latent-space distribution of the data to match a Gaussian in a Wasserstein autoencoder (WAE). The closed-form discriminator, motivated by the polyharmonic RBF, results in up to 20% improvement in terms of Fréchet and kernel inception distances over comparable baselines that employ trainable or kernel-based discriminators. The experiments are carried out on standard image datasets such as MNIST, CIFAR-10, CelebA, and LSUN-Churches. The training time of Poly-WGAN is comparable to that of kernel-based methods, while being about two orders of magnitude faster than GANs with a trainable discriminator.
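To make the polyharmonic-RBF construction mentioned in the abstract concrete, below is a minimal sketch of standard polyharmonic spline interpolation in NumPy, which is the mechanism the closed-form discriminator builds on. This is not the paper's implementation: the function names are illustrative, and the choices of the cubic kernel phi(r) = r^3 (an odd-order polyharmonic RBF) and an affine polynomial tail are assumptions made here for the example.

```python
import numpy as np

def polyharmonic_rbf_fit(X, y, k=3):
    """Fit f(x) = sum_i w_i * phi(||x - x_i||) + v^T [1; x], with phi(r) = r^k.

    Solves the standard augmented linear system of polyharmonic spline
    interpolation, with orthogonality constraints P^T w = 0 on the RBF
    weights so the affine part absorbs the polynomial trend.
    """
    n, d = X.shape
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = r ** k                            # radial kernel matrix, zero diagonal
    P = np.hstack([np.ones((n, 1)), X])   # affine (degree-1) polynomial basis
    # Block system: [[A, P], [P^T, 0]] [w; v] = [y; 0]
    M = np.vstack([np.hstack([A, P]),
                   np.hstack([P.T, np.zeros((d + 1, d + 1))])])
    rhs = np.concatenate([y, np.zeros(d + 1)])
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n:]               # RBF weights w, affine coefficients v

def polyharmonic_rbf_eval(X_train, w, v, X_query, k=3):
    """Evaluate the fitted interpolant at query points."""
    r = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=-1)
    return (r ** k) @ w + np.hstack([np.ones((len(X_query), 1)), X_query]) @ v
```

In a discriminator-like use, one would take X as a batch of real and generated samples, y as target labels (e.g. +1/-1), and evaluate the resulting interpolant at new points; by construction it reproduces the targets exactly at the fitting samples.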
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Jeremie_Mary1
Submission Number: 626