Abstract: We introduce a new method for training GANs by applying the Wasserstein-2 metric proximal operator to the generator.
The approach is based on the gradient operator induced by optimal transport, which connects the geometry of the sample space and the parameter space in implicit deep generative models. From this theory, we obtain an easy-to-implement regularizer for the parameter updates. Our experiments demonstrate that this method improves the speed and stability of GAN training, as measured by wall-clock time and Fr\'echet Inception Distance (FID) learning curves.
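The regularizer on the parameter updates can be sketched as a proximal penalty measured in sample space: the new generator parameters are discouraged from moving the generated samples far (in squared Euclidean distance) from the previous generator's outputs. The toy linear generator, the fixed noise batch, and the coefficient `lam` below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def generator(theta, z):
    # Toy linear generator g_theta(z) = z @ theta, a hypothetical
    # stand-in for a deep generator network.
    return z @ theta

def wasserstein_proximal_penalty(theta, theta_prev, z, lam=0.1):
    # Proximal regularizer: penalize the mean squared distance between
    # the current and previous generators' outputs on a common noise
    # batch, approximating a Wasserstein-2 proximal step in sample space.
    diff = generator(theta, z) - generator(theta_prev, z)
    return lam * np.mean(np.sum(diff ** 2, axis=1))

rng = np.random.default_rng(0)
z = rng.standard_normal((64, 4))          # shared noise batch
theta_prev = rng.standard_normal((4, 2))  # previous generator parameters
theta = theta_prev + 0.01 * rng.standard_normal((4, 2))  # candidate update

penalty = wasserstein_proximal_penalty(theta, theta_prev, z)
```

In training, this penalty would be added to the generator's loss at each update, so larger parameter steps are allowed only when they move the generated distribution smoothly.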
Keywords: Optimal transport, Wasserstein gradient, Generative adversarial network, Unsupervised learning
TL;DR: We propose the Wasserstein proximal method for training GANs.
Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10), [CelebA](https://paperswithcode.com/dataset/celeba)