Wasserstein-Bounded Generative Adversarial Networks

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission · Readers: Everyone
TL;DR: We propose an improved framework for WGANs and demonstrate its benefits both in theory and in practice.
Abstract: In the field of Generative Adversarial Networks (GANs), designing a stable training strategy remains an open problem. Wasserstein GANs (WGANs) have largely improved training stability over the original GANs by introducing the Wasserstein distance, but they still remain unstable and prone to a variety of failure modes. In this paper, we present a general framework named Wasserstein-Bounded GAN (WBGAN), which improves a large family of WGAN-based approaches by simply adding an upper-bound constraint to the Wasserstein term. Furthermore, we show that WBGAN can reasonably measure the difference between distributions that have almost no intersection. Experiments demonstrate that WBGAN both stabilizes and accelerates convergence in the training of a series of WGAN-based variants.
Code: https://github.com/AnonymousGFR/wbgan.pytorch
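A minimal sketch of the bounded Wasserstein term described in the abstract, written in PyTorch. The function name, the scalar upper bound `bound`, and the use of a simple clamp are assumptions for illustration, not the authors' implementation:

import torch

def bounded_wasserstein_loss(critic, real, fake, bound):
    # Standard WGAN critic estimate of the Wasserstein term: E[f(real)] - E[f(fake)].
    w_estimate = critic(real).mean() - critic(fake).mean()
    # WBGAN idea (per the abstract): constrain the Wasserstein term by an upper bound
    # before it enters the training objective; a hard clamp is assumed here.
    return torch.clamp(w_estimate, max=bound)

In a WGAN-GP-style variant, this bounded term would stand in for the unbounded Wasserstein estimate in the critic and generator losses, with the gradient penalty or other regularizers left unchanged.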
Keywords: GAN, WGAN, generative adversarial networks