Boundary-Seeking GANs

15 Feb 2018, 21:29 (edited 10 Feb 2022) · ICLR 2018 Conference Blind Submission
  • Keywords: Generative adversarial networks, generative learning, deep learning, neural networks, adversarial learning, discrete data
  • TL;DR: We address training GANs with discrete data by formulating a policy gradient that generalizes across f-divergences
  • Abstract: Generative adversarial networks (GANs) are a learning framework that relies on training a discriminator to estimate a measure of difference between the target and generated distributions. GANs, as normally formulated, rely on the generated samples being completely differentiable w.r.t. the generative parameters, and thus do not work for discrete data. We introduce a method for training GANs with discrete data that uses the estimated difference measure from the discriminator to compute importance weights for generated samples, thus providing a policy gradient for training the generator. The importance weights have a strong connection to the decision boundary of the discriminator, and we call our method boundary-seeking GANs (BGANs). We demonstrate the effectiveness of the proposed algorithm with discrete image and character-based natural language generation. In addition, the boundary-seeking objective extends to continuous data, which can be used to improve the stability of training, and we demonstrate this on CelebA, Large-scale Scene Understanding (LSUN) bedrooms, and ImageNet without conditioning.
  • Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10), [CelebA](https://paperswithcode.com/dataset/celeba), [LSUN](https://paperswithcode.com/dataset/lsun)
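The mechanism summarized in the abstract, using the discriminator's output on generated samples to form importance weights that act as a policy-gradient signal for a discrete generator, can be sketched as follows. This is a minimal illustration rather than the paper's reference implementation: the function name is hypothetical, and it assumes the discriminator outputs raw logit scores and that weights are self-normalized over the sample batch.

```python
import numpy as np

def bgan_generator_grad_weights(d_scores):
    """Self-normalized importance weights from discriminator scores.

    d_scores: array of discriminator logits D(x) for a batch of samples
    drawn from the generator. Weights are proportional to exp(D(x)),
    so samples the discriminator rates as more "real" get more weight.
    (Hypothetical helper; not the authors' reference code.)
    """
    # Subtract the max before exponentiating for numerical stability.
    w = np.exp(d_scores - d_scores.max())
    return w / w.sum()

# The generator update then resembles a policy gradient: each sampled x_m
# is treated like an action whose reward is its importance weight, i.e.
#   grad ≈ sum_m  w_m * grad log p_theta(x_m),
# which requires only log-likelihood gradients, not differentiable samples.
```

Because the weights are normalized within the batch, this is self-normalized importance sampling: only the relative discriminator scores matter, which is what ties the weighting to the discriminator's decision boundary.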