Distributional Adversarial Networks

12 Feb 2018 (modified: 07 Apr 2024) · ICLR 2018 Workshop Submission
Abstract: In most current formulations of adversarial training, the discriminators can be expressed as single-input operators, that is, the mapping they define is separable over observations. In this work, we argue that this property might help explain the infamous mode collapse phenomenon in adversarially-trained generative models. Inspired by discrepancy measures and two-sample tests between probability distributions, we propose distributional adversaries that operate on samples, i.e., on sets of multiple points drawn from a distribution, rather than on single observations. We show how they can be easily implemented on top of existing models. Various experimental results show that generators trained in combination with our distributional adversaries are much more stable and are remarkably less prone to mode collapse than traditional models trained with observation-wise prediction discriminators. In addition, the application of our framework to domain adaptation results in strong improvement over baselines.
TL;DR: We show that the mode collapse problem in GANs may be explained by a lack of information sharing between observations in a training batch, and propose a distribution-based framework for globally sharing information between gradients that leads to more stable and effective adversarial training.
Keywords: adversarial learning, generative model, domain adaptation, two-sample test
Code: [ChengtaoLi/dan](https://github.com/ChengtaoLi/dan)
Data: [CelebA](https://paperswithcode.com/dataset/celeba), [MNIST-M](https://paperswithcode.com/dataset/mnist-m)
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/arxiv:1706.09549/code)
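
The abstract contrasts observation-wise discriminators, which score each point independently, with distributional adversaries that score whole samples. As a rough illustration of that idea only (an assumed mean-pooling architecture, not the authors' model; see the ChengtaoLi/dan repository above for the actual implementation), the sketch below embeds every point in a sample, pools the embeddings, and emits a single logit for the set, so the adversarial gradient couples all observations in a batch rather than treating them separately.

```python
# Hypothetical sketch of a sample-level ("distributional") discriminator.
# This is an illustrative assumption, not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SampleDiscriminator(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int = 128):
        super().__init__()
        # Per-observation encoder phi(x).
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Classifier applied to the pooled representation of the whole sample.
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, sample: torch.Tensor) -> torch.Tensor:
        # sample: (n_points, input_dim), one set of observations drawn from
        # either the data distribution or the generator.
        pooled = self.encoder(sample).mean(dim=0)  # aggregate over the set
        return self.classifier(pooled)             # one logit per sample

# Usage: each batch is treated as a single sample, so the loss (and its
# gradient) depends jointly on all observations in that batch.
disc = SampleDiscriminator(input_dim=2)
real_batch = torch.randn(64, 2)
fake_batch = torch.randn(64, 2)
loss = (
    F.binary_cross_entropy_with_logits(disc(real_batch), torch.ones(1))
    + F.binary_cross_entropy_with_logits(disc(fake_batch), torch.zeros(1))
)
loss.backward()
```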