Difference-Seeking Generative Adversarial Network

27 Sept 2018 (modified: 05 May 2023) · ICLR 2019 Conference Blind Submission
Abstract: We propose a novel algorithm, the Difference-Seeking Generative Adversarial Network (DSGAN), developed from the traditional GAN. DSGAN addresses the scenario in which training samples of the target distribution, $p_{t}$, are difficult to collect. Suppose there exist two distributions, $p_{\bar{d}}$ and $p_{d}$, such that the density of the target distribution can be expressed as the difference between the densities of $p_{\bar{d}}$ and $p_{d}$. We show how to learn the target distribution $p_{t}$ using only samples from $p_{d}$ and $p_{\bar{d}}$, which are relatively easy to obtain. DSGAN has the flexibility to produce samples from various target distributions (e.g., out-of-distribution samples). Two key applications, semi-supervised learning and adversarial training, are taken as examples to validate the effectiveness of DSGAN. We also provide theoretical analyses of the convergence of DSGAN.
Keywords: Generative Adversarial Network, Semi-Supervised Learning, Adversarial Training
TL;DR: We propose the Difference-Seeking Generative Adversarial Network (DSGAN) to learn a target distribution for which training data are hard to collect.
Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10), [SVHN](https://paperswithcode.com/dataset/svhn)
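
The abstract does not spell out the training objective, so the following is only a minimal sketch of one way the "difference-seeking" idea can be realized: train a standard GAN in which $p_{\bar{d}}$ supplies the "real" batches while the "fake" batches mix generator outputs with samples from $p_{d}$; at equilibrium the generator is pushed toward the density difference. The toy 2-D Gaussian samplers, the networks `G` and `D`, and the mixing weight `alpha` below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the difference-seeking idea on toy 2-D data (not the authors' code).
# Assumption: p_bar_d is a 50/50 mixture of two Gaussians and p_d is one of them,
# so the density difference the generator should recover is the other Gaussian at (-2, 0).
import torch
import torch.nn as nn

torch.manual_seed(0)

def sample_p_d(n):                        # "easy" distribution p_d: Gaussian at (+2, 0)
    return torch.randn(n, 2) * 0.3 + torch.tensor([2.0, 0.0])

def sample_p_bar_d(n):                    # p_bar_d: equal mixture of Gaussians at (-2, 0) and (+2, 0)
    left = torch.randn(n, 2) * 0.3 + torch.tensor([-2.0, 0.0])
    mask = (torch.rand(n, 1) < 0.5).float()
    return mask * left + (1 - mask) * sample_p_d(n)

G = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))   # generator
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # discriminator (logits)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
alpha, batch = 0.5, 256                   # mixing weight alpha is an assumed hyper-parameter

for step in range(5000):
    # Discriminator: p_bar_d is "real"; a mixture of generated samples and p_d is "fake".
    x_real = sample_p_bar_d(batch)
    n_g = int(alpha * batch)
    x_fake = torch.cat([G(torch.randn(n_g, 2)), sample_p_d(batch - n_g)], dim=0)
    loss_d = bce(D(x_real), torch.ones(batch, 1)) + bce(D(x_fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator (non-saturating loss): make its samples look "real" to D, which pushes
    # p_g toward regions where p_bar_d exceeds (1 - alpha) * p_d, i.e. the density difference.
    x_g = G(torch.randn(batch, 2))
    loss_g = bce(D(x_g), torch.ones(batch, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(1000, 2)).mean(dim=0))   # expected to land near (-2, 0)
```

In this toy setup the generator never sees samples of the target distribution (the Gaussian at (-2, 0)); it only sees $p_{d}$ and $p_{\bar{d}}$, which mirrors the scenario described in the abstract. The exact DSGAN objective and its convergence analysis are given in the paper itself.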