Training Generative Adversarial Networks from Incomplete Observations using Factorised Discriminators

Anonymous

Sep 25, 2019 · ICLR 2020 Conference Blind Submission
  • TL;DR: We decompose the discriminator in a GAN in a principled way so that each component can be independently trained on different parts of the input. The resulting "FactorGAN" can be used for semi-supervised learning and in missing data scenarios.
  • Abstract: Generative adversarial networks (GANs) have shown great success in applications such as image generation and inpainting. However, they typically require large datasets, which are often not available, especially in the context of prediction tasks such as image segmentation that require labels. Therefore, methods such as CycleGAN use more easily available unlabelled data, but do not offer a way to leverage additional labelled data for improved performance. To address this shortcoming, we show how to factorise the joint data distribution into a set of lower-dimensional distributions along with their dependencies. This allows splitting the discriminator in a GAN into multiple "sub-discriminators" that can be independently trained from incomplete observations. Their outputs can be combined to estimate the density ratio between the joint real and the generator distribution, which enables training generators as in the original GAN framework. We apply our method to image generation, image segmentation and audio source separation, and obtain improved performance over a standard GAN when additional incomplete training examples are available. For the Cityscapes segmentation task in particular, our method also improves accuracy by an absolute 13.6% over CycleGAN while using only 25 additional paired examples.
  • Code: https://www.dropbox.com/s/gtc7m7pc4n2yt05/source.zip?dl=1
  • Keywords: Adversarial Learning, Semi-supervised Learning, Image generation, Image segmentation, Missing Data
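The abstract's key mechanism is combining sub-discriminator outputs into a joint density-ratio estimate, using the standard identity that an optimal sigmoid discriminator satisfies D(x)/(1 - D(x)) = p(x)/q(x), so ratios multiply as logits add. The sketch below illustrates one such combination rule with two marginal sub-discriminators and a pair of dependency discriminators (one for the real distribution, one for the generator distribution); the function names and the exact factorisation are our assumptions for illustration, not the paper's verbatim formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logit(d, eps=1e-7):
    # Invert the sigmoid: log(D / (1 - D)) estimates log(p/q)
    # for a well-trained binary discriminator D.
    d = np.clip(d, eps, 1.0 - eps)
    return np.log(d) - np.log(1.0 - d)

def combine_sub_discriminators(marginal_outputs, p_dep_output, q_dep_output):
    """Combine sigmoid outputs of sub-discriminators into a single
    joint discriminator output (hypothetical combination rule).

    marginal_outputs: list of outputs, one per marginal sub-discriminator,
                      each estimating p(x_i) vs q(x_i).
    p_dep_output:     output of a dependency discriminator on real data,
                      estimating p(x) vs the product of its marginals.
    q_dep_output:     the analogous dependency discriminator for the
                      generator distribution q.
    """
    # In logit space the density ratios simply add up:
    joint_logit = sum(logit(d) for d in marginal_outputs)
    joint_logit += logit(p_dep_output) - logit(q_dep_output)
    return sigmoid(joint_logit)
```

With uninformative sub-discriminators (all outputs 0.5, i.e. ratio 1 everywhere) the combined output is 0.5 as well, while confident marginal discriminators push the joint output toward 0 or 1; only the combined output is needed to train the generator as in a standard GAN.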