Rethinking conditional GAN training: An approach using geometrically structured latent manifolds

21 May 2021 (edited 23 Jan 2022) · NeurIPS 2021 Poster
  • Keywords: Generative modeling, Multimodal output spaces, Conditional generative models
  • TL;DR: A novel, generic algorithm for training conditional GANs with improved diversity, realism, and structure
  • Abstract: Conditional GANs (cGANs), in their rudimentary form, suffer from critical drawbacks such as a lack of diversity in generated outputs and distortion between the latent and output manifolds. Although efforts have been made to improve results, they can introduce unpleasant side effects such as a topology mismatch between the latent and output spaces. In contrast, we tackle this problem from a geometrical perspective and propose a novel training mechanism that increases both the diversity and the visual quality of a vanilla cGAN, by systematically encouraging a bi-Lipschitz mapping between the latent and the output manifolds. We validate the efficacy of our solution on a baseline cGAN (i.e., Pix2Pix), which lacks diversity, and show that by only modifying its training mechanism (i.e., with our proposed Pix2Pix-Geo), one can achieve more diverse and realistic outputs on a broad set of image-to-image translation tasks.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: https://github.com/samgregoost/Rethinking-CGANs
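The abstract's core idea, encouraging a bi-Lipschitz mapping between the latent and output manifolds, can be illustrated with a small sketch: a bi-Lipschitz map keeps the ratio of output-space to latent-space distances bounded within [k1, k2] for every pair of samples. The helper below, including the constants `k1` and `k2`, is a hypothetical illustration of such a pairwise penalty, not the paper's actual training loss (see the linked code for the real implementation).

```python
import numpy as np

def bilipschitz_penalty(z, y, k1=0.5, k2=2.0):
    """Hinge penalty on pairs whose distance ratio d(y_i, y_j) / d(z_i, z_j)
    falls outside the bi-Lipschitz band [k1, k2].

    z: array of latent codes, shape (n, d_latent)
    y: array of generator outputs (flattened), shape (n, d_output)
    """
    n = len(z)
    penalty, count = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            dz = np.linalg.norm(z[i] - z[j])
            dy = np.linalg.norm(y[i] - y[j])
            ratio = dy / (dz + 1e-8)  # guard against identical latents
            # Zero inside [k1, k2]; grows linearly outside the band.
            penalty += max(0.0, k1 - ratio) + max(0.0, ratio - k2)
            count += 1
    return penalty / max(count, 1)
```

A generator that collapses distinct latent codes to near-identical outputs (mode collapse) drives the ratio toward 0 and incurs a penalty from the lower bound, while a highly distorted mapping is penalized by the upper bound; the identity map incurs no penalty.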