An Improved Composite Functional Gradient Learning by Wasserstein Regularization for Generative Adversarial Networks
Abstract: Generative adversarial networks (GANs) are usually trained via a minimax game that is notoriously and empirically known to be unstable. Recently, a new methodology called Composite Functional Gradient Learning (CFG) has provided an alternative theoretical foundation for training GANs more stably, employing a logistic-regression discriminator together with functional gradient learning for the generator. However, as training proceeds, the logistic-regression discriminator in the CFG framework gradually loses its ability to distinguish real from fake images.
To address this problem, our key idea and contribution is to introduce Wasserstein distance regularization into the discriminator of the CFG framework. This yields a novel improved CFG formulation with more competitive generated-image quality. In particular, we provide an intuitive explanation of combining logistic regression with Wasserstein regularization: the regularizer strengthens the discriminator's gradients during GAN training and thereby improves image quality. Empirically, we compare our improved CFG with the original version and show that standard CFG is prone to mode collapse, whereas our improved CFG performs much better thanks to the added Wasserstein distance regularization. Extensive image-generation experiments on different benchmarks demonstrate the efficacy of our improved CFG method.
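To make the key idea concrete, below is a minimal sketch of a discriminator loss that combines the CFG-style logistic-regression objective with a Wasserstein-style regularization term. The exact formulation in the paper is not given here; the critic-score-gap form of the regularizer and the weighting hyperparameter `lam` are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(d_real: torch.Tensor, d_fake: torch.Tensor,
                       lam: float = 0.1) -> torch.Tensor:
    """Sketch of a logistic-regression discriminator loss with a
    Wasserstein-style regularizer. `lam` is a hypothetical weight;
    the paper's actual formulation may differ."""
    # Logistic-regression (binary cross-entropy) terms, as in CFG:
    # real samples are labeled 1, fake samples 0.
    logistic = F.binary_cross_entropy_with_logits(
        d_real, torch.ones_like(d_real)
    ) + F.binary_cross_entropy_with_logits(
        d_fake, torch.zeros_like(d_fake)
    )
    # Wasserstein-style term: encourage a large score gap between
    # real and fake samples (negated so minimizing widens the gap),
    # which keeps the discriminator's gradients informative.
    wasserstein = -(d_real.mean() - d_fake.mean())
    return logistic + lam * wasserstein
```

In this sketch, the Wasserstein term keeps pushing the real and fake score distributions apart even when the logistic loss saturates, which is one intuition for why the regularizer can prevent the discriminator from losing its discriminative power late in training.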
One-sentence Summary: An Improved Composite Functional Gradient Learning by Wasserstein Regularization for Generative Adversarial Networks