Keywords: gradient flows, generative models, GAN, VAE, Normalizing Flow
Abstract: Deep generative modeling has seen impressive advances in recent years, to the point where it is now commonplace to see simulated samples (e.g., images) that closely resemble real-world data. However, generation quality is generally inconsistent for any given model and can vary dramatically between samples. We introduce Discriminator Gradient $f$low (DG$f$low), a new technique that improves generated samples via the gradient flow of entropy-regularized $f$-divergences between the real and the generated data distributions. The gradient flow takes the form of a non-linear Fokker-Planck equation, which can be easily simulated by sampling from the equivalent McKean-Vlasov process. By refining inferior samples, our technique avoids the wasteful sample rejection used by previous methods such as Discriminator Rejection Sampling (DRS) and Metropolis-Hastings GAN (MH-GAN). In contrast to existing works that focus on specific GAN variants, we show that our refinement approach can be applied to GANs with vector-valued critics and even to other deep generative models such as VAEs and Normalizing Flows. Empirical results on multiple synthetic, image, and text datasets demonstrate that DG$f$low leads to significant improvements in the quality of generated samples for a variety of generative models, outperforming the state-of-the-art Discriminator Optimal Transport (DOT) and Discriminator Driven Latent Sampling (DDLS) methods.
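For intuition, when the $f$-divergence is instantiated as the KL divergence, the discretized simulation of the McKean-Vlasov process reduces to noisy gradient ascent on the discriminator's log density-ratio estimate. Below is a minimal PyTorch sketch of this special case, not the paper's full algorithm: the `discriminator` (assumed here to output an estimate of $\log p_{\mathrm{data}}(x)/p_g(x)$) and the hyperparameters `n_steps`, `eta`, and `gamma` are illustrative placeholders.

```python
import torch

@torch.enable_grad()
def refine_samples(x, discriminator, n_steps=25, eta=0.1, gamma=0.01):
    """Euler-Maruyama simulation of the refinement SDE (KL case).

    The drift is the gradient of the discriminator logit, an estimate
    of log p_data(x) / p_g(x); the diffusion term comes from the
    entropy regularization with strength gamma.
    """
    for _ in range(n_steps):
        x = x.detach().requires_grad_(True)
        logit = discriminator(x).sum()           # log density-ratio estimate
        drift = torch.autograd.grad(logit, x)[0] # push samples toward the data manifold
        noise = torch.randn_like(x)              # Brownian increment
        x = x + eta * drift + (2.0 * gamma * eta) ** 0.5 * noise
    return x.detach()
```

Other members of the $f$-divergence family change only the drift term (via $f'$ applied to the density-ratio estimate), leaving the overall simulation loop unchanged.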
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
One-sentence Summary: A method of refining samples from deep generative models using the discriminator gradient flow of f-divergences.
Supplementary Material: zip
Code: [clear-nus/DGflow](https://github.com/clear-nus/DGflow)
Data: [Billion Word Benchmark](https://paperswithcode.com/dataset/billion-word-benchmark), [CIFAR-10](https://paperswithcode.com/dataset/cifar-10), [STL-10](https://paperswithcode.com/dataset/stl-10)