Bridging Explicit and Implicit Deep Generative Models via Neural Stein Estimators

21 May 2021, 20:44 (modified: 14 Jan 2022, 10:42) · NeurIPS 2021 Poster
Keywords: deep generative models, generative adversarial networks, energy models, Stein's method
TL;DR: We propose a new joint training framework unifying explicit and implicit generative models via a neural Stein bridge
Abstract: There are two types of deep generative models: explicit and implicit. The former defines an explicit density form that allows likelihood inference, while the latter targets a flexible transformation from random noise to generated samples. While the two classes of generative models have shown great power in many applications, both of them, when used alone, suffer from respective limitations and drawbacks. To take full advantage of both models and enable mutual compensation, we propose a novel joint training framework that bridges an explicit (unnormalized) density estimator and an implicit sample generator via Stein discrepancy. We show that our method 1) induces novel mutual regularization via kernel Sobolev norm penalization and Moreau-Yosida regularization, and 2) stabilizes the training dynamics. Empirically, we demonstrate that the proposed method helps the density estimator identify data modes more accurately and guides the generator to produce higher-quality samples, compared with training either model alone. The new approach also shows promising results when the training samples are contaminated or limited.
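The bridge the abstract describes rests on the Stein discrepancy, which compares samples against an unnormalized density using only its score function (the gradient of the log-density), so no normalizing constant is needed. As a hedged illustration of that quantity (not the paper's actual training objective), the sketch below computes a standard U-statistic estimate of the squared kernelized Stein discrepancy with an RBF kernel; the function names, bandwidth `h`, and the toy Gaussian example are all illustrative assumptions.

```python
import numpy as np

def ksd_u_stat(x, score_fn, h=1.0):
    """U-statistic estimate of the squared kernelized Stein discrepancy
    between samples x (shape (n, d)) and an unnormalized density given
    only through its score function (gradient of the log-density),
    using an RBF kernel with bandwidth h."""
    n, d = x.shape
    s = score_fn(x)                                  # (n, d) score at each sample
    diff = x[:, None, :] - x[None, :, :]             # (n, n, d) pairwise x_i - x_j
    sq = np.sum(diff ** 2, axis=-1)                  # (n, n) squared distances
    k = np.exp(-sq / (2 * h ** 2))                   # RBF kernel matrix
    # Stein kernel u_p(x_i, x_j), assembled term by term:
    term1 = (s @ s.T) * k                                      # s_i^T s_j k(x_i, x_j)
    term2 = np.einsum('id,ijd->ij', s, diff) * (k / h ** 2)    # s_i^T grad_{x_j} k
    term3 = -np.einsum('jd,ijd->ij', s, diff) * (k / h ** 2)   # s_j^T grad_{x_i} k
    term4 = (d / h ** 2 - sq / h ** 4) * k                     # tr(grad_{x_i} grad_{x_j} k)
    u = term1 + term2 + term3 + term4
    np.fill_diagonal(u, 0.0)                         # U-statistic: drop i == j terms
    return u.sum() / (n * (n - 1))

# Toy check: score of a standard normal is s(x) = -x, so matched
# samples should give a near-zero discrepancy and shifted samples a
# clearly positive one.
rng = np.random.default_rng(0)
score = lambda x: -x
ksd_matched = ksd_u_stat(rng.standard_normal((200, 2)), score)
ksd_shifted = ksd_u_stat(rng.standard_normal((200, 2)) + 2.0, score)
```

In a joint framework of the kind the abstract sketches, a differentiable version of such a discrepancy could couple the generator's samples to the energy model's score, though the paper's precise objective and regularizers are given in the full text rather than here.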
Supplementary Material: pdf
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.