Bias Correction of Learned Generative Models via Likelihood-free Importance Weighting

Aditya Grover, Jiaming Song, Ashish Kapoor, Kenneth Tran, Alekh Agarwal, Eric Horvitz, Stefano Ermon

Published: 03 May 2019, Last Modified: 05 May 2023. DeepGenStruct 2019.
Abstract: A learned generative model often gives biased statistics relative to the underlying data distribution. A standard technique to correct this bias is importance weighting: samples from the model are weighted by the likelihood ratio between the true data distribution and the model distribution. When the likelihood ratio is unknown, it can be estimated by training a probabilistic classifier to distinguish samples from the two distributions. In this paper, we employ this likelihood-free importance weighting framework to correct for the bias in state-of-the-art deep generative models. We find that this technique consistently improves standard goodness-of-fit metrics for evaluating the sample quality of state-of-the-art generative models, suggesting reduced bias. Finally, we demonstrate its utility on representative applications in a) data augmentation for classification using generative adversarial networks, and b) model-based policy evaluation using off-policy data.
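To make the classifier-based density-ratio idea concrete, the following is a minimal illustrative sketch (not the authors' code) of likelihood-free importance weighting. A probabilistic classifier c(x) is trained to distinguish real data from model samples, and for a well-calibrated classifier the ratio c(x) / (1 - c(x)) estimates p_data(x) / p_model(x), which can then reweight Monte Carlo estimates computed from model samples. The toy Gaussian distributions and the logistic-regression classifier below are assumptions chosen purely for illustration.

```python
# Minimal sketch of likelihood-free importance weighting (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "True" data distribution and a biased "generative model" (toy 1-D Gaussians).
x_data = rng.normal(loc=0.0, scale=1.0, size=(5000, 1))   # samples from p_data
x_model = rng.normal(loc=0.5, scale=1.0, size=(5000, 1))  # samples from a biased p_model

# Train a probabilistic classifier: label 1 = real data, 0 = model sample.
X = np.vstack([x_data, x_model])
y = np.concatenate([np.ones(len(x_data)), np.zeros(len(x_model))])
clf = LogisticRegression().fit(X, y)

# Importance weights for fresh model samples: w(x) = c(x) / (1 - c(x)),
# an estimate of p_data(x) / p_model(x) when the classifier is well calibrated.
x_new = rng.normal(loc=0.5, scale=1.0, size=(20000, 1))
c = clf.predict_proba(x_new)[:, 1]
w = c / (1.0 - c)

# Bias-corrected estimate of E_{p_data}[f(x)] using only model samples,
# here with f(x) = x (the mean). Self-normalized weights improve stability.
f = x_new[:, 0]
print("unweighted estimate:", f.mean())                    # biased toward 0.5
print("weighted estimate:  ", np.sum(w * f) / np.sum(w))   # closer to the true mean 0.0
```

In this sketch the unweighted average of model samples inherits the model's bias, while the self-normalized weighted average recovers a statistic closer to the data distribution, which is the effect the paper exploits for sample-quality metrics, data augmentation, and model-based policy evaluation.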