Bias Correction of Learned Generative Models via Likelihood-free Importance Weighting

Aditya Grover, Jiaming Song, Ashish Kapoor, Kenneth Tran, Alekh Agarwal, Eric Horvitz, Stefano Ermon

Mar 27, 2019 · ICLR 2019 Workshop DeepGenStruct
Abstract: A learned generative model often gives biased statistics relative to the underlying data distribution. A standard technique to correct this bias is to importance weight samples from the model by the likelihood ratio between the true and model distributions. When the likelihood ratio is unknown, it can be estimated by training a probabilistic classifier to distinguish samples from the two distributions. In this paper, we employ this likelihood-free importance weighting framework to correct for the bias in state-of-the-art deep generative models. We find that this technique consistently improves standard goodness-of-fit metrics for evaluating the sample quality of state-of-the-art generative models, suggesting reduced bias. Finally, we demonstrate its utility on representative applications in a) data augmentation for classification using generative adversarial networks, and b) model-based policy evaluation using off-policy data.
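The classifier-based density-ratio idea in the abstract can be illustrated with a minimal, self-contained sketch (this is an illustrative toy example, not the authors' implementation; the Gaussian "true" and "model" distributions and all variable names here are assumptions for demonstration). A calibrated classifier trained to distinguish real data (label 1) from model samples (label 0) outputs ρ(x) ≈ p(x) / (p(x) + q(x)), so the importance weight p(x)/q(x) is recovered as ρ(x) / (1 − ρ(x)):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy setup (assumption): "true" data p(x) = N(0, 1), biased "model" q(x) = N(0.5, 1).
x_data = rng.normal(0.0, 1.0, size=(5000, 1))   # samples from p
x_model = rng.normal(0.5, 1.0, size=(5000, 1))  # samples from q

# Train a probabilistic classifier to distinguish data (1) from model (0) samples.
X = np.vstack([x_data, x_model])
y = np.concatenate([np.ones(5000), np.zeros(5000)])
clf = LogisticRegression().fit(X, y)

# Likelihood-free importance weights: w(x) = rho(x) / (1 - rho(x)) ~= p(x)/q(x).
rho = clf.predict_proba(x_model)[:, 1]
w = rho / (1.0 - rho)

# Estimate E_p[x] using only model samples.
naive = x_model.mean()                            # biased toward 0.5
corrected = (w * x_model[:, 0]).sum() / w.sum()   # importance-weighted (self-normalized)
print(naive, corrected)  # the corrected estimate should be much closer to 0
```

The self-normalized weighted average is used because the classifier-derived weights only approximate the true ratio; normalizing by their sum keeps the estimator stable even when the weights are miscalibrated by a constant factor.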