Evaluating Implicit Generative Models With Large Samples

Feb 12, 2018 · ICLR 2018 Workshop Submission
  • Abstract: We study the problem of evaluating a generative model using only a finite sample from the model. For many common evaluation functions, generalization is meaningless because trivially memorizing the training set attains a better score than the models we consider state-of-the-art. We clarify a necessary condition for an evaluation function not to behave this way: estimating the function must require a large sample from the model. In search of such a function, we turn to parametric adversarial divergences, which are defined in terms of a neural network trained to distinguish between distributions: as we make the network larger, the function becomes harder to minimize by memorizing the training set. We implement a reliable evaluation function based on these ideas, validate it experimentally, and show models that achieve better scores than memorizing the training set.
  • Keywords: generative modeling
  • TL;DR: Evaluating generalization in implicit generative models by considering millions of sample points rather than thousands
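The abstract describes a parametric adversarial divergence: a critic is trained to distinguish model samples from data samples, and its ability to separate them serves as the evaluation score. The following is a minimal illustrative sketch of that idea, not the paper's implementation: it uses a simple logistic-regression critic (rather than a large neural network) and a held-out split so that memorized training points do not help the score. The function name and all parameters are hypothetical.

```python
import numpy as np

def adversarial_divergence(p_samples, q_samples, steps=500, lr=0.1, seed=0):
    """Toy proxy for a parametric adversarial divergence.

    Trains a logistic critic to distinguish the two sample sets, then
    reports how far its held-out accuracy exceeds chance:
    ~0.0 means the critic cannot tell the distributions apart,
    ~0.5 means it separates them almost perfectly.
    """
    rng = np.random.default_rng(seed)
    X = np.vstack([p_samples, q_samples]).astype(float)
    y = np.concatenate([np.ones(len(p_samples)), np.zeros(len(q_samples))])

    # Held-out split: the critic is scored on points it never trained on.
    idx = rng.permutation(len(X))
    split = len(X) // 2
    tr, te = idx[:split], idx[split:]

    # Logistic regression trained by plain gradient descent.
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        z = X[tr] @ w + b
        p = 1.0 / (1.0 + np.exp(-z))      # critic's probability of class 1
        g = p - y[tr]                      # gradient of the logistic loss
        w -= lr * (X[tr].T @ g) / len(tr)
        b -= lr * g.mean()

    preds = (X[te] @ w + b) > 0
    acc = (preds == (y[te] == 1)).mean()
    return acc - 0.5
```

For example, two samples from the same Gaussian should score near 0, while samples from well-separated Gaussians should score near 0.5. A larger critic class (a deep network in place of the linear model) makes the divergence harder to minimize by memorization, which is the abstract's central point.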