Learning to Sample with Adversarially Learned Likelihood-Ratio

12 Feb 2018 (modified: 05 May 2023) · ICLR 2018 Workshop Submission
Abstract: We link the reverse KL divergence with adversarial learning. This insight enables learning to synthesize realistic samples in two settings: (1) Given a set of samples from the true distribution, an adversarially learned likelihood ratio and a new entropy bound are used to train a GAN model that improves synthesized-sample quality relative to previous GAN variants. (2) Given an unnormalized distribution, a reference-based framework is proposed to learn to draw samples, naturally yielding an adversarial scheme for amortizing MCMC/SVGD sampling. Experimental results demonstrate the improved performance of the derived algorithms.
TL;DR: An adversarially learned reverse-KL-divergence framework for synthesizing samples, given either a set of samples or an unnormalized distribution
Keywords: reverse KL divergence, adversarial learning, amortized learning
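Sketch: the abstract's core device is the density-ratio trick, in which a discriminator trained with the logistic loss recovers, at its optimum, the log likelihood-ratio log q(x)/p(x); the generator then minimizes a Monte Carlo estimate of the reverse KL, KL(q||p) = E_q[log q(x) - log p(x)]. Below is a minimal PyTorch illustration under assumptions not taken from the paper (toy 2-D data, small MLPs, made-up hyperparameters; sample_true is a hypothetical stand-in for true-distribution samples), not the authors' implementation. Note that the plug-in generator gradient below treats the ratio estimator as fixed and so drops the entropy term, which is precisely the gap the paper's entropy bound is meant to address.

    # Minimal likelihood-ratio GAN sketch (illustrative only, not the
    # paper's method). The discriminator's logit estimates log q(x)/p(x);
    # the generator minimizes its expectation under q, a surrogate for
    # the reverse KL divergence KL(q || p).
    import torch
    import torch.nn as nn

    def mlp(in_dim, out_dim):
        return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                             nn.Linear(64, 64), nn.ReLU(),
                             nn.Linear(64, out_dim))

    G = mlp(2, 2)   # generator: noise z -> sample x ~ q
    D = mlp(2, 1)   # discriminator: logit ~= log q(x)/p(x) at optimum
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
    bce = nn.BCEWithLogitsLoss()

    def sample_true(n):
        # Hypothetical stand-in for samples from the true distribution p.
        return torch.randn(n, 2) + torch.tensor([2.0, 0.0])

    for step in range(10_000):
        x_p = sample_true(128)
        x_q = G(torch.randn(128, 2))

        # Discriminator: label generated samples 1, true samples 0, so
        # the optimal logit equals the log likelihood-ratio log q(x)/p(x).
        d_loss = (bce(D(x_q.detach()), torch.ones(128, 1)) +
                  bce(D(x_p), torch.zeros(128, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator: minimize E_q[log q(x)/p(x)] estimated by the frozen
        # discriminator logit. This plug-in gradient omits the entropy
        # term of the reverse KL, the issue the paper's bound targets.
        x_q = G(torch.randn(128, 2))
        g_loss = D(x_q).mean()
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()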