Quantitatively Evaluating GANs With Divergences Proposed for Training
Nov 07, 2017 (modified: Nov 07, 2017) · ICLR 2018 Conference Blind Submission
Abstract: Generative adversarial networks (GANs) have made enormous progress in terms of theory and application in the machine learning and computer vision communities. Despite this progress, the lack of quantitative model assessment remains a serious issue. It has led to a huge number of GAN variants being proposed, with relatively little understanding of their relative abilities. In this paper, we evaluate the performance of various types of GANs using divergence and distance functions typically used only for training. We observe consistency across the various proposed metrics, and interestingly, the test-time metrics do not favour networks that use the same training-time criterion.
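The abstract does not enumerate the specific divergence and distance functions here, but kernel maximum mean discrepancy (MMD) is one divergence that has been used both as a GAN training objective and as a sample-based test-time metric. The following is a minimal NumPy sketch of the standard unbiased squared-MMD estimator with a Gaussian kernel, only to illustrate the kind of measurement involved; the function names, the bandwidth `sigma`, and the toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def _sq_dists(a, b):
    """Pairwise squared Euclidean distances between rows of a and b."""
    return (
        np.sum(a ** 2, axis=1)[:, None]
        + np.sum(b ** 2, axis=1)[None, :]
        - 2.0 * a @ b.T
    )

def mmd2_unbiased(x, y, sigma=1.0):
    """Unbiased estimate of squared MMD with a Gaussian kernel.

    x: (m, d) samples from one distribution (e.g., real data).
    y: (n, d) samples from the other (e.g., generator output).
    """
    m, n = len(x), len(y)
    k_xx = np.exp(-_sq_dists(x, x) / (2 * sigma ** 2))
    k_yy = np.exp(-_sq_dists(y, y) / (2 * sigma ** 2))
    k_xy = np.exp(-_sq_dists(x, y) / (2 * sigma ** 2))
    # Drop diagonal (self-similarity) terms for the unbiased
    # within-sample averages.
    sum_xx = (k_xx.sum() - np.trace(k_xx)) / (m * (m - 1))
    sum_yy = (k_yy.sum() - np.trace(k_yy)) / (n * (n - 1))
    return sum_xx + sum_yy - 2.0 * k_xy.mean()

# Hypothetical usage: score model samples against held-out real samples.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 2))
fake = rng.normal(0.5, 1.0, size=(500, 2))  # slightly shifted "generator"
print(f"MMD^2 estimate: {mmd2_unbiased(real, fake):.4f}")
```

A larger estimate indicates a larger discrepancy between the two sample sets; a value near zero is consistent with the generator matching the data distribution under this kernel.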
TL;DR: An empirical evaluation of generative adversarial networks
Keywords: Generative adversarial networks