Random Network Distillation as a Diversity Metric for Both Image and Text Generation

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Blind Submission · Readers: Everyone
Keywords: GAN, NLP, ImageNet, generative, diversity, VAE, CelebA, language model
Abstract: Generative models are increasingly able to produce remarkably high quality images and text. The community has developed numerous evaluation metrics for comparing generative models, but these metrics do not effectively quantify data diversity. We develop a new diversity metric that can readily be applied to both synthetic and natural data of any type. Our method employs random network distillation, a technique introduced in reinforcement learning. We validate and deploy this metric on both images and text. We further explore diversity in few-shot image generation, a setting that was previously difficult to evaluate.
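The abstract only names the technique, so the following is a minimal, hedged sketch of how random network distillation can be turned into a diversity score: a fixed, randomly initialised target network embeds each sample, a predictor network is trained to reproduce those embeddings on the set being evaluated, and the residual prediction error after training serves as the score (a low-diversity set is easy to distil, a diverse set leaves a larger residual). The two-layer MLPs, the feature dimension, and the use of the mean residual error as the final score are illustrative assumptions, not the paper's actual setup.

```python
# Sketch: random network distillation (RND) as a diversity score.
# Assumes samples arrive as flattened image pixels or text embeddings.
import torch
import torch.nn as nn


def make_net(in_dim: int, out_dim: int = 128) -> nn.Module:
    # Small MLP used for both the fixed target and the trained predictor.
    return nn.Sequential(
        nn.Linear(in_dim, 256), nn.ReLU(),
        nn.Linear(256, out_dim),
    )


def rnd_diversity_score(samples: torch.Tensor,
                        epochs: int = 20,
                        batch_size: int = 256) -> float:
    """samples: (N, D) tensor; returns mean residual distillation error."""
    n, d = samples.shape
    target = make_net(d)
    predictor = make_net(d)
    for p in target.parameters():          # target stays random and frozen
        p.requires_grad_(False)

    opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)
    for _ in range(epochs):
        perm = torch.randperm(n)
        for i in range(0, n, batch_size):
            x = samples[perm[i:i + batch_size]]
            loss = ((predictor(x) - target(x)) ** 2).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()

    with torch.no_grad():                  # residual error after distillation
        err = ((predictor(samples) - target(samples)) ** 2).mean(dim=1)
    return err.mean().item()               # higher -> harder to distil -> more diverse


if __name__ == "__main__":
    # A near-duplicate set should score lower than a spread-out one.
    low_div = torch.randn(1, 64).repeat(2048, 1) + 0.01 * torch.randn(2048, 64)
    high_div = torch.randn(2048, 64)
    print(rnd_diversity_score(low_div), rnd_diversity_score(high_div))
```

In this sketch the absolute value of the score is arbitrary; it is only meaningful when comparing sample sets of the same type and dimensionality, which matches the abstract's framing of the metric as a tool for comparing generative models rather than an absolute measure.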
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: We present a novel, dataset-agnostic framework for comparing the diversity of generated data.
Reviewed Version (pdf): https://openreview.net/references/pdf?id=2zYoWpbtAF