Keywords: Generative Adversarial Networks, Uncertainty Quantification, Computer Vision, Deep Generative Models, Deep Learning
TL;DR: Enabling more diverse generations through uncertainty-aware generative models
Abstract: Generative models, particularly Generative Adversarial Networks (GANs), often suffer from a lack of output diversity, frequently generating similar samples rather than a wide range of variations. This paper introduces a novel generalization of the GAN loss function based on the Dempster-Shafer theory of evidence, applied to both the generator and the discriminator. Additionally, we propose an architectural enhancement to the generator that enables it to predict a mass function for each image pixel. This modification allows the model to quantify uncertainty in its outputs and to leverage this uncertainty to produce more diverse and representative generations. Experimental evidence shows that our approach not only improves generation variability but also provides a principled framework for modeling and interpreting uncertainty in generative processes.
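As a rough illustration of the per-pixel mass-function idea described in the abstract, the sketch below shows one possible way a generator output head could emit Dempster-Shafer masses for each pixel and derive an uncertainty map from the ignorance mass. The frame of discernment (a toy {dark, bright} set plus ignorance), the head architecture, and the mapping from masses to pixel values are assumptions of this sketch, not details taken from the paper.

```python
# Minimal PyTorch sketch (illustrative only): a generator head predicting a
# per-pixel Dempster-Shafer mass function over a toy frame {dark, bright}
# plus the full set Theta (ignorance). The paper's actual frame, architecture,
# and loss are not specified in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvidentialPixelHead(nn.Module):
    """Maps generator features to per-pixel masses m({dark}), m({bright}), m(Theta)."""

    def __init__(self, in_channels: int):
        super().__init__()
        # Three logits per pixel, one per focal element of the toy frame.
        self.to_masses = nn.Conv2d(in_channels, 3, kernel_size=1)

    def forward(self, features: torch.Tensor):
        logits = self.to_masses(features)          # (B, 3, H, W)
        masses = F.softmax(logits, dim=1)          # masses sum to 1 per pixel
        m_dark, m_bright, m_theta = masses.unbind(dim=1)
        # Pignistic transform: split the ignorance mass evenly over singletons.
        p_bright = m_bright + 0.5 * m_theta
        pixel = 2.0 * p_bright - 1.0               # map to [-1, 1], as with tanh outputs
        uncertainty = m_theta                      # per-pixel ignorance map
        return pixel.unsqueeze(1), uncertainty.unsqueeze(1)


if __name__ == "__main__":
    feats = torch.randn(4, 64, 32, 32)             # stand-in generator features
    head = EvidentialPixelHead(in_channels=64)
    img, unc = head(feats)
    print(img.shape, unc.shape)                    # torch.Size([4, 1, 32, 32]) each
```

In this hypothetical setup, the uncertainty map could be used to weight the loss or to steer sampling toward regions of high ignorance, which is one plausible reading of how the abstract's "uncertainty-aware" generation might encourage diversity.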
Supplementary Material: zip
Primary Area: generative models
Submission Number: 19859