Improving Diversity in Generative Adversarial Networks by Encouraging Discriminator Representation Entropy
Nov 07, 2017 (modified: Nov 07, 2017) · ICLR 2018 Conference Blind Submission
Abstract: We propose a novel regularizer for the training of Generative Adversarial Networks (GANs) so that the generator can better capture the variations within the real data distribution, thus helping to avoid subtle mode collapse and improving the performance of GANs. The idea is to encourage the discriminator $D$ to provide more informative signals for the learning of the generator $G$ by allocating the model capacity of $D$ in a more desirable way. In particular, we measure the model capacity of $D$ by its activation patterns, and our new regularizer is constructed to encourage a high joint entropy of the activation patterns of the hidden layers of $D$. Experimental results on both synthetic data and real datasets show that our regularizer helps to improve the sample quality in the unsupervised learning setting, and also the classification accuracy in the semi-supervised learning setting.
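To make the idea concrete, here is a minimal sketch of one way such a regularizer could look. The abstract does not specify the exact construction, so everything below is an assumption: activation patterns are taken to be the (soft-binarized) signs of a hidden layer's pre-activations in $D$, and "high joint entropy" is approximated by penalizing the average squared pairwise cosine similarity between the sign vectors of different samples in a batch, so that distinct inputs activate distinct unit subsets. The function name and all details are hypothetical, not the authors' method.

```python
import numpy as np

def representation_entropy_penalty(pre_acts, eps=1e-8):
    """Hypothetical sketch of a diversity regularizer for D.

    pre_acts: (batch, units) hidden pre-activations of the discriminator.
    Each row is soft-binarized with tanh (a smooth stand-in for the sign
    pattern), normalized, and the mean squared off-diagonal cosine
    similarity across the batch is returned. Driving this toward zero
    pushes samples' activation patterns apart, a rough proxy for high
    joint entropy of the patterns.
    """
    s = np.tanh(pre_acts)                                   # soft sign in (-1, 1)
    s = s / (np.linalg.norm(s, axis=1, keepdims=True) + eps)  # unit rows
    gram = s @ s.T                                          # pairwise cosine sims
    n = gram.shape[0]
    off_diag = gram - np.diag(np.diag(gram))                # zero the diagonal
    return np.sum(off_diag ** 2) / (n * (n - 1))            # mean over pairs

# Identical patterns give the maximal penalty (~1); patterns that fire
# on disjoint unit subsets give zero penalty.
print(representation_entropy_penalty(np.ones((4, 8))))  # close to 1.0
print(representation_entropy_penalty(np.eye(4, 8)))     # 0.0
```

In training, this scalar would be added (with a weight) to the discriminator's loss; the abstract's experiments suggest the same term can be applied in both the unsupervised and semi-supervised settings, though the actual formulation in the paper may differ.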