Improving Model Compatibility of Generative Adversarial Networks by Boundary Calibration

Published: 08 Dec 2021, Last Modified: 05 May 2023 · DGMs and Applications @ NeurIPS 2021 Poster
Keywords: generative adversarial network, GAN, model compatibility, machine learning efficacy
TL;DR: We propose an auxiliary GAN loss that improves the accuracy of classifiers trained on the generated data by leveraging pre-trained classifiers.
Abstract: Generative Adversarial Networks (GANs) are a powerful family of models that learn an underlying distribution to generate synthetic data. Many existing studies of GANs focus on improving the realness of the generated image data for visual applications, and few of them concern improving the quality of the generated data for training other classifiers---a task known as the model compatibility problem. As a consequence, existing GANs often prefer generating `easier' synthetic data that are far from the boundaries of the classifiers, and refrain from generating near-boundary data, which are known to play an important role in training the classifiers. To improve GANs in terms of model compatibility, we propose Boundary-Calibration GANs (BCGANs), which leverage the boundary information from a set of classifiers pre-trained on the original data. In particular, we introduce an auxiliary Boundary-Calibration loss (BC-loss) into the generator of GAN to match the statistics between the posterior distributions of original data and generated data with respect to the boundaries of the pre-trained classifiers. The BC-loss is provably unbiased and can be easily coupled with different GAN variants to improve their model compatibility. Experimental results demonstrate that BCGANs not only generate realistic images like original GANs but also achieve better model compatibility than the original GANs.
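
The abstract does not spell out the exact form of the statistic matching, so the following is only a minimal PyTorch-style sketch of the idea: an auxiliary term added to the generator loss that matches the mean softmax posterior of a frozen pre-trained classifier between a real batch and a generated batch. The names `bc_loss`, `pretrained_clf`, and `lambda_bc` are illustrative assumptions, not the paper's actual code or notation.

```python
import torch
import torch.nn.functional as F

def bc_loss(pretrained_clf, real_batch, fake_batch):
    """Boundary-Calibration loss (sketch): match the per-class mean posterior
    of a frozen pre-trained classifier on real vs. generated data.
    Illustrative only; the paper's exact statistic matching may differ."""
    with torch.no_grad():
        real_post = F.softmax(pretrained_clf(real_batch), dim=1).mean(dim=0)
    # Gradients flow through the classifier outputs back to the generator.
    fake_post = F.softmax(pretrained_clf(fake_batch), dim=1).mean(dim=0)
    # L1 distance between the mean posteriors.
    return (real_post - fake_post).abs().sum()

# Hypothetical generator update: the BC term is added to the usual GAN loss,
# weighted by a coefficient lambda_bc.
# g_loss = gan_generator_loss(discriminator(fake_batch)) \
#          + lambda_bc * bc_loss(pretrained_clf, real_batch, fake_batch)
```

Because the auxiliary term only touches the generator objective, it can in principle be bolted onto different GAN variants without changing the discriminator, which is consistent with the abstract's claim that the BC-loss couples easily with existing GANs.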