Nested Annealed Training Scheme for Generative Adversarial Networks

Published: 01 Jan 2025 · Last Modified: 17 Apr 2025 · IEEE Trans. Circuits Syst. Video Technol. 2025 · CC BY-SA 4.0
Abstract: Recently, researchers have proposed many deep generative models, including generative adversarial networks (GANs) and denoising diffusion models. Although GANs have achieved significant breakthroughs and empirical success, their mathematical underpinnings remain relatively unexplored. This paper focuses on a rigorous mathematical framework: the composite-functional-gradient GAN (CFG). Specifically, we reveal the theoretical connection between the CFG model and score-based models. We find that the CFG discriminator's training objective is equivalent to finding an optimal $D(\mathrm{x})$, whose gradient recovers the difference between the score functions of real and synthesized samples (i.e., $D(\mathrm{x})$ integrates this score difference). Conversely, training the CFG generator amounts to finding an optimal $G(\mathrm{x})$ that minimizes this difference. In this paper, we derive an annealed weight that precedes the weight of the CFG discriminator; we call the resulting explicitly grounded model the annealed CFG method. To overcome a limitation of the annealed CFG method, namely that it is not readily applicable to state-of-the-art (SOTA) GAN models, we propose a nested annealed training scheme (NATS). This scheme retains the annealed weight from the CFG method and can be seamlessly adapted to various GAN models, regardless of their architectures, loss functions, or regularization. We conduct thorough experimental evaluations on various benchmark datasets for image generation. The results show that our annealed CFG and NATS methods significantly improve the quality and diversity of the synthesized samples, compared with both the CFG method and SOTA GAN models.
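To make the abstract's core idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual algorithm) of an annealed functional-gradient update in one dimension: samples are moved along the difference between the real and synthesized score functions, scaled by a decaying annealed weight. The Gaussian setup, the linear schedule `annealed_weight`, and all constants are illustrative assumptions.

```python
import random

def annealed_weight(t, total_steps, w_start=1.0, w_end=0.1):
    """Hypothetical annealing schedule: linearly decays the weight
    applied to the discriminator-derived gradient over training."""
    frac = t / max(total_steps - 1, 1)
    return w_start + frac * (w_end - w_start)

def score_gaussian(x, mu, sigma):
    """Score function (gradient of the log-density) of N(mu, sigma^2)."""
    return -(x - mu) / sigma**2

# Toy 1-D setup: "real" data ~ N(3, 1); "synthesized" samples start ~ N(0, 1).
random.seed(0)
mu_real, sigma = 3.0, 1.0
samples = [random.gauss(0.0, sigma) for _ in range(500)]

total_steps, eta = 100, 0.1
for t in range(total_steps):
    w = annealed_weight(t, total_steps)
    # Crude running estimate of the current synthesized distribution's mean.
    mu_fake = sum(samples) / len(samples)
    # Functional-gradient step: move each sample along the annealed
    # difference between the real and synthesized score functions.
    samples = [
        x + eta * w * (score_gaussian(x, mu_real, sigma)
                       - score_gaussian(x, mu_fake, sigma))
        for x in samples
    ]

print(sum(samples) / len(samples))  # sample mean drifts toward mu_real = 3
```

In this toy case the score difference reduces to a constant pull toward the real mean, so the sample mean converges to `mu_real`; the annealed weight simply shrinks the step size as training proceeds, mirroring the role the abstract ascribes to the annealed weight in front of the discriminator term.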