Understanding and Improving the Training of Data-Efficient GANs

TMLR Paper3301 Authors

06 Sept 2024 (modified: 21 Nov 2024) · Rejected by TMLR · CC BY 4.0
Abstract: Recently, many studies have highlighted that the training of data-efficient generative adversarial networks (DE-GANs) suffers from overfitting of the discriminator ($D$). However, how this issue theoretically influences the training of the generator ($G$) remains unclear. In this paper, we unveil a novel insight into DE-GAN training concerning the significance of useful gradients for $G$. Notably, the useful gradients of $G$ are those computed on generated samples closer to the real data distribution, which help the generator better learn that distribution. As the overfitting degree of $D$ increases, the gradients of $G$ become progressively less useful, which hinders DE-GAN training. Based on this insight, we propose a simple yet effective general training strategy for DE-GANs, namely adaptive Top-k (ATop-k), which provides more useful gradients for $G$ by selecting the fake samples used to update $G$ during training. Concretely, only the Top-$k$ highest-scoring fake samples are used to update $G$, where the value of $k$ is adaptively controlled by the overfitting degree of $D$. Extensive experiments on several datasets demonstrate that ATop-k effectively improves the training of DE-GANs and achieves better performance with different DE-GANs.
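The following is a minimal PyTorch sketch of the Top-$k$ generator update described in the abstract. The abstract does not specify how the overfitting degree of $D$ is measured or how it is mapped to $k$, so both are assumptions here: `overfitting_degree` reuses the $r_t$-style heuristic from ADA (Karras et al., 2020) as a stand-in, and the linear schedule from $r_t$ to $k$ is hypothetical.

```python
# Sketch of an adaptive Top-k (ATop-k) generator loss, under the assumptions
# stated above. Only the Top-k highest-scoring fake samples contribute to
# the generator update.

import torch

def overfitting_degree(d_real_logits: torch.Tensor) -> float:
    # Assumed proxy for discriminator overfitting: the fraction of real
    # samples D classifies as real (the r_t heuristic of ADA). Lies in
    # [0, 1]; values near 1 indicate a heavily overfit D.
    return d_real_logits.sign().mean().item() * 0.5 + 0.5

def atopk_generator_loss(d_fake_logits: torch.Tensor, r_t: float,
                         k_min_frac: float = 0.5) -> torch.Tensor:
    # Assumed linear schedule: the more D overfits (larger r_t), the fewer
    # fake samples are kept for the G update.
    batch = d_fake_logits.shape[0]
    frac = 1.0 - (1.0 - k_min_frac) * r_t
    k = max(1, int(frac * batch))
    # Keep only the Top-k fake samples by discriminator score, i.e. those
    # judged closest to the real distribution, whose gradients the paper
    # argues are the most useful for G.
    topk_logits, _ = torch.topk(d_fake_logits.squeeze(-1), k)
    # Non-saturating GAN loss computed on the selected samples only.
    return torch.nn.functional.softplus(-topk_logits).mean()
```

In a full training loop, `d_real_logits` would come from $D$ on a real minibatch and `d_fake_logits` from $D$ on $G$'s outputs; the generator then backpropagates only through the loss of the selected samples.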
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Colin_Raffel1
Submission Number: 3301