Abstract: Generative Adversarial Networks (GANs) are efficient generative models but may suffer from mode mixture and mode collapse. We present an original global characterization of GAN training by dividing it into three successive phases: fitting, refining, and collapsing. This characterization underscores a strong correlation between mode mixture and the refining phase, as well as between mode collapse and the collapsing phase. To analyze the causes and features of each phase, we propose a novel theoretical framework that integrates both continuous and discrete aspects of GANs, addressing a gap in the existing literature, which predominantly focuses on only one aspect. We develop a specialized metric to detect the phase transition from refining to collapsing and integrate it into an "early stopping" algorithm to optimize GAN training. Experiments on synthetic datasets and real-world datasets, including MNIST, Fashion-MNIST, and CIFAR-10, substantiate our theoretical insights and highlight the efficacy of our algorithm.
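The "early stopping" scheme described in the abstract can be pictured as an ordinary training loop that monitors the phase-transition metric and halts once it signals the onset of collapse. Below is a minimal, illustrative sketch under assumed interfaces; `train_step`, `steepness_metric`, the threshold `tau`, and the patience rule are hypothetical placeholders, not the paper's actual definitions.

```python
# Illustrative sketch only: the paper defines its own metric and stopping
# criterion. Here, `train_step` and `steepness_metric` are hypothetical
# callables, and `tau`/`patience` are placeholder hyperparameters.

def train_with_early_stopping(train_step, steepness_metric, tau,
                              patience=3, max_steps=10_000):
    """Run GAN updates until the monitored metric suggests the
    refining-to-collapsing phase transition, then stop."""
    history = []
    for step in range(max_steps):
        train_step()                        # one generator/discriminator update
        history.append(steepness_metric())  # phase indicator after this step
        # Treat `patience` consecutive readings above `tau` as evidence of
        # entering the collapsing phase, and stop before collapse sets in.
        if len(history) >= patience and all(m > tau for m in history[-patience:]):
            return step
    return max_steps
```

The design choice here is the simple patience window: requiring several consecutive threshold crossings guards against stopping on a single noisy reading of the metric.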
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
# Changes Since Last Submission (Highlighted in Green)
- Added a comparison between our early stopping metric and "duality gaps" in Appendix G.8, with corresponding modifications to expressions in Section 6.2.
- Added several references.
# Changes Since Second Submission (Highlighted in Blue)
* **Reorganized Section 4: The Second Phase of GAN Training — Refining**
- Clarified the definition of "refining" at the beginning to prevent potential misunderstandings.
- Provided additional explanations regarding the motivation behind steepness.
- Updated Figure 3 to illustrate more directly how steepness impacts the severity of mode mixture; relocated the original Figure 3 and the associated Table 2 to the appendices.
  - Added Theorem 4.1, which establishes a lower bound on the steepness of measure-preserving maps for a general mixture of Gaussians.
- Added Theorem 4.4, presenting quantitative results that demonstrate how the steepness of generator functions influences the severity of mode mixture.
* **Relocated the Derivation of the Particle Evolution Field**
- Moved the derivation from its original position in Section 3.1 to Section 2.2, emphasizing its role within our framework.
* **Corrected Minor Errors**
- Modified some expressions for clarity and fixed some typos.
# Changes Since First Submission (Highlighted in Red)
- Added Uniform Manifold Approximation and Projection (UMAP) plots of the MNIST dataset to Figure 1, illustrating how image distributions are *analogous* to Gaussian mixtures, with details, such as the effects of different initialization methods, provided in the appendices.
- Added analyses of how a class of *suboptimal* discriminators affects the vector field that updates particles (complementing Section 3) and the evolution of steepness (complementing Section 4).
- Added *data-dependent theoretical results* to Sections 3.1 and 3.2 (previously Sections 3.2 and 3.3). Moved the original Theorem 3.1 and its implications to the appendices, leaving only the conclusions in the main text for brevity.
- Modified Algorithm 1 and Theorem 2.1 to emphasize that the *stop gradient operator* is applied to the \(\hat{Z}_i\)'s.
- Added Table 2 to summarize the differences in Figure 3.
- Added several references.
- Fixed some typos.
Assigned Action Editor: ~Michael_U._Gutmann1
Submission Number: 2643