Keywords: Generative Adversarial Network, Zephyr loss, Adversarial Training, Flexible Gradient Control.
TL;DR: A novel loss function for generative adversarial training with flexible gradient control.
Abstract: Generative adversarial networks (GANs) are renowned for their ability to generate highly realistic and diverse data samples. However, GAN performance depends heavily on the choice of loss function, and commonly used losses such as cross-entropy and least squares are susceptible to outliers, vanishing gradients, and training instability. To overcome these limitations, we introduce the Zephyr loss, a novel convex, smooth, and Lipschitz-continuous loss function designed to enhance robustness and provide flexible gradient control. Building on this loss, we propose ZGAN, a refined GAN model that guarantees a unique optimal discriminator and stabilizes the overall training dynamics. We further show that optimizing ZGAN's generator objective minimizes a weighted total variation between the real and generated data distributions. Rigorous theoretical analysis, including convergence proofs, substantiates the robustness and effectiveness of ZGAN, positioning it as a reliable alternative for stable GAN training. Extensive experiments demonstrate that ZGAN surpasses leading methods in generative modeling.
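The abstract does not give the Zephyr loss in closed form, so the sketch below is only illustrative and is not the paper's actual method: it plugs a generic convex, smooth, Lipschitz-continuous surrogate (a pseudo-Huber function, assumed here purely as a stand-in) into least-squares-GAN-style discriminator and generator objectives, with the scale parameter delta acting as the gradient-control knob the abstract alludes to. All names (pseudo_huber, discriminator_loss, generator_loss, delta) are hypothetical.

```python
import torch

def pseudo_huber(x: torch.Tensor, delta: float = 1.0) -> torch.Tensor:
    # Convex, smooth stand-in whose per-sample gradient magnitude is bounded
    # by `delta`, so `delta` serves as a simple gradient-control parameter.
    return delta ** 2 * (torch.sqrt(1.0 + (x / delta) ** 2) - 1.0)

def discriminator_loss(d_real: torch.Tensor, d_fake: torch.Tensor,
                       delta: float = 1.0) -> torch.Tensor:
    # Pull D(real) toward +1 and D(fake) toward -1; the bounded gradient of
    # the surrogate limits the influence of outlier samples.
    return (pseudo_huber(d_real - 1.0, delta).mean()
            + pseudo_huber(d_fake + 1.0, delta).mean())

def generator_loss(d_fake: torch.Tensor, delta: float = 1.0) -> torch.Tensor:
    # Push D(fake) toward the "real" target of +1.
    return pseudo_huber(d_fake - 1.0, delta).mean()

if __name__ == "__main__":
    # Toy usage with random discriminator outputs in place of a real model.
    d_real = torch.randn(8, requires_grad=True)
    d_fake = torch.randn(8, requires_grad=True)
    loss_d = discriminator_loss(d_real, d_fake)
    loss_d.backward()
    # Each per-sample gradient of the surrogate is bounded by delta.
    print(loss_d.item(), d_real.grad.abs().max().item())
```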
Supplementary Material: zip
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 13743