Keywords: Bilevel Optimization, Sharpness-Aware Minimization, Generative Adversarial Learning, Accuracy-Efficiency Trade-off
Abstract: Adversarial learning is a widely used paradigm in machine learning, often formulated as a min-max optimization problem in which an inner maximization imposes adversarial constraints that guide the outer learner toward more robust solutions. This framework underlies methods such as Sharpness-Aware Minimization (SAM) and Generative Adversarial Networks (GANs). However, traditional gradient-based approaches to such problems often struggle to balance accuracy against efficiency because of the cost of second-order information. In this paper, we propose a bilevel optimization framework that reformulates these adversarial learning problems by exploiting the tractability of the lower-level problem. The bilevel reformulation introduces no additional complexity and enables the use of advanced bilevel tools. We further develop a provably convergent single-loop stochastic algorithm that effectively balances learning accuracy and computational cost.
Extensive experiments show that our method improves GAN generation quality as measured by FID and Jensen-Shannon (JS) divergence scores, and consistently achieves higher accuracy for SAM under label noise and across various backbones, while promoting flatter loss landscapes.
Overall, this work provides a practical and theoretically grounded framework for solving adversarial learning tasks through bilevel optimization.
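For readers unfamiliar with the min-max structure referenced above, the standard SAM update (not the paper's bilevel algorithm) approximates the inner maximization by a single normalized ascent step of radius rho, then descends on the gradient at the perturbed point. A minimal NumPy sketch of this baseline, with an assumed toy quadratic loss for illustration:

```python
import numpy as np

def sam_step(w, loss_grad, rho=0.05, lr=0.1):
    """One standard Sharpness-Aware Minimization (SAM) step on parameters w.

    loss_grad: callable returning the gradient of the loss at a given w.
    The inner maximization max_{||eps|| <= rho} L(w + eps) is approximated
    by one normalized gradient-ascent step, as in the original SAM method.
    """
    g = loss_grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # approximate inner maximizer
    g_adv = loss_grad(w + eps)                   # gradient at the perturbed point
    return w - lr * g_adv                        # outer descent update

# Toy example (assumed for illustration): L(w) = 0.5 * ||w||^2, so grad = w.
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, lambda w: w)
```

The bilevel view advocated in the abstract replaces this single inner ascent step with an explicit lower-level problem, which is what admits a single-loop stochastic treatment.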
Primary Area: Optimization (e.g., convex and non-convex, stochastic, robust)
Submission Number: 16215