Single-loop Algorithms for Stochastic Non-Convex Optimization with Weakly-Convex Constraints

TMLR Paper6253 Authors

19 Oct 2025 (modified: 03 Dec 2025) · Under review for TMLR · CC BY 4.0
Abstract: Constrained optimization with multiple functional inequality constraints has significant applications in machine learning. This paper examines an important class of such problems in which both the objective and the constraint functions are weakly convex. Existing methods often suffer from slow convergence rates or rely on double-loop algorithmic designs. To overcome these challenges, we introduce a novel single-loop penalty-based stochastic algorithm. Following the classical exact penalty method, our approach employs a hinge-based penalty, which permits the use of a constant penalty parameter and enables us to achieve state-of-the-art complexity for finding an approximate Karush-Kuhn-Tucker (KKT) solution. We further extend our algorithm to finite-sum coupled compositional objectives, which are prevalent in artificial intelligence applications, establishing improved complexity bounds over existing approaches. Finally, we validate our method through experiments on fair learning with receiver operating characteristic (ROC) fairness constraints and continual learning with non-forgetting constraints.
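For orientation, a hinge-based exact penalty of the kind the abstract describes typically replaces the constrained problem with a single unconstrained one. Below is a minimal sketch, assuming constraints of the form $g_i(x) \le 0$ and a fixed penalty parameter $\rho$; the notation ($f$, $g_i$, $\rho$, $m$) is ours and need not match the paper's.

```latex
% Constrained problem: objective f and constraints g_1, ..., g_m, all weakly convex.
%   min_x  f(x)   subject to   g_i(x) <= 0,   i = 1, ..., m.
% Hinge-based exact penalty with a constant parameter rho > 0 (notation assumed):
\[
  \min_{x} \; F_\rho(x) \;=\; f(x) \;+\; \rho \sum_{i=1}^{m} \max\{0,\, g_i(x)\}.
\]
% In classical exact penalty theory, once rho exceeds a threshold depending on a
% regularity (e.g., dual bound) constant, stationary points of F_rho correspond
% to (approximate) KKT points of the original problem. A constant rho is what
% makes a single-loop stochastic method possible, avoiding the outer loop that
% would otherwise increase the penalty parameter across stages.
```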
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Jiawei_Zhang6
Submission Number: 6253