Beyond Convexity: Proximal-Perturbed Lagrangian Methods for Efficient Functional Constrained Optimization

TMLR Paper 6302 Authors

24 Oct 2025 (modified: 03 Nov 2025) · Under review for TMLR · CC BY 4.0
Abstract: Non-convex functional constrained optimization problems have gained substantial attention in machine learning and data science, as they capture broad requirements that go beyond purely performance-centric objectives. An influential class of algorithms for functional constrained problems is the class of primal-dual methods, which have been extensively analyzed for convex problems. Nonetheless, their efficacy for non-convex problems remains under-explored. This paper develops a primal-dual algorithmic framework for solving such non-convex problems. The framework is built upon a novel form of the Lagrangian function, termed the {\em Proximal-Perturbed Augmented Lagrangian}, which enables the development of simple first-order algorithms that converge to a stationary solution under mild conditions. Notably, we study this framework under both non-smooth and smooth constraint functions and provide three key contributions: (i) a single-loop algorithm that does not require driving the penalty parameter to infinity; (ii) a non-asymptotic iteration complexity of $\widetilde{\mathcal{O}}(1/\epsilon^2)$; and (iii) extensive experimental results demonstrating the effectiveness of the proposed framework in terms of computational cost and performance, outperforming related approaches based on regularization (penalization) and/or standard Lagrangian relaxation across diverse non-convex problems.
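
For intuition, a minimal illustrative sketch of such a construction (the notation $g$, $z$, $\beta$, $\rho$ and the exact form below are assumptions for illustration, not the paper's formulation): for $\min_x f(x)$ subject to $g(x) = 0$, introduce a perturbation variable $z$ so that the constraint becomes $g(x) = z$, attach a multiplier $\lambda$ and a fixed penalty $\beta > 0$ to the perturbed residual, and regularize $z$ proximally:
$$
\mathcal{L}_{\beta,\rho}(x, z, \lambda) \;=\; f(x) \;+\; \lambda^{\top}\bigl(g(x) - z\bigr) \;+\; \frac{\beta}{2}\,\bigl\|g(x) - z\bigr\|^{2} \;+\; \frac{\rho}{2}\,\|z\|^{2}.
$$
Because the augmented term acts on the perturbed residual $g(x) - z$ rather than on $g(x)$ directly, simple single-loop first-order updates in $(x, z, \lambda)$ can be run with $\beta$ held fixed, which is the kind of property highlighted in contribution (i).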
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Jiawei_Zhang6
Submission Number: 6302