Federated Learning with Convex Global and Local Constraints

TMLR Paper 2272 Authors

20 Feb 2024 (modified: 01 May 2024) · Decision pending for TMLR
Abstract: In practice, many machine learning (ML) problems come with constraints, and their applied domains involve distributed sensitive data that cannot be shared with others, e.g., in healthcare. Collaborative learning in such practical scenarios entails federated learning (FL) for ML problems with constraints, or FL with constraints for short. Despite the extensive development of FL techniques in recent years, these techniques only deal with unconstrained FL problems or FL problems with simple constraints that are amenable to easy projections. There is little work dealing with FL problems with general constraints. To fill this gap, we take the first step toward building an algorithmic framework for solving FL problems with general constraints. In particular, we propose a new FL algorithm for constrained ML problems based on the proximal augmented Lagrangian (AL) method. Assuming a convex objective and convex constraints, plus other mild conditions, we establish the worst-case complexity of the proposed algorithm. Our numerical experiments show the effectiveness of our algorithm in performing Neyman-Pearson classification and fairness-aware learning with nonconvex constraints, in an FL setting.
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/PL97/Constr_FL
Assigned Action Editor: ~Alain_Durmus1
Submission Number: 2272
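The abstract describes an algorithm built on the proximal augmented Lagrangian (AL) method. Below is a minimal, centralized (non-federated) sketch of a proximal AL loop for a generic convex inequality-constrained problem min_x f(x) subject to c(x) <= 0, intended only to illustrate the template mentioned in the abstract. The placeholder objective `f`, constraint `c`, penalty `beta`, proximal weight `rho`, and the use of a generic local solver are assumptions for illustration, not the paper's federated algorithm or the implementation in the linked repository.

```python
# Minimal centralized sketch of a proximal augmented Lagrangian (AL) loop for
#   min_x f(x)  subject to  c(x) <= 0  (convex f and c).
# Illustrative only: f, c, beta (penalty), and rho (proximal weight) are
# placeholders, not the paper's federated algorithm.
import numpy as np
from scipy.optimize import minimize

def f(x):  # placeholder convex objective
    return 0.5 * np.sum(x ** 2) + x[0]

def c(x):  # placeholder convex inequality constraint, c(x) <= 0
    return np.array([1.0 - np.sum(x)])

def proximal_al(x0, beta=10.0, rho=1.0, iters=50):
    x = np.asarray(x0, dtype=float)
    lam = np.zeros_like(c(x))  # multiplier estimate for c(x) <= 0
    for _ in range(iters):
        xk = x.copy()
        # Proximal AL subproblem: augmented Lagrangian term for the
        # inequality constraint plus a proximal term anchored at xk
        # (the constant -||lam||^2 / (2*beta) is dropped since it does
        # not affect the minimizer).
        def subproblem(z):
            shifted = np.maximum(lam / beta + c(z), 0.0)
            return (f(z)
                    + 0.5 * beta * np.sum(shifted ** 2)
                    + 0.5 * rho * np.sum((z - xk) ** 2))
        x = minimize(subproblem, x).x              # solved inexactly in practice
        lam = np.maximum(lam + beta * c(x), 0.0)   # multiplier update
    return x, lam

x_star, lam_star = proximal_al(np.zeros(2))
print("solution:", x_star, "multiplier:", lam_star)
```

In the federated setting described in the abstract, each AL subproblem would be solved collaboratively across clients holding local data rather than by a single call to a local solver as in this sketch.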