A Fast Federated Method for Minimax Problems with Sequential Convergence Guarantees

26 Sept 2024 (modified: 11 Dec 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: federated learning, minimax optimization
Abstract: Federated learning (FL) has recently been actively studied to collaboratively train machine learning models across clients without directly sharing data and to address data-hungry issues. Many FL works have been focusing on minimizing a loss function but many important machine learning tasks such as adversarial training, GANs, fairness learning, and AUROC maximization are formulated as minimax problems. In this paper, we propose a new federated learning method for minimax problems. Our method allows client drift and addresses the data heterogeneity issue. In theoretical analysis, we prove that our method can improve sample complexity and has convergence guarantees for the updates of the model parameters, i.e., the sequences generated by the method. Given the Kurdyka-Łojasiewicz (KL) exponent of a novel potential function related to the objective function, we demonstrate that the sequences generated by our method converge finitely, linearly, or sublinearly. Our assumptions on the KL property are weaker than previous work on the sequential convergence of centralized minimax methods. Additionally, we further weaken the KL assumption by deducing the KL exponent of the potential function from that of the original objective function. We validate our federated learning method on AUC maximization tasks. The experimental results demonstrate that our method outperforms state-of-the-art federated learning methods when the distributions of local training data are non-IID.
Supplementary Material: zip
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8120
