On the Convergence of Federated Deep AUC Maximization

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Abstract: In many real-world applications, the data distribution is skewed. Standard models, which are designed to optimize accuracy, predict poorly on imbalanced data because the model can be dramatically biased toward the majority class. The area under the ROC curve (AUROC) was therefore proposed as a useful metric for assessing how well prediction models perform on imbalanced data sets. Meanwhile, federated learning (FL) has attracted increasing attention with the emergence of distributed data, owing to its communication efficiency. Addressing the challenge of distributed imbalanced data calls for research on Federated Deep AUC Maximization (FDAM). The FDAM problem, however, remains understudied and is more complex than standard FL because its objective is non-decomposable over individual examples. In this study, we tackle FDAM with heterogeneous data by reformulating it as the popular non-convex strongly-concave min-max problem and propose the federated stochastic recursive momentum gradient descent ascent (FMGDA) algorithm, which also applies to general federated non-convex strongly-concave minimax problems. Importantly, our method does not rely on strict assumptions such as the PL condition, and we prove that it achieves an $O(\epsilon^{-3})$ sample complexity, matching the best-known sample complexity of centralized methods. It also achieves an $O(\epsilon^{-2})$ communication complexity and a linear speedup in the number of clients. Additionally, extensive experimental results show that our algorithm (i.e., FMGDA) empirically outperforms competing algorithms, supporting its effectiveness.
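For context, the min-max reformulation the abstract refers to is presumably the standard square-loss AUC surrogate from the AUC maximization literature (Ying et al., 2016), which the deep AUC maximization line of work builds on; with score function $h_{\mathbf{w}}$ and positive-class rate $p = \Pr(y=1)$, it reads

```latex
\min_{\mathbf{w}, a, b} \; \max_{\alpha \in \mathbb{R}} \;
\mathbb{E}_{(\mathbf{x}, y)} \Big[
  (1-p)\,(h_{\mathbf{w}}(\mathbf{x}) - a)^2\, \mathbb{I}[y=1]
  + p\,(h_{\mathbf{w}}(\mathbf{x}) - b)^2\, \mathbb{I}[y=-1]
  + 2(1+\alpha)\big(p\, h_{\mathbf{w}}(\mathbf{x})\, \mathbb{I}[y=-1]
  - (1-p)\, h_{\mathbf{w}}(\mathbf{x})\, \mathbb{I}[y=1]\big)
  - p(1-p)\,\alpha^2
\Big]
```

which is strongly concave in $\alpha$ (quadratic with coefficient $-p(1-p)$) and non-convex in $\mathbf{w}$ when $h_{\mathbf{w}}$ is a deep network. Below is a minimal, hypothetical sketch (not the paper's code) of what a federated stochastic recursive-momentum gradient descent ascent loop can look like, assuming a STORM-style estimator as the algorithm's name suggests; the `Client.sample_batch` and `Client.grads` interfaces are invented for this illustration.

```python
import numpy as np

def fmgda_sketch(clients, x0, y0, rounds=100, local_steps=10,
                 eta_x=1e-2, eta_y=1e-2, beta=0.1, seed=0):
    """Sketch for min_x max_y (1/n) * sum_i f_i(x, y) across n clients."""
    rng = np.random.default_rng(seed)
    x, y = x0.copy(), y0.copy()
    for _ in range(rounds):
        xs, ys = [], []
        for client in clients:
            cx, cy = x.copy(), y.copy()
            # Initialize the momentum estimators with plain stochastic grads.
            dx, dy = client.grads(cx, cy, client.sample_batch(rng))
            for _ in range(local_steps):
                px, py = cx, cy
                cx = cx - eta_x * dx      # descent step on the primal x
                cy = cy + eta_y * dy      # ascent step on the dual y
                batch = client.sample_batch(rng)
                gx_new, gy_new = client.grads(cx, cy, batch)
                gx_old, gy_old = client.grads(px, py, batch)  # same batch
                # Recursive momentum (STORM-style) variance reduction.
                dx = gx_new + (1 - beta) * (dx - gx_old)
                dy = gy_new + (1 - beta) * (dy - gy_old)
            xs.append(cx)
            ys.append(cy)
        # One communication round: the server averages local iterates.
        x, y = np.mean(xs, axis=0), np.mean(ys, axis=0)
    return x, y
```

The key point the sketch illustrates is that the variance-reduced estimator reuses the same minibatch at the current and previous iterates, and that communication happens only once per round of local steps, which is what drives the $O(\epsilon^{-2})$ communication complexity claimed in the abstract.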
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Optimization (e.g., convex and non-convex optimization)
