Abstract: Federated AUC maximization is a powerful approach for learning from imbalanced data in federated learning (FL). However, existing methods typically assume full client availability, which is rarely practical. In real-world FL systems, clients often participate in a cyclic manner—joining training according to a fixed, repeating schedule. This setting poses unique optimization challenges for the non-decomposable AUC objective.
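The cyclic participation pattern described above can be sketched minimally: clients are partitioned into groups that join training in a fixed, repeating order. This is an illustrative toy model of the setting, not the paper's algorithm; the function name and group count are hypothetical.

```python
# Hypothetical sketch of cyclic client participation: clients are split into
# K groups, and group (t mod K) is the one active in communication round t.
def active_group(round_idx: int, num_groups: int) -> int:
    """Return the index of the client group participating in this round."""
    return round_idx % num_groups

# With K = 3 groups, rounds 0..5 are served by groups 0, 1, 2, 0, 1, 2.
schedule = [active_group(t, 3) for t in range(6)]
assert schedule == [0, 1, 2, 0, 1, 2]
```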
This paper addresses these challenges by developing and analyzing communication-efficient algorithms for federated AUC maximization under cyclic client participation. We investigate two key settings:
First, we study AUC maximization with a squared surrogate loss, which reformulates the problem as a nonconvex–strongly-concave minimax optimization. By leveraging the Polyak–Łojasiewicz (PL) condition, we establish state-of-the-art iteration and communication complexities of $\widetilde{O}(1/\epsilon)$.
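To illustrate the minimax reformulation (not the paper's federated algorithm), the well-known identity for the squared AUC surrogate states that the pairwise loss equals, up to the constant $p(1-p)$ and a shift, a saddle-point objective in auxiliary variables $(a, b, \alpha)$ whose inner problems have closed-form solutions. The sketch below verifies this numerically for linear scores; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n) * 2 - 1   # labels in {+1, -1}
w = rng.normal(size=d)

s = X @ w
sp, sn = s[y == 1], s[y == -1]           # scores of positives / negatives
p = (y == 1).mean()                      # positive-class proportion

# Pairwise squared surrogate: mean over all (positive, negative) pairs
L = np.mean((1.0 - (sp[:, None] - sn[None, :])) ** 2)

# Auxiliary variables at their closed-form optima:
a, b = sp.mean(), sn.mean()              # a = E[s | y=+1], b = E[s | y=-1]
alpha = b - a                            # maximizer of the concave inner problem

# Saddle-point objective evaluated at (w, a, b, alpha)
f = p * (1 - p) * (
    ((sp - a) ** 2).mean() + ((sn - b) ** 2).mean()
    + 2.0 * (1.0 + alpha) * (b - a) - alpha ** 2
)

# Identity: saddle value = p(1-p) * (L(w) - 1), so both share the same minimizer in w
assert np.isclose(f, p * (1 - p) * (L - 1.0))
```

Because the saddle value differs from the pairwise loss only by a positive scale and a constant shift, minimizing either objective over the model parameters $w$ is equivalent, which is what makes the minimax view usable in place of the non-decomposable pairwise formulation.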
Second, we consider general pairwise AUC losses, for which we establish an iteration complexity of $O(1/\epsilon^4)$ and a communication complexity of $O(1/\epsilon^3)$. Under the PL condition, these bounds improve to $\widetilde{O}(1/\epsilon)$ iterations and $\widetilde{O}(1/\epsilon^{1/2})$ communication rounds.
Extensive experiments on benchmark tasks in image classification, medical imaging, and fraud detection demonstrate the superior efficiency and effectiveness of our proposed methods.
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Grigorios_Chrysos1
Submission Number: 6315