Group Fair Federated Learning via Stochastic Kernel Regularization

Published: 23 May 2025, Last Modified: 23 May 2025. Accepted by TMLR. License: CC BY 4.0
Abstract: Ensuring \textbf{group fairness} in federated learning (FL) presents unique challenges due to data heterogeneity and communication constraints. We propose Kernel Fair Federated Learning (\texttt{KFFL}), a novel framework that incorporates group fairness into FL models using the Kernel Hilbert-Schmidt Independence Criterion (KHSIC) as a fairness regularizer. To address scalability, \texttt{KFFL} approximates KHSIC with Random Feature Maps (RFMs), significantly reducing computational and communication overhead while achieving \textit{group fairness}. To solve the resulting non-convex optimization problem, we propose \texttt{FedProxGrad}, a federated proximal gradient algorithm with convergence guarantees. Through experiments on standard benchmark datasets across both IID and Non-IID settings for regression and classification tasks, \texttt{KFFL} demonstrates its ability to balance accuracy and fairness effectively, outperforming existing methods by comprehensively exploring the Pareto frontier. Furthermore, we introduce \texttt{KFFL-TD}, a time-delayed variant that further reduces communication rounds, enhancing efficiency in decentralized environments.
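The core idea in the abstract, approximating a kernel independence criterion between model outputs and a sensitive attribute with random feature maps, can be illustrated with a short sketch. This is not the paper's implementation: it assumes an RBF kernel with Rahimi-Recht random Fourier features, and the names `rff_features` and `hsic_rff` are illustrative. With feature matrices $\Phi, \Psi$ in place of the exact kernel matrices, the empirical HSIC reduces to a squared Frobenius norm of a $D \times D$ cross-covariance, avoiding the $n \times n$ Gram matrices.

```python
import numpy as np

def rff_features(x, n_features=100, gamma=1.0, seed=0):
    """Random Fourier features approximating an RBF kernel
    k(x, x') = exp(-gamma * ||x - x'||^2)  (Rahimi & Recht)."""
    rng = np.random.default_rng(seed)
    x = np.atleast_2d(x).reshape(len(x), -1)
    w = rng.normal(scale=np.sqrt(2 * gamma), size=(x.shape[1], n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(x @ w + b)

def hsic_rff(preds, sensitive, n_features=100):
    """Approximate empirical HSIC between predictions and a sensitive
    attribute via random features: HSIC ~ ||Phi_c^T Psi_c||_F^2 / n^2,
    where Phi_c, Psi_c are column-centered feature matrices.
    Using D features replaces the O(n^2) Gram matrices with O(n D) maps."""
    n = len(preds)
    phi = rff_features(preds, n_features, seed=0)
    psi = rff_features(sensitive, n_features, seed=1)
    phi_c = phi - phi.mean(axis=0)   # center each random feature
    psi_c = psi - psi.mean(axis=0)
    return np.linalg.norm(phi_c.T @ psi_c, ord="fro") ** 2 / n ** 2
```

In a fairness-regularized objective of the kind described above, a term like `lam * hsic_rff(model_outputs, sensitive_attr)` would be added to the empirical loss; the estimate is near zero when outputs and the sensitive attribute are independent and grows with statistical dependence.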
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission:

Dear Action Editor,

We are pleased to submit the camera-ready version and summarize below the most significant changes made in response to the reviewer comments and your decision notice.

---

### 1. Multi-group sensitive attribute experiments (Comment #1)

To address the request for experiments involving more than two sensitive groups, we have added a new set of experiments on the Adult dataset in which we simultaneously consider both *gender* and *race* as sensitive attributes, as well as experiments that use the full set of six races as the sensitive attributes. These results are included in **Section 6.5** and demonstrate the applicability of our proposed method in multi-attribute fairness settings. We observe that KFFL continues to yield useful fairness-accuracy trade-offs under this extended setup.

---

### 2. Wider accuracy-fairness sweep (Comment #2)

We have extended the sweep of the fairness hyperparameter to include larger values of λ. For the Adult dataset, as λ increases the KS distance decreases monotonically, while the RMSE remains flat for λ ≤ 1 and then rises, as expected (see **Table 4** in **Section 6.4**). Similar trends are observed in **Table 5** for the Law School dataset. This directly addresses a reviewer's request for a more holistic view of the Pareto frontier.

---

### 3. Added discussion of the practical implications of using KHSIC as a fairness regularizer

In response to our discussion with the reviewers, we have added guidance on how to select the fairness regularizer in practice, in **Section 1**, through the use of binary search. Similarly, we have addressed the question of why a practitioner might choose our method despite its additional communication overhead in **Section 7**.

---

### 4. Expanded discussion of FedProxGrad

The comparison of FedProxGrad to prior algorithms for federated composite optimization, and to the FedProx algorithm on which its analysis is based, has been expanded in **Section 3**. In particular, we have clarified the specific version of the bounded-dissimilarity condition employed in our work in response to a reviewer inquiry.

---

We believe these additions clarify the positioning of our results and further strengthen the empirical contributions of the paper. Thank you again for your guidance throughout the review process, and thanks to the reviewers for their helpful suggestions and comments. We look forward to your final decision.
Code: https://github.com/Huzaifa-Arif/KFFL
Assigned Action Editor: ~Han_Zhao1
Submission Number: 3833