FinP: Fairness-in-Privacy in Federated Learning by Addressing Disparities in Privacy Risk

ICLR 2026 Conference Submission 15300 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Fairness, Federated Learning, Privacy
TL;DR: This paper addresses privacy-risk disparities in federated learning by balancing source inference attack risk across all clients. It protects vulnerable clients without harming the model's accuracy, ensuring no client is unfairly exposed.
Abstract: Ensuring fairness in machine learning extends to the critical dimension of privacy, particularly in human-centric federated learning (FL) settings where decentralized data necessitates an equitable distribution of privacy risk across clients. This paper introduces FinP, a novel framework designed to address disparities in privacy risk by mitigating disproportionate vulnerability to source inference attacks (SIA). FinP employs a two-pronged strategy: (1) server-side adaptive aggregation, which dynamically adjusts client contributions to the global model to foster fairness, and (2) client-side regularization, which enhances the privacy robustness of individual clients. This approach tackles both the symptoms and the underlying causes of privacy unfairness in FL. Extensive evaluations on the Human Activity Recognition (HAR) and CIFAR-10 datasets demonstrate FinP's effectiveness: it improves fairness-in-privacy on both datasets with minimal impact on utility, and on CIFAR-10 it improves group fairness with respect to privacy-risk disparity, measured by equal opportunity, by 57.14% over the state-of-the-art. Furthermore, FinP significantly mitigates SIA risk on CIFAR-10, underscoring its potential to establish fairness in privacy within FL systems without compromising utility.
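The abstract does not specify either mechanism in detail; the sketch below illustrates one plausible reading of the server-side adaptive aggregation prong, assuming per-client privacy-risk scores in [0, 1] (e.g., estimated SIA success rates) and FedAvg-style averaging of flattened model updates. The function name, the softmax weighting rule, and the temperature parameter are hypothetical illustrations, not the paper's actual method.

```python
import numpy as np

def adaptive_aggregate(client_updates, risk_scores, temperature=1.0):
    """Hypothetical server-side adaptive aggregation (a sketch, not FinP's exact rule).

    Down-weights clients whose estimated source-inference risk is high, so that
    high-risk clients' updates contribute less identifiable signal to the global model.

    client_updates: list of 1-D numpy arrays (flattened model deltas), one per client.
    risk_scores:    per-client privacy-risk estimates in [0, 1].
    temperature:    controls how sharply weights concentrate on low-risk clients.
    """
    risks = np.asarray(risk_scores, dtype=float)
    # Softmax over negative risk: higher risk -> smaller aggregation weight.
    logits = -risks / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    stacked = np.stack(client_updates)  # shape: (num_clients, num_params)
    return weights @ stacked            # risk-weighted average of updates

# Example: three clients, the third with elevated SIA risk and thus reduced weight.
updates = [np.ones(4), 2 * np.ones(4), 10 * np.ones(4)]
print(adaptive_aggregate(updates, risk_scores=[0.1, 0.2, 0.9]))
```

The client-side regularization prong, per the abstract, would complement this by hardening each client's local training against SIA; its loss term is not described here, so no sketch is attempted.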
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 15300