Enforcing fairness in private federated learning via the modified method of differential multipliers

Published: 04 Nov 2021, Last Modified: 15 May 2023, PRIML 2021 Poster
Keywords: Private federated learning, fairness
TL;DR: In private federated learning there is no direct access to users' data, which makes it hard to train fair models; this paper enforces group fairness in that setting via the modified method of differential multipliers.
Abstract: Federated learning with differential privacy, or private federated learning, provides a strategy to train machine learning models while respecting users’ privacy. However, differential privacy can disproportionately degrade the performance of the models on under-represented groups, as these parts of the distribution are difficult to learn in the presence of noise. Existing approaches for enforcing fairness in machine learning models have considered the centralized setting, in which the algorithm has access to the users’ data. This paper introduces an algorithm to enforce group fairness in private federated learning, where users’ data does not leave their devices. First, the paper extends the modified method of differential multipliers to empirical risk minimization with fairness constraints, thus providing an algorithm to enforce fairness in the centralized setting. Then, this algorithm is extended to the private federated learning setting. The proposed algorithm, FPFL, is tested on a federated version of the Adult dataset and an “unfair” version of the FEMNIST dataset. The experiments on these datasets show how private federated learning accentuates unfairness in the trained models, and how FPFL is able to mitigate such unfairness.
Paper Under Submission: The paper is NOT under submission at NeurIPS
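
The following is a minimal, illustrative sketch of the kind of training step the abstract describes: empirical risk minimization with group-fairness constraints optimized by the modified method of differential multipliers (gradient descent on the model parameters, gradient ascent on the multipliers, plus a quadratic damping term). The logistic-regression model, the constraint form (each group's loss within a slack `alpha` of the overall loss), and all names and hyperparameters (`mmdm_step`, `eta`, `damping`) are assumptions made for illustration; they are not taken from the paper, and the exact FPFL formulation, including its private federated aggregation, is defined in the paper itself.

```python
# Illustrative sketch only: generic MMDM updates for fairness-constrained ERM
# on a toy logistic-regression model. Constraint form and hyperparameters are
# assumptions, not the paper's FPFL algorithm.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def group_losses(theta, X, y, groups):
    """Overall and per-group mean logistic losses."""
    p = sigmoid(X @ theta)
    losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    overall = losses.mean()
    per_group = np.array([losses[groups == k].mean() for k in np.unique(groups)])
    return overall, per_group

def mmdm_step(theta, lambdas, X, y, groups, alpha=0.02, eta=0.1, damping=5.0):
    """One modified-method-of-differential-multipliers step:
    descent on theta for L + sum_k lambda_k * g_k + (damping/2) * sum_k g_k^2,
    ascent on the multipliers, with g_k = max(0, L_k - L - alpha)."""
    eps = 1e-5
    overall, per_group = group_losses(theta, X, y, groups)
    g = np.maximum(0.0, per_group - overall - alpha)   # constraint violations

    # Finite-difference gradient of the augmented objective keeps the sketch
    # self-contained; an autodiff framework would be used in practice.
    def augmented(th):
        ov, pg = group_losses(th, X, y, groups)
        gg = np.maximum(0.0, pg - ov - alpha)
        return ov + lambdas @ gg + 0.5 * damping * (gg ** 2).sum()

    grad = np.zeros_like(theta)
    base = augmented(theta)
    for i in range(theta.size):
        th = theta.copy()
        th[i] += eps
        grad[i] = (augmented(th) - base) / eps

    theta = theta - eta * grad          # descent on the model parameters
    lambdas = lambdas + eta * g         # ascent on the multipliers
    return theta, lambdas

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(float)
    groups = rng.integers(0, 2, size=200)   # hypothetical protected attribute
    theta, lambdas = np.zeros(3), np.zeros(2)
    for _ in range(200):
        theta, lambdas = mmdm_step(theta, lambdas, X, y, groups)
```

The quadratic damping term is what distinguishes the modified method of differential multipliers from the plain method of differential multipliers, stabilizing convergence toward the constraint set. In the federated variant the abstract mentions, such per-group statistics would not be computed centrally; conceptually, devices would contribute them under differential privacy, with the precise aggregation specified in the paper.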