Fairness-Aware Client Selection and Payment Determination for Differentially Private Federated Learning

Published: 2025 · Last Modified: 21 Jan 2026 · IEEE Trans. Inf. Forensics Secur. 2025 · CC BY-SA 4.0
Abstract: Federated Learning (FL) mitigates data leakage by sharing only local machine learning models instead of raw data. However, it remains vulnerable to differential attacks. Differential Privacy (DP) addresses this concern by introducing noise to make it challenging for adversaries to reconstruct training samples. Nonetheless, clients often have varying attitudes toward data privacy, quantified by their privacy budgets. A low privacy budget indicates a stringent privacy requirement, so such clients demand high compensation to incentivize their participation. Focusing solely on privacy budgets, however, can introduce selection bias, potentially compromising model generalization. It is therefore essential to emphasize the fairness of client participation, ensuring that clients with lower privacy budgets also have opportunities to contribute to the training process. To tackle the above challenges, this paper formulates a novel DP-based incentive problem in FL, aiming to optimize the utilities of both the server and the clients. Specifically, we propose an auction mechanism that jointly selects participants based on their heterogeneous privacy budgets and determines appropriate payments. The proposed auction mechanism is proven to achieve several desirable properties, including computational efficiency, individual rationality, budget balance, truthfulness, and guaranteed optimization performance. Finally, simulation results validate the effectiveness of the proposed mechanism.
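The abstract does not specify the paper's mechanism, but the named properties (truthfulness, individual rationality, budget balance, computational efficiency) are the hallmarks of budget-feasible reverse auctions. As a minimal illustrative sketch only, and not the authors' mechanism, the following implements a classic proportional-share rule: clients bid the compensation they demand (e.g., higher for tighter privacy budgets), the server greedily admits the cheapest bids while each stays within its proportional share of the budget, and every winner receives the same critical threshold price, which makes underbidding or overbidding unprofitable. All names and the payment rule here are assumptions for illustration.

```python
def select_and_pay(bids, budget):
    """Illustrative budget-feasible reverse auction (proportional-share rule).

    bids   : dict client_id -> claimed cost (compensation demanded; a client
             with a tighter privacy budget would typically bid higher).
    budget : total payment budget available to the FL server.

    Returns (winners, payment_per_winner). Each winner is paid the same
    critical threshold price, so total payout never exceeds the budget
    (budget balance) and no winner is paid below their bid (individual
    rationality).
    """
    order = sorted(bids, key=lambda c: bids[c])  # cheapest bids first
    # Admit the largest k such that the k-th smallest bid <= budget / k.
    k = 0
    while k < len(order) and bids[order[k]] <= budget / (k + 1):
        k += 1
    if k == 0:
        return [], 0.0
    # Critical payment: proportional share, capped by the first losing bid
    # (if any) -- this threshold-based price is what yields truthfulness.
    threshold = budget / k
    if k < len(order):
        threshold = min(threshold, bids[order[k]])
    return order[:k], threshold
```

For example, with bids `{'a': 1, 'b': 2, 'c': 5}` and a budget of 6, clients `a` and `b` win and each is paid 3, exhausting the budget exactly; client `c`'s bid of 5 exceeds its proportional share of 2 and is rejected. A fairness-aware variant, as the paper motivates, would additionally ensure that high-cost (low-privacy-budget) clients are not permanently excluded across rounds.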