Game Analysis and Incentive Mechanism Design for Differentially Private Cross-Silo Federated Learning
Abstract: Cross-silo federated learning (FL) is a distributed learning method where clients collaboratively train a global model without exchanging local data. However, recent works reveal that potential privacy leakage occurs when clients upload their local updates. Although some works have studied privacy-preserving mechanisms in FL, the selfish privacy-preserving behaviors of clients are yet to be explored. In this paper, we formulate clients' privacy-preserving behaviors in cross-silo FL as a multi-stage privacy preservation game, where each stage game corresponds to one training iteration. Specifically, clients selfishly perturb their local updates in each training iteration to trade off between convergence performance and privacy loss. To analyze the game, we first derive a novel theoretical bound that characterizes the impact of clients' local perturbations on the convergence of FL, by analyzing the corrective effect of gradient descent in model training. With this convergence bound, we prove that the multi-stage privacy preservation game admits a unique subgame perfect Nash equilibrium (SPNE). We show that at the SPNE, the magnitude of each client's local perturbation decreases geometrically with training iterations. We further show that, in some cases, the social efficiency of the SPNE decreases with the number of clients. To tackle this problem, we propose a socially efficient incentive mechanism that guarantees individual rationality, budget balance, and social efficiency. We further propose a truthful mechanism that achieves approximate social efficiency. Simulation results show that our proposed mechanisms can decrease clients' total cost by up to 58.08% compared with that at the SPNE.
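The geometrically decreasing perturbation at the SPNE can be sketched as follows. This is an illustrative toy, not the paper's mechanism: the Gaussian noise model, the initial scale `sigma0`, and the decay factor `decay` are assumptions for the sake of the example.

```python
import numpy as np

def perturbed_update(gradient, sigma0, decay, t, rng):
    """Return a local update perturbed with Gaussian noise whose scale
    decays geometrically over training iterations, mirroring the SPNE
    behavior described in the abstract (illustrative only)."""
    sigma_t = sigma0 * decay ** t      # geometric decrease with iteration t
    noise = rng.normal(0.0, sigma_t, size=gradient.shape)
    return gradient + noise

rng = np.random.default_rng(0)
grad = np.ones(4)                      # a toy local gradient
noisy_early = perturbed_update(grad, sigma0=1.0, decay=0.5, t=0, rng=rng)
noisy_late = perturbed_update(grad, sigma0=1.0, decay=0.5, t=10, rng=rng)
```

Early iterations are perturbed heavily (large privacy protection, worse convergence), while later iterations add almost no noise, consistent with the trade-off the game captures.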