Randomized DP-DFL

Weihao Zhu, Long Shi, Kang Wei, Yipeng Zhou, Zhe Wang, Zehui Xiong, Jun Li

Published: 11 Jul 2025, Last Modified: 04 Nov 2025. IEEE Transactions on Mobile Computing. CC BY-SA 4.0.
Abstract: Traditional federated learning (FL) frameworks rely on a central server to coordinate models among distributed mobile terminals (MTs). This centralization faces two critical challenges: a single point of failure and potential privacy leakage. Differentially private decentralized FL (DP-DFL) has been proposed to address these challenges, wherein MTs exchange models in a decentralized manner and maintain a differential privacy (DP) guarantee by adding noise to their local models before model interaction. However, existing DP-DFL frameworks struggle to achieve the expected privacy and convergence performance simultaneously. To address this issue, we propose a novel DP-DFL framework, called randomized DP-DFL, that employs a randomized model interaction scheme to lower the model exposure frequency and hence reduce privacy budget consumption. Specifically, the scheme comprises two sequential steps: randomized terminal assignment and randomized model transmission. In Step 1, the model interaction phase of DFL is divided into several sequential sub-stages, and MTs are randomly assigned to the sub-stages. In Step 2, each MT transmits either a model previously received from its neighbors or its own local model, following the assigned sub-stage order. The proposed scheme enhances the privacy of DFL because these two randomized steps significantly reduce the exposure probabilities of the MTs' local models. In addition, we theoretically analyze the convergence and privacy performance of randomized DP-DFL; in particular, properly tuning the number of sub-stages achieves an optimal balance between privacy and convergence. Experimental results show that randomized DP-DFL consistently outperforms traditional frameworks: compared with baselines, it reduces privacy loss by 40.9% under the same target accuracy on EMNIST and improves learning accuracy by 9.5% under the same privacy loss on CIFAR-10.
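To make the two-step scheme concrete, the following is a minimal Python sketch of one interaction round. It is not the authors' implementation: the shuffle-then-round-robin split into sub-stages, the 0.5 forwarding probability, the Gaussian noise scale sigma, and the function names (dp_noise, randomized_interaction) are illustrative assumptions, since the abstract does not specify these details.

```python
import random
import numpy as np

def dp_noise(model, sigma):
    # Gaussian mechanism: perturb parameters before any transmission
    # (the noise scale sigma is an assumed, unspecified hyperparameter).
    return model + np.random.normal(0.0, sigma, size=model.shape)

def randomized_interaction(local_models, neighbors, num_substages, sigma, rng=random):
    """One round of a randomized model interaction scheme (illustrative sketch).

    local_models: dict MT id -> np.ndarray of local model parameters
    neighbors:    dict MT id -> list of neighbor MT ids (decentralized topology)
    """
    mts = list(local_models)
    rng.shuffle(mts)  # Step 1: randomized terminal assignment to sub-stages
    substages = [mts[s::num_substages] for s in range(num_substages)]

    received = {mt: [] for mt in mts}  # models each MT has received so far
    for stage in substages:            # sub-stages proceed sequentially
        for mt in stage:
            # Step 2: randomized model transmission. Forward a previously
            # received model, or fall back to the (noised) local model; the
            # local model is exposed only when nothing has been received yet
            # or the coin flip selects it (0.5 is an assumed probability).
            if received[mt] and rng.random() < 0.5:
                outgoing = rng.choice(received[mt])
            else:
                outgoing = dp_noise(local_models[mt], sigma)
            for nb in neighbors[mt]:
                received[nb].append(outgoing)
    return received

# Toy usage: 4 MTs on a ring topology, 2-dimensional models, 2 sub-stages.
models = {i: np.ones(2) * i for i in range(4)}
ring = {i: [(i - 1) % 4, (i + 1) % 4] for i in range(4)}
inbox = randomized_interaction(models, ring, num_substages=2, sigma=0.1)
```

In this sketch, an MT's own noised parameters leave the device only when the coin flip selects them or when it has received nothing yet, which illustrates how the two randomized steps lower the local model's exposure frequency and hence the per-round privacy budget consumption.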