Privacy-Preserving Personalized Decentralized Learning With Fast Convergence

Published: 2024 · Last Modified: 05 Jan 2026 · IEEE Trans. Consumer Electron. 2024 · CC BY-SA 4.0
Abstract: Personalized decentralized learning aims to train an individual personalized model for each client to adapt to Non-IID data distributions and heterogeneous environments. However, the distributed nature of decentralized learning is insufficient to protect client training data from the danger of gradient leakage. In this paper, we investigate a privacy-preserving personalized decentralized learning optimization mechanism as an alternative to traditional SGD. We design the P2DL mechanism to optimize our proposed objective function, adjusting the regularization-term parameter to achieve a resilient local-global trade-off. Instead of exchanging gradients or models, auxiliary variables carrying knowledge are transferred among clients to avoid model inversion and reconstruction attacks. We also provide theoretical convergence guarantees for both synchronous and asynchronous settings. In particular, in the synchronous setting, the convergence rate $\mathcal{O}\left(\frac{1}{k}\right)$ matches the optimal result in decentralized learning, where $k$ is the number of communication rounds. Extensive experiments verify the effectiveness of the newly proposed P2DL compared with the state of the art.
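To make the local-global trade-off concrete, the sketch below shows one way a regularized personalized objective with an auxiliary variable could be structured. It is only an illustrative assumption: the least-squares loss, the mixing weights, and all names (`client_step`, `lam`, `mix_auxiliary`) are hypothetical and do not reproduce the paper's actual P2DL update rules or its privacy-preserving construction of the auxiliary variables.

```python
# Hypothetical sketch, NOT the paper's P2DL mechanism: each client i keeps a personalized
# model w_i and an auxiliary variable z_i; the regularization weight lam controls how
# strongly the personalized model is pulled toward the shared auxiliary variable
# (the local-global trade-off mentioned in the abstract).
import numpy as np

def local_objective_grad(w, X, y, z, lam):
    """Gradient of f_i(w) + (lam / 2) * ||w - z||^2, with f_i a least-squares loss here."""
    grad_f = X.T @ (X @ w - y) / len(y)   # gradient of the local data-fitting loss
    return grad_f + lam * (w - z)         # pull toward the auxiliary variable

def client_step(w, z, X, y, lam, lr):
    """One local update of the personalized model under the regularized objective."""
    return w - lr * local_objective_grad(w, X, y, z, lam)

def mix_auxiliary(Z, mixing_matrix):
    """Synchronous round: clients exchange only auxiliary variables with neighbors and
    average them with the communication graph's mixing weights (illustrative only)."""
    return mixing_matrix @ Z

# Toy run with 3 clients on synthetic Non-IID data
rng = np.random.default_rng(0)
n_clients, dim, lam, lr = 3, 5, 0.5, 0.1
data = [(rng.normal(size=(20, dim)), rng.normal(size=20)) for _ in range(n_clients)]
W = np.zeros((n_clients, dim))   # personalized models, kept local
Z = np.zeros((n_clients, dim))   # auxiliary variables, the only quantities exchanged
mix = np.full((n_clients, n_clients), 1.0 / n_clients)  # fully connected mixing weights

for _ in range(50):
    for i, (X, y) in enumerate(data):
        W[i] = client_step(W[i], Z[i], X, y, lam, lr)
    Z = mix_auxiliary(Z + lr * (W - Z), mix)  # placeholder auxiliary update and exchange
```

A larger `lam` drives the personalized models toward consensus, while a smaller `lam` lets each client fit its own Non-IID data more closely; how P2DL actually builds the exchanged auxiliary variables so that they resist model inversion and reconstruction attacks is what the paper specifies.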