Keywords: Distributed learning, privacy protection, decentralized stochastic gradient
TL;DR: This article presents a privacy-preserving approach for decentralized learning, designed to safeguard both network estimates and local data.
Abstract: In decentralized learning systems, significant effort has been devoted to protecting the privacy of each agent's local data or gradients. However, the shared model parameters themselves can also reveal sensitive information about the targets that the network is estimating. While differential-privacy-based decentralized learning can protect network estimates, an excessively large privacy-noise variance significantly reduces the accuracy of those estimates. To address this, we propose a dual-protection framework for decentralized learning. Within this framework, we develop two privacy-preserving algorithms, named DSG-RMS and EDSG-RMS. Unlike existing differentially private distributed learning methods, the designed algorithms simultaneously obscure the network's estimates and the local gradients by adding a protective perturbation vector at each update and by employing random matrix step-sizes. We then establish convergence guarantees for both algorithms under convex objectives. In particular, our error bound and privacy analysis highlight how the variance of the random matrix step-sizes affects both algorithmic performance and the privacy of local gradients. Despite using large-variance random step-sizes for stronger gradient privacy, the network's estimation accuracy in our algorithms can still be improved by choosing a sufficiently small algorithmic parameter $\gamma$. Finally, we validate the practical effectiveness of the proposed algorithms through extensive experiments across diverse applications, including distributed filtering, distributed learning, and target localization.
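To make the mechanism described in the abstract concrete, the following is a minimal Python sketch of one such update round: each agent averages its neighbors' estimates via a mixing matrix, scales its local gradient by a random diagonal matrix step-size, and adds a protective perturbation vector. The function name `dsg_rms_step`, the Gaussian choices for the step-size randomness and the perturbation, and all parameter names are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def dsg_rms_step(x, W, grads, gamma, sigma_step, sigma_perturb, rng):
    """One hypothetical decentralized update with random matrix step-sizes.

    x:      (n, d) array of current local estimates, one row per agent
    W:      (n, n) doubly stochastic mixing matrix over the network graph
    grads:  (n, d) array of local stochastic gradients
    gamma:  mean step-size (the small algorithmic parameter in the abstract)
    sigma_step:    std. dev. of the random matrix step-size entries (assumed Gaussian)
    sigma_perturb: std. dev. of the protective perturbation vector (assumed Gaussian)
    """
    n, d = x.shape
    x_new = np.empty_like(x)
    for i in range(n):
        # Consensus step: weighted average of neighbors' shared estimates.
        consensus = W[i] @ x
        # Random diagonal matrix step-size with mean gamma; its variance
        # controls how strongly the local gradient is obscured.
        Lambda_i = np.diag(gamma + sigma_step * rng.standard_normal(d))
        # Protective perturbation vector masking the estimate before sharing.
        e_i = sigma_perturb * rng.standard_normal(d)
        x_new[i] = consensus - Lambda_i @ grads[i] + e_i
    return x_new

# Toy usage on a fully connected 4-agent network.
rng = np.random.default_rng(0)
n, d = 4, 3
W = np.full((n, n), 1.0 / n)        # uniform mixing weights
x = rng.standard_normal((n, d))      # initial local estimates
grads = rng.standard_normal((n, d))  # stand-in local gradients
x = dsg_rms_step(x, W, grads, gamma=0.05,
                 sigma_step=0.1, sigma_perturb=0.01, rng=rng)
```

The sketch reflects the trade-off the abstract highlights: increasing `sigma_step` strengthens gradient privacy at the cost of estimation accuracy, which can be recovered by shrinking `gamma`.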
Supplementary Material: pdf
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 17421