Keywords: Distributed learning, privacy protection, decentralized stochastic gradient
TL;DR: This article presents a privacy-preserving approach for decentralized learning, designed to safeguard both network estimates and local data.
Abstract: In collaborative learning systems, significant effort has been devoted to protecting the privacy of each agent’s local data and gradients. However, the shared model parameters themselves can also reveal sensitive information about the targets the network is estimating. To address both risks, we propose a dual-protection framework for decentralized learning and, within it, develop two privacy-preserving algorithms, DSG-RMS and EDSG-RMS. Unlike existing privacy-preserving distributed learning methods, these algorithms simultaneously obscure both the network’s estimates and the local gradients, by adding a protective perturbation vector at each update and by using randomized matrix-step-sizes. We then establish convergence guarantees under convex objectives and derive error bounds that explicitly account for the influence of network topology. In particular, our analysis highlights how the spectral gap of the mixing matrix and the variance of the randomized matrix-step-sizes affect algorithm performance. Finally, we validate the practical effectiveness of the proposed algorithms through extensive experiments across diverse applications, including distributed filtering, distributed learning, and target localization.
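To make the mechanism described above concrete, the following is a minimal sketch of a decentralized gradient step that combines consensus mixing with a randomized diagonal step-size and a decaying perturbation on the shared estimates. Every concrete choice here (the ring topology, Metropolis weights, the uniform step-size randomization, the geometric perturbation decay) is our own illustrative assumption, not the authors' DSG-RMS/EDSG-RMS specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumptions, not the paper's):
# 4 agents on a ring, each holding a private least-squares objective.
n_agents, dim = 4, 3
A = [rng.standard_normal((5, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(5) for _ in range(n_agents)]

# Doubly stochastic mixing matrix for the ring (Metropolis-style weights);
# its spectral gap governs how fast disagreement between agents shrinks.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i + 1) % n_agents] = 0.25
    W[i, (i - 1) % n_agents] = 0.25

x = np.zeros((n_agents, dim))          # each row is one agent's estimate
base_step = 0.05
for t in range(300):
    grads = np.array([A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n_agents)])
    # Randomized diagonal "matrix-step-size": correct mean, bounded variance,
    # so a neighbor cannot back out the exact gradient from successive estimates.
    D = base_step * (1.0 + 0.1 * rng.uniform(-1.0, 1.0, size=(n_agents, dim)))
    # Decaying perturbation: masks the shared estimates in early rounds but
    # vanishes over time so it does not destroy convergence.
    p = (0.95 ** t) * rng.standard_normal((n_agents, dim))
    x = W @ x - D * grads + p          # mix with neighbors, then noisy step

# For reference: the minimizer of the aggregate least-squares problem.
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print("max deviation from x*:", np.max(np.abs(x - x_star)))
```

The spectral-gap dependence the abstract mentions is visible here: a sparser mixing matrix `W` slows the consensus term `W @ x`, while a larger variance in `D` or slower-decaying `p` trades accuracy for stronger masking.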
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 17421