Privacy via Scheduling and Connectivity Design in Decentralized Federated Learning

Published: 24 Sept 2025 · Last Modified: 18 Nov 2025 · AI4NextG @ NeurIPS 2025 Poster · License: CC BY 4.0
Keywords: decentralized federated learning; privacy; network topology design; scheduling
Abstract: In gradient-based distributed learning, inference attacks typically struggle to reconstruct private data from gradients that aggregate a large number of samples, often tens or hundreds of times more than the model's output size. We observe that such aggregation can be achieved in decentralized federated learning (FL) by designing network topologies with a limited number of connections and scheduling clients into sequentially activated subsets. Motivated by this observation, we propose a novel network topology and connectivity design that optimizes the trade-off between training performance and privacy protection. We analytically demonstrate the convergence of the proposed decentralized learning scheme and quantify the privacy leakage via entropy from the adversary's perspective. Furthermore, we present example topologies that effectively balance eliminating privacy leakage and ensuring training convergence. Finally, we validate the performance of our cyclic topology against traditional FL.
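The abstract describes the mechanism only at a high level. As a rough illustration, the following Python snippet shows one plausible form of decentralized FL over a ring (cyclic) topology with clients scheduled into sequentially activated subsets; the helpers `ring_mixing_matrix` and `local_step`, the toy per-client objective, and the two-subset schedule are all assumptions for illustration, not the paper's actual algorithm.

```python
# Minimal sketch (assumed, not the paper's implementation): decentralized
# averaging over a ring (cyclic) topology, with clients activated in
# sequential subsets so each exchanged model already mixes several
# clients' gradient contributions.
import numpy as np

def ring_mixing_matrix(n: int) -> np.ndarray:
    """Doubly stochastic mixing matrix for a ring: each client averages
    its own model with its two ring neighbors' models."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 1 / 3
        W[i, (i - 1) % n] = 1 / 3
        W[i, (i + 1) % n] = 1 / 3
    return W

def local_step(x: np.ndarray, grad_fn, lr: float = 0.1) -> np.ndarray:
    """One local gradient step; grad_fn stands in for a client's
    gradient on its private data."""
    return x - lr * grad_fn(x)

n_clients, dim = 8, 4
rng = np.random.default_rng(0)
models = rng.normal(size=(n_clients, dim))   # one model vector per client
W = ring_mixing_matrix(n_clients)

# Schedule: activate clients in sequential subsets rather than all at once.
subsets = [range(0, 4), range(4, 8)]
for rnd in range(10):
    for subset in subsets:                   # sequentially activated subsets
        for i in subset:
            # Toy quadratic objective 0.5 * ||x - i||^2, so grad = x - i.
            grad_fn = lambda x, i=i: x - i
            models[i] = local_step(models[i], grad_fn)
        # Simplification: every client gossips with its ring neighbors
        # after each subset finishes its local steps.
        models = W @ models

print("consensus disagreement:", np.linalg.norm(models - models.mean(axis=0)))
```

Under repeated mixing with a doubly stochastic ring matrix, the client models contract toward their average, which is the intuition behind the convergence claim; the privacy angle is that any single exchanged model is already a mixture over a subset of clients, so an eavesdropper sees only aggregated updates.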
Submission Number: 82