Accelerating decentralized federated learning via manipulating edges

Published: 23 Jan 2024 · Last Modified: 23 May 2024 · TheWebConf 2024
Keywords: decentralized federated learning, edge rewiring, complex network
TL;DR: Rewiring edges to accelerate decentralized federated learning
Abstract: Federated learning enables collaborative AI training across organizations without compromising data privacy. Decentralized federated learning (DFL) improves on this by offering enhanced reliability and security through peer-to-peer (P2P) model sharing. However, DFL suffers from slow convergence caused by complex P2P graph topologies. To address this issue, we propose an efficient algorithm that accelerates DFL by adding a small number $k$ of edges to the P2P graph. Specifically, we establish a connection between the convergence rate and the second-smallest eigenvalue of the Laplacian matrix of the P2P graph. We prove that finding the optimal set of edges to maximize this eigenvalue is an NP-complete problem. Our quantitative analysis shows that strategic edge additions increase this eigenvalue. Based on this analysis, we propose an efficient algorithm that computes the best set of candidate edges to maximize the second-smallest eigenvalue, thereby maximizing the convergence rate. Our algorithm has a low time complexity of $O(krn^2)$. Experimental results on diverse datasets validate the effectiveness of our proposed algorithms in accelerating DFL convergence.
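To illustrate the objective the abstract describes, the sketch below computes the second-smallest Laplacian eigenvalue (algebraic connectivity) of a P2P graph and greedily adds edges that increase it. This is a hypothetical brute-force illustration of the optimization target, not the paper's algorithm: the exhaustive per-edge re-evaluation here costs far more than the paper's $O(krn^2)$ method, which presumably relies on a cheaper eigenvalue-update heuristic.

```python
import numpy as np

def algebraic_connectivity(A):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))[1]

def greedy_add_edges(A, k):
    """Greedily add k non-existing edges, each maximizing lambda_2.

    Brute-force sketch for illustration only: it re-evaluates every
    candidate edge exactly, unlike the paper's faster algorithm.
    """
    A = A.astype(float).copy()
    n = A.shape[0]
    added = []
    for _ in range(k):
        best_edge, best_val = None, -np.inf
        for i in range(n):
            for j in range(i + 1, n):
                if A[i, j] == 0:
                    # Tentatively add edge (i, j) and score lambda_2.
                    A[i, j] = A[j, i] = 1.0
                    val = algebraic_connectivity(A)
                    A[i, j] = A[j, i] = 0.0
                    if val > best_val:
                        best_val, best_edge = val, (i, j)
        if best_edge is None:  # graph already complete
            break
        i, j = best_edge
        A[i, j] = A[j, i] = 1.0
        added.append(best_edge)
    return A, added

# Example: a path graph on 5 nodes (a poorly connected P2P topology).
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0

before = algebraic_connectivity(A)
A_new, edges = greedy_add_edges(A, k=2)
after = algebraic_connectivity(A_new)
```

On this toy path graph, the two added edges strictly raise the algebraic connectivity, which is the quantity the paper links to the DFL convergence rate.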
Track: Systems and Infrastructure for Web, Mobile, and WoT
Submission Guidelines Scope: Yes
Submission Guidelines Blind: Yes
Submission Guidelines Format: Yes
Submission Guidelines Limit: Yes
Submission Guidelines Authorship: Yes
Student Author: No
Submission Number: 1193