Decentralized Personalized Federated Learning Based on a Conditional “Sparse-to-Sparser” Scheme

Qianyu Long, Qiyuan Wang, Christos Anagnostopoulos, Daning Bi

Published: 01 Jan 2025 · Last Modified: 26 Jan 2026 · IEEE Transactions on Neural Networks and Learning Systems · License: CC BY-SA 4.0
Abstract: Decentralized federated learning (DFL) has gained popularity due to its robustness and elimination of centralized coordination requirements. In this paradigm, clients actively participate in training by exchanging models with neighboring nodes in their network. However, DFL introduces significant overhead in both training and communication costs. While existing methods focus primarily on reducing communication costs, they often overlook training efficiency and the challenges of data heterogeneity. We address these limitations by introducing DA-DPFL, a novel sparse-to-sparser training scheme that initializes training with a sparse subset of model parameters and progressively sparsifies it further during training through dynamic aggregation. This approach substantially reduces energy consumption while preserving adequate information during critical learning periods. Our experimental results demonstrate that DA-DPFL significantly outperforms DFL baselines in test accuracy while achieving up to a 5x reduction in energy costs. We provide theoretical convergence analysis that validates the applicability of our approach in decentralized and personalized learning contexts. The code is available at: https://github.com/EricLoong/da-dpfl
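The abstract describes starting each client from an already-sparse model and pruning it further as training proceeds. The sketch below illustrates one way such a "sparse-to-sparser" schedule could be realized in PyTorch; the function names (`make_masks`, `further_prune`, `apply_masks`), the magnitude-based pruning criterion, and the sparsity values are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch (assumed, not the authors' implementation) of a
# sparse-to-sparser schedule: train from an initial sparse mask, then
# prune the surviving weights further at scheduled rounds.
import torch
import torch.nn as nn


def make_masks(model: nn.Module, sparsity: float) -> dict:
    """Create binary masks that keep the largest-magnitude weights per layer."""
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() < 2:  # skip biases / norm parameters
            continue
        k = int(p.numel() * (1.0 - sparsity))  # number of weights to keep
        threshold = p.abs().flatten().kthvalue(p.numel() - k + 1).values
        masks[name] = (p.abs() >= threshold).float()
    return masks


def further_prune(model: nn.Module, masks: dict, new_sparsity: float) -> dict:
    """Shrink existing masks so overall sparsity increases ("sparse-to-sparser")."""
    params = dict(model.named_parameters())
    new_masks = {}
    for name, mask in masks.items():
        p = params[name]
        alive = p * mask  # only currently active weights compete
        k = int(p.numel() * (1.0 - new_sparsity))
        threshold = alive.abs().flatten().kthvalue(p.numel() - k + 1).values
        new_masks[name] = ((alive.abs() >= threshold) & (mask > 0)).float()
    return new_masks


def apply_masks(model: nn.Module, masks: dict) -> None:
    """Zero out pruned weights in place."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])


# Example usage with illustrative sparsity levels:
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
masks = make_masks(model, sparsity=0.5)            # start sparse
# ... local training and neighbor aggregation would happen here ...
masks = further_prune(model, masks, new_sparsity=0.8)  # become sparser later
apply_masks(model, masks)
```

In a decentralized setting, each client would hold its own mask and apply it after aggregating neighbors' models, so communication and computation shrink as the schedule advances; the exact pruning criterion and schedule used by DA-DPFL are specified in the paper and repository.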