FedLRS: A Communication-Efficient Federated Learning Framework With Low-Rank and Sparse Decomposition

Published: 01 Jan 2023, Last Modified: 05 Mar 2025 · ICPADS 2023 · CC BY-SA 4.0
Abstract: Federated Learning (FL) has emerged as a powerful paradigm in machine learning, enabling collaborative model training across decentralized devices while preserving data privacy. A central challenge, however, is maintaining strong model performance while keeping communication and computation costs low. This paper addresses that challenge with FedLRS, a federated learning framework that integrates low-rank and sparse decomposition. We treat the low-rank factors as separate neural network layers and use the decomposed weights to represent sparse features, a design that enables training without reconstructing the original parameters. To improve stability and accelerate convergence, we employ spectral initialization and intermediate normalization, and we impose a reconstruction-based sparsity constraint to exploit sparsity effectively. Experiments on the CIFAR10 and CIFAR100 benchmarks show that FedLRS achieves strong accuracy while reducing communication and computation costs, outperforming existing methods.
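To make the layer-level idea concrete, the following is a minimal sketch of a low-rank plus sparse linear layer in PyTorch. It is an illustrative assumption of how the abstract's description could be realized, not the authors' implementation: the module name, the placement of the intermediate normalization, and the use of a truncated SVD for spectral initialization are all hypothetical choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LowRankSparseLinear(nn.Module):
    """Hypothetical layer factoring a dense weight W as U @ V plus a sparse residual S.

    The forward pass applies V, an intermediate normalization, then U, and adds the
    sparse branch, so the full dense weight is never reconstructed during training.
    """

    def __init__(self, in_features, out_features, rank, dense_init=None):
        super().__init__()
        self.U = nn.Parameter(torch.empty(out_features, rank))
        self.V = nn.Parameter(torch.empty(rank, in_features))
        # Sparse residual; in practice it would be kept sparse by a
        # reconstruction-based / L1-style penalty added to the training loss.
        self.S = nn.Parameter(torch.zeros(out_features, in_features))
        # Intermediate normalization on the rank-sized bottleneck (placement assumed).
        self.norm = nn.LayerNorm(rank)
        self.bias = nn.Parameter(torch.zeros(out_features))

        if dense_init is not None:
            # Spectral initialization (assumed form): truncated SVD of an initial
            # dense weight, with singular values split across the two factors.
            Uf, s, Vh = torch.linalg.svd(dense_init, full_matrices=False)
            sqrt_s = s[:rank].sqrt()
            with torch.no_grad():
                self.U.copy_(Uf[:, :rank] * sqrt_s)
                self.V.copy_(sqrt_s.unsqueeze(1) * Vh[:rank, :])
        else:
            nn.init.kaiming_uniform_(self.U)
            nn.init.kaiming_uniform_(self.V)

    def forward(self, x):
        # Low-rank path: x -> V -> normalize -> U, without forming U @ V explicitly.
        low_rank = F.linear(self.norm(F.linear(x, self.V)), self.U)
        # Sparse residual path.
        sparse = F.linear(x, self.S)
        return low_rank + sparse + self.bias
```

In a federated setting, only the factors U, V and the (sparse) residual S would be exchanged with the server, which is where the communication savings of such a decomposition would come from; the rank and the exact sparsity penalty are tunable assumptions in this sketch.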