FedPara: Low-rank Hadamard Product for Communication-Efficient Federated Learning

29 Sept 2021, 00:30 (edited 14 Mar 2022) · ICLR 2022 Poster
  • Keywords: Federated learning, Parameterization, Communication efficiency
  • Abstract: In this work, we propose a communication-efficient parameterization, $\texttt{FedPara}$, for federated learning (FL) to overcome the burden of frequent model uploads and downloads. Our method re-parameterizes the weights of layers as low-rank factors combined via the Hadamard product. Unlike conventional low-rank parameterization, $\texttt{FedPara}$ is not restricted to a low-rank constraint and therefore has a far larger capacity. This property enables it to achieve comparable performance while requiring 3 to 10 times lower communication costs than the model with the original layers, which is not achievable with traditional low-rank methods (a minimal illustrative sketch of the parameterization follows below). The efficiency of our method can be further improved by combining it with other efficient FL optimizers. In addition, we extend our method to a personalized FL application, $\texttt{pFedPara}$, which separates parameters into global and local ones. We show that $\texttt{pFedPara}$ outperforms competing personalized FL methods with more than three times fewer parameters.
  • One-sentence Summary: New communication-efficient neural network parameterization for federated learning.
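
A minimal sketch of the low-rank Hadamard product parameterization described in the abstract, i.e. composing the weight as $W = (X_1 Y_1^\top) \odot (X_2 Y_2^\top)$ from small factors. This is not the authors' reference implementation; the class name, rank choice, and initialization are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LowRankHadamardLinear(nn.Module):
    """Linear layer whose weight is the Hadamard product of two low-rank matrices."""

    def __init__(self, in_features: int, out_features: int, rank: int):
        super().__init__()
        # Two pairs of low-rank factors; the Hadamard product of two rank-r
        # matrices can reach rank up to r**2, so capacity is not limited to r.
        self.x1 = nn.Parameter(torch.empty(out_features, rank))
        self.y1 = nn.Parameter(torch.empty(in_features, rank))
        self.x2 = nn.Parameter(torch.empty(out_features, rank))
        self.y2 = nn.Parameter(torch.empty(in_features, rank))
        for p in (self.x1, self.y1, self.x2, self.y2):
            nn.init.kaiming_uniform_(p, a=5 ** 0.5)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, inp: torch.Tensor) -> torch.Tensor:
        # Recompose the full weight on the fly; only the small factors
        # (and bias) would need to be communicated between server and clients.
        weight = (self.x1 @ self.y1.t()) * (self.x2 @ self.y2.t())
        return nn.functional.linear(inp, weight, self.bias)
```

As an illustrative example (numbers are ours, not from the paper), with `in_features=256`, `out_features=128`, and `rank=8`, the four factors hold 2 × 8 × (256 + 128) = 6,144 parameters versus 32,768 for the dense weight, while the composed weight can have rank up to 64.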
