SVDFed: Enabling Communication-Efficient Federated Learning via Singular-Value-Decomposition

Published: 01 Jan 2023 · Last Modified: 06 Feb 2025 · INFOCOM 2023 · CC BY-SA 4.0
Abstract: Federated learning (FL) is an emerging paradigm of distributed machine learning. However, when applied in wireless network scenarios, FL usually suffers from high communication cost because clients need to transmit their updated gradients to a server in every training round. Although many gradient compression techniques, such as sparsification and quantization, have been proposed, they compress clients' gradients independently, without exploiting the correlations among gradients. In this paper, we propose SVDFed, a collaborative gradient compression framework for FL. SVDFed uses Singular Value Decomposition (SVD) to find a few basis vectors whose linear combination can well represent clients' gradients at a given round. Owing to the correlations among gradients, these basis vectors can still approximate new gradients well in many subsequent rounds. With the help of the basis vectors, clients only need to upload the coefficients of the linear combination to the server, which greatly reduces communication cost. In addition, SVDFed leverages classical PID (Proportional-Integral-Derivative) control to determine the proper time to update the basis vectors so as to maintain their representation ability. Through experiments, we demonstrate that SVDFed outperforms existing gradient compression methods in FL. For example, compared to QSGD, a popular gradient quantization method, SVDFed reduces communication overhead by 66% and pending time by 99%.
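To make the mechanism concrete, below is a minimal Python/NumPy sketch of the two ideas the abstract describes: compressing a gradient to a few SVD-basis coefficients, and running a PID controller on the reconstruction error to decide when the basis should be refreshed. This is an illustration, not the authors' implementation; the function names, PID gains, threshold, and the synthetic low-rank gradient model are all assumptions for the sake of the example.

```python
import numpy as np

# --- SVD-based gradient compression (server computes the basis) ---

def compute_basis(grad_matrix: np.ndarray, k: int) -> np.ndarray:
    """Stack past client gradients as columns of grad_matrix (d x n) and
    keep the top-k left singular vectors as an orthonormal basis (d x k)."""
    U, _, _ = np.linalg.svd(grad_matrix, full_matrices=False)
    return U[:, :k]

def compress(grad: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Client side: project the d-dim gradient onto the basis and
    upload only k coefficients instead of d values."""
    return basis.T @ grad

def decompress(coeffs: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Server side: reconstruct the gradient as a linear combination
    of the basis vectors."""
    return basis @ coeffs

# --- PID control on reconstruction error to trigger a basis refresh ---
# (hypothetical controller; the abstract does not give gains or thresholds)

class PIDTrigger:
    """Tracks the relative reconstruction error and signals a basis
    update when the PID control output crosses a threshold."""
    def __init__(self, kp=1.0, ki=0.1, kd=0.5, threshold=0.3):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.threshold = threshold
        self.integral = 0.0
        self.prev_err = 0.0

    def should_update(self, err: float) -> bool:
        self.integral += err                       # I term accumulates error
        derivative = err - self.prev_err           # D term tracks its trend
        self.prev_err = err
        output = self.kp * err + self.ki * self.integral + self.kd * derivative
        return output > self.threshold

# --- toy round: correlated (low-rank) gradients compress well ---
d, n, k = 10_000, 64, 16
rng = np.random.default_rng(0)
core = rng.standard_normal((d, k))                 # hidden low-rank structure
history = core @ rng.standard_normal((k, n)) \
    + 0.01 * rng.standard_normal((d, n))           # past gradients, one per column
basis = compute_basis(history, k)

g_new = core @ rng.standard_normal(k) \
    + 0.01 * rng.standard_normal(d)                # new gradient, correlated with history
coeffs = compress(g_new, basis)                    # k floats uploaded instead of d
g_hat = decompress(coeffs, basis)
err = np.linalg.norm(g_new - g_hat) / np.linalg.norm(g_new)

pid = PIDTrigger()
if pid.should_update(err):                         # representation quality degraded
    basis = compute_basis(history, k)              # server redistributes a fresh basis
```

Under this scheme, each client uploads k coefficients in place of d gradient entries, so per-round upload traffic shrinks by roughly a factor of d/k for as long as the basis remains representative.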