Communication-Efficient Federated Low-Rank Update Algorithm and its Connection to Implicit Regularization

16 Apr 2026 (modified: 01 May 2026) · Under review for TMLR · CC BY 4.0
Abstract: Federated Learning (FL) faces significant challenges related to communication efficiency and performance degradation when scaling to many clients. To address these issues, we explore the potential of low-rank updates and provide the first theoretical study of rank properties in FL. Our analysis shows that a client’s loss exhibits a higher-rank structure (i.e., gradients span higher-rank subspaces of the Hessian) than the server’s loss, and that low-rank approximations of the clients’ gradients are more similar to one another. Based on this insight, we hypothesize that constraining client-side optimization to a low-rank subspace provides an implicit regularization effect while reducing communication costs. Consequently, we propose FedLoRU, a general low-rank update framework for FL. Our framework enforces low-rank client-side updates and accumulates these updates to form a higher-rank model. We establish convergence of the algorithm, with a rate matching that of FedAvg. Experimental results demonstrate that FedLoRU performs comparably to full-rank algorithms and is robust to heterogeneous and large numbers of clients.
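The accumulation mechanism described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the layer sizes, the random stand-in for client training, and the plain averaging step are all illustrative assumptions; the only idea taken from the abstract is that each client communicates a rank-limited update while the server-side model accumulates these updates and can therefore exceed that rank over rounds.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank = 8, 8, 2        # illustrative layer shape and client update rank
num_clients, num_rounds = 4, 3

W = rng.normal(size=(d_out, d_in))  # shared server-side model (full rank)

for _ in range(num_rounds):
    deltas = []
    for _ in range(num_clients):
        # Each client optimizes only low-rank factors B and A, so its
        # transmitted update B @ A has rank at most `rank`. Real client
        # training is mocked here with small random factors.
        B = rng.normal(scale=0.01, size=(d_out, rank))
        A = rng.normal(scale=0.01, size=(rank, d_in))
        deltas.append(B @ A)
    # The server averages the low-rank client updates and accumulates them
    # into W; summing rank-`rank` updates across rounds lets the model
    # itself reach a higher rank than any single communicated update.
    W += np.mean(deltas, axis=0)
```

Communication per client per round is (d_out + d_in) * rank numbers instead of d_out * d_in, which is the source of the efficiency gain when rank is small.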
Submission Type: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=HgRGAPPLMH
Changes Since Last Submission: The previous submission was desk-rejected for violating the anonymity policy. In this resubmission, we have revised the supplementary material to strictly adhere to the previous Action Editor’s guidance regarding anonymity and code attribution. Specifically, we have updated the source code to remove personal identifiers and GitHub usernames that could compromise the double-blind review process.
Assigned Action Editor: ~Sebastian_U_Stich1
Submission Number: 8478