Federated Dynamical Low-Rank Training with Global Loss Convergence Guarantees

27 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: Federated Learning, Low-Rank, Model Compression, Efficient Federated Learning
Abstract: We propose a federated dynamical low-rank training (FeDLRT) scheme to reduce client compute and communication costs, two significant performance bottlenecks in horizontal federated learning. Our method builds upon dynamical low-rank splitting schemes for manifold-constrained optimization to create a global low-rank basis of network weights, which enables client training on a small coefficient matrix. This global low-rank basis allows us to incorporate a variance correction scheme and prove global loss descent and convergence to a stationary point. FeDLRT features dynamic augmentation and truncation of the low-rank bases to optimize computing and communication resource utilization. Notably, FeDLRT trains only a small coefficient matrix per client. We demonstrate the efficiency of FeDLRT on an array of computer vision benchmarks with both i.i.d. and non-i.i.d. data distributions and show a reduction of client compute and communication costs by up to an order of magnitude with minimal impact on global accuracy. FeDLRT performs as well as classical methods such as FedAvg and FedLin, with a fraction of the memory and compute requirements.
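The abstract describes clients optimizing only a small coefficient matrix against a shared global low-rank basis. The following is a minimal sketch of that round structure for a single linear layer, assuming a simple least-squares client objective; all names (client_update, server_round) and the optimization details are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def client_update(S, U, V, data, lr=0.1, steps=5):
    """Client trains only the small r x r coefficient matrix S,
    keeping the shared global bases U (n x r) and V (m x r) frozen."""
    X, Y = data
    for _ in range(steps):
        W = U @ S @ V.T                      # reconstruct the low-rank weight
        residual = X @ W - Y                 # least-squares loss (assumed for illustration)
        grad_W = X.T @ residual / len(X)     # gradient w.r.t. the full weight
        grad_S = U.T @ grad_W @ V            # project the gradient onto the global bases
        S = S - lr * grad_S
    return S

def server_round(S, U, V, client_datasets):
    """Server broadcasts (S, U, V), collects updated coefficient matrices,
    and averages them; only the small S matrices travel back."""
    S_new = np.mean(
        [client_update(S.copy(), U, V, d) for d in client_datasets], axis=0
    )
    # The basis augmentation/truncation and variance correction steps described
    # in the abstract would follow here; they are omitted in this sketch.
    return S_new
```

Because only the r x r matrix S is trained and communicated per client, per-round cost scales with the low rank r rather than the full layer dimensions, which is the source of the claimed compute and communication savings.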
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 11197