Federated Dynamical Low-Rank Training with Global Loss Convergence Guarantees

Published: 01 Oct 2024 · Last Modified: 17 Oct 2024 · FL@FM-NeurIPS'24 Oral · License: CC0 1.0
Keywords: Federated Learning, Low-Rank, Communication Efficient
Abstract: In this work, we propose a federated dynamical low-rank training (FeDLRT) scheme to reduce client compute and communication costs, two significant performance bottlenecks in horizontal federated learning. Our method builds upon dynamical low-rank splitting schemes for manifold-constrained optimization to create a global low-rank basis of network weights, which enables client training on a small coefficient matrix. A consistent global low-rank basis allows us to incorporate a variance correction scheme and prove global loss descent and convergence to a stationary point. Dynamic augmentation and truncation of the low-rank bases automatically optimize computing and communication resource utilization. We demonstrate the efficiency of FeDLRT on an array of computer vision benchmarks and show a reduction of client compute and communication costs by up to an order of magnitude with minimal impact on global accuracy.
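To make the high-level idea in the abstract concrete, the sketch below shows a toy federated round in which all clients share fixed global low-rank bases U and V, train only a small r×r coefficient matrix S locally, and the server averages the coefficients and re-compresses the weight with an SVD. This is a minimal illustration only: the least-squares objective, hyperparameters, and all function names are assumptions, and the paper's variance correction and basis augmentation steps are omitted; it is not the authors' FeDLRT algorithm.

```python
# Toy sketch of federated low-rank coefficient training (assumed setup, not FeDLRT itself).
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 8, 16, 4            # layer shape and working rank (assumed)
n_clients, n_rounds, local_steps, lr = 4, 30, 5, 0.1

# Synthetic per-client least-squares data (assumed toy objective).
W_true = rng.normal(size=(d_out, d_in))
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(64, d_in))
    clients.append((X, X @ W_true.T))

# Global orthonormal bases U, V shared by all clients; only S is trained locally.
U, _ = np.linalg.qr(rng.normal(size=(d_out, rank)))
V, _ = np.linalg.qr(rng.normal(size=(d_in, rank)))
S = np.zeros((rank, rank))

def local_coefficient_update(S, X, Y):
    """Gradient descent on the small coefficient matrix S with U, V frozen."""
    for _ in range(local_steps):
        W = U @ S @ V.T
        residual = X @ W.T - Y                   # (n, d_out)
        grad_W = residual.T @ X / len(X)         # d(loss)/dW for 0.5*mean squared error
        grad_S = U.T @ grad_W @ V                # chain rule: project gradient onto S
        S = S - lr * grad_S
    return S

for rnd in range(n_rounds):
    # Each client trains and communicates only the r x r coefficient matrix.
    client_S = [local_coefficient_update(S.copy(), X, Y) for X, Y in clients]
    S = np.mean(client_S, axis=0)                # server-side aggregation

    # Re-compress the aggregated weight via a small SVD (a stand-in for the
    # paper's dynamic augmentation/truncation of the low-rank bases).
    Us, sv, Vts = np.linalg.svd(U @ S @ V.T, full_matrices=False)
    U, V = Us[:, :rank], Vts[:rank, :].T
    S = np.diag(sv[:rank])

print("relative error:", np.linalg.norm(U @ S @ V.T - W_true) / np.linalg.norm(W_true))
```

The design point this illustrates is the communication saving: clients exchange only the r×r matrix S (here 4×4) rather than the full d_out×d_in weight, while the shared bases keep client updates in a common subspace so they can be averaged consistently.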
Submission Number: 50