FLASC: Federated LoRA with Sparse Communication

TMLR Paper4919 Authors

23 May 2025 (modified: 30 May 2025) · Under review for TMLR · CC BY 4.0
Abstract: Low-rank adaptation (LoRA) is a promising method for finetuning models in communication-constrained settings such as cross-device federated learning (FL). Prior work has explored ways to improve the efficiency of LoRA in federated settings by imposing additional sparsity constraints. However, as we show, existing methods for sparse LoRA not only harm accuracy but can in fact increase overall communication costs. We instead propose FLASC, a simple approach with two key components: First, FLASC combines LoRA with sparse communication, which outperforms baselines such as using a lower LoRA rank or pruning LoRA weights. Second, FLASC-Search efficiently searches the space of sparsity-and-rank configurations by iteratively comparing pairs of configurations and increasing either the rank or density. Across four FL datasets, we demonstrate that FLASC outperforms existing sparse LoRA methods with up to 20% higher accuracy or 10x less communication. Our work highlights the importance of considering the constraints of existing efficient finetuning methods and provides a simple and competitive baseline for future work in federated finetuning.
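As a concrete illustration of the first component, the sketch below pairs LoRA finetuning with sparse communication of the client-to-server update. This is a minimal sketch under our own assumptions, not the authors' implementation: the abstract does not specify the sparsification rule (top-k magnitude selection is assumed here), and the names `lora_A`, `lora_B`, and `density` are illustrative.

```python
# Hypothetical sketch (not the paper's code): combining LoRA with sparse
# communication by sending only the largest-magnitude entries of each
# client's LoRA update. `density` is the fraction of entries kept.
import torch


def sparsify_topk(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Zero out all but the largest-magnitude `density` fraction of entries."""
    k = max(1, int(density * delta.numel()))
    flat = delta.flatten()
    threshold = flat.abs().topk(k).values.min()
    mask = flat.abs() >= threshold
    return (flat * mask).view_as(delta)


def client_update_message(lora_A: torch.Tensor, lora_B: torch.Tensor,
                          global_A: torch.Tensor, global_B: torch.Tensor,
                          density: float):
    """After local training, compute sparse LoRA deltas to upload."""
    delta_A = sparsify_topk(lora_A - global_A, density)
    delta_B = sparsify_topk(lora_B - global_B, density)
    # Only the nonzero indices and values would actually be transmitted.
    return delta_A.to_sparse(), delta_B.to_sparse()
```

In such a scheme a client uploads only the nonzero indices and values of the returned deltas, so the per-round communication cost is controlled by `density` in addition to the LoRA rank.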
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Haoliang_Li2
Submission Number: 4919