Fed-SB: A Silver Bullet for Extreme Communication Efficiency and Performance in (Private) Federated LoRA Fine-Tuning

Published: 11 Jun 2025, Last Modified: 10 Jul 2025
Venue: ES-FoMo III
License: CC BY 4.0
Keywords: LoRA, Low-rank adaptation, Fine-tuning, Federated fine-tuning, Foundation Models
TL;DR: We introduce Fed-SB, an efficient and scalable approach for (private) federated fine-tuning of LLMs, achieving state-of-the-art performance while drastically reducing communication costs.
Abstract: Low-Rank Adaptation (LoRA) is widely used for efficient fine-tuning, but federated settings pose challenges due to suboptimal adapter averaging. We propose **Federated Silver Bullet (Fed-SB)**, a scalable and communication-efficient method for federated fine-tuning based on LoRA-SB, which introduces a small learnable matrix $R$ between frozen adapters. By directly averaging $R$, Fed-SB enables exact aggregation and decouples communication cost from the number of clients. It achieves **state-of-the-art performance** across commonsense reasoning, arithmetic reasoning, and language inference tasks while reducing communication costs by up to **230x**. Fed-SB is especially well-suited for private settings, reducing trainable parameters and avoiding noise amplification. Our code is available at: https://github.com/CERT-Lab/fed-sb.
Submission Number: 62
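
Below is a minimal sketch of the aggregation idea described in the abstract, not the authors' implementation (see the linked repository for that). It shows a LoRA-SB-style linear layer with frozen outer adapters $B$ and $A$ and a small trainable $r \times r$ matrix $R$, plus the server-side step in which only the clients' $R$ matrices are averaged. Names such as `LoRASBLinear` and `aggregate_R` are illustrative assumptions, and the random initialization of $B$ and $A$ stands in for LoRA-SB's gradient-based initialization.

```python
# Sketch only: LoRA-SB-style adapter with a trainable r x r matrix R and
# exact Fed-SB-style aggregation by averaging R across clients.
import copy
import torch
import torch.nn as nn


class LoRASBLinear(nn.Module):
    """Frozen base weight W0, frozen adapters B (out x r) and A (r x in),
    and a trainable r x r matrix R; effective weight is W0 + B @ R @ A."""

    def __init__(self, base: nn.Linear, r: int = 8):
        super().__init__()
        out_f, in_f = base.weight.shape
        self.base = base
        self.base.weight.requires_grad_(False)  # W0 stays frozen
        # LoRA-SB initializes B and A from the first full gradient; random
        # init here is an assumption that keeps the sketch self-contained.
        self.B = nn.Parameter(torch.randn(out_f, r) / r, requires_grad=False)
        self.A = nn.Parameter(torch.randn(r, in_f) / r, requires_grad=False)
        self.R = nn.Parameter(torch.zeros(r, r))  # only trainable part

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ (self.B @ self.R @ self.A).T


def aggregate_R(client_Rs: list) -> torch.Tensor:
    """Because B and A are frozen and shared across clients,
    mean_k(B @ R_k @ A) == B @ mean_k(R_k) @ A, so averaging R is exact."""
    return torch.stack(client_Rs).mean(dim=0)


if __name__ == "__main__":
    torch.manual_seed(0)
    server = LoRASBLinear(nn.Linear(64, 64), r=4)

    # Simulate 3 clients that each locally perturb their copy of R.
    clients = [copy.deepcopy(server) for _ in range(3)]
    for c in clients:
        with torch.no_grad():
            c.R.add_(0.01 * torch.randn_like(c.R))  # stand-in for local training

    # Server round: communicate and average only the r x r matrices R.
    with torch.no_grad():
        server.R.copy_(aggregate_R([c.R.data for c in clients]))

    # Averaging R reproduces the average of the full low-rank updates exactly.
    avg_update = torch.stack([c.B @ c.R @ c.A for c in clients]).mean(dim=0)
    assert torch.allclose(server.B @ server.R @ server.A, avg_update, atol=1e-6)
    print("params communicated per client per round:", server.R.numel())
```

Under these assumptions, each client uploads only $r^2$ parameters per round regardless of model width or client count, which is the mechanism behind the communication savings and the exact (rather than approximate) aggregation claimed in the abstract.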