Alternating Aggregation Low-Rank Adaptation Approach for Federated Large Models

Published: 2025 · Last Modified: 07 Jan 2026 · ADMA (1) 2025 · CC BY-SA 4.0
Abstract: Low-rank adaptation (LoRA) is a parameter-efficient fine-tuning (PEFT) method for adapting pre-trained large language models to specific downstream tasks. Due to its flexibility and computational efficiency, LoRA has become the preferred approach within the PEFT framework. However, when applied in federated learning environments, LoRA often exhibits training instability. This instability primarily arises from two factors: (1) the direct integration of the traditional federated averaging algorithm with the LoRA adapter, which can cause errors in parameter updates during model aggregation, since averaging the factors \(\mathbf{A}\) and \(\mathbf{B}\) separately does not, in general, equal averaging their product; and (2) the amplification of noise introduced to satisfy differential privacy requirements. To address these challenges, this paper introduces an Alternating Federated Low-Rank Adaptation (AF-LoRA) method. AF-LoRA enhances training stability by alternating the upload and aggregation of matrices \(\mathbf{A}\) and \(\mathbf{B}\), while incurring only 50% of the communication cost of the standard federated LoRA method. Specifically, in each communication round, only one matrix is uploaded for global aggregation, while the other matrix remains local and is optimized for the next round. Extensive experiments demonstrate that AF-LoRA significantly outperforms traditional federated LoRA methods in both standard and privacy-preserving federated learning scenarios.
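The alternating round structure described in the abstract can be illustrated with a short simulation. The sketch below is a minimal, hypothetical rendering of the protocol, not the authors' implementation: the `local_update` placeholder, the matrix shapes, and all hyperparameters are assumptions made for illustration. On even rounds clients upload only \(\mathbf{A}\) for federated averaging; on odd rounds only \(\mathbf{B}\); the factor not uploaded keeps adapting locally until its turn.

```python
import numpy as np

# Hypothetical sketch of AF-LoRA's alternating aggregation.
# Assumptions (not from the paper): matrix shapes, client count,
# round count, and the `local_update` stand-in for local fine-tuning.
rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 8
n_clients, n_rounds = 4, 6

clients = [
    {"A": rng.normal(0, 0.02, (r, d_in)),  # A: Gaussian init, per LoRA convention
     "B": np.zeros((d_out, r))}            # B: zero init, per LoRA convention
    for _ in range(n_clients)
]

def local_update(client):
    # Placeholder for local SGD on the client's private data; a small
    # random perturbation here just makes the protocol's data flow visible.
    client["A"] += rng.normal(0, 1e-3, client["A"].shape)
    client["B"] += rng.normal(0, 1e-3, client["B"].shape)

for t in range(n_rounds):
    for c in clients:
        local_update(c)
    # Alternation: upload and average only one factor per round; the other
    # factor stays local. Shipping a single matrix halves the per-round
    # communication relative to federated LoRA that uploads both A and B.
    key = "A" if t % 2 == 0 else "B"
    global_factor = np.mean([c[key] for c in clients], axis=0)  # FedAvg on one factor
    for c in clients:
        c[key] = global_factor.copy()  # broadcast the aggregated factor
```

Under this reading, only one factor of the product \(\mathbf{B}\mathbf{A}\) is averaged at a time, which avoids the mismatch that arises when both factors are averaged independently in a single round.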