Rethinking LoRA Aggregation for Federated Fine-tuning of Foundation Models

ICLR 2026 Conference Submission17156 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Federated Fine-tuning, Low-rank adaptation, Foundation Models
TL;DR: We investigate fine-grained LoRA aggregation conflicts and aggregation noise in federated fine-tuning, and propose a solution (HFLoRA) that further improves federated fine-tuning performance for foundation models.
Abstract: The application of Low-Rank Adaptation (LoRA) in Federated Learning (FL) systems provides an effective way for Foundation Models (FMs) to leverage distributed private data. However, the heterogeneous distribution of client-side data has prevented federated systems from reaching their full performance potential. Through an in-depth investigation of this issue, we find that LoRA parameter aggregation among clients gives rise to fine-grained conflicts and introduces cross-term noise interference in subsequent rounds. Both factors impede the efficient convergence of federated fine-tuning. Based on these findings, we propose a Harmonious Federated Low-Rank Adaptation method (HFLoRA), which first detects conflicts in LoRA row update directions between clients through a fine-grained joint regulation mechanism, then imposes inhibitory constraints on anomalous conflicting rows using scaling factors. In addition, we design a global LoRA consistent re-decomposition strategy that further mitigates the impact of cross-term noise on FL by computing an optimal pair of low-rank matrices from the aggregated, noise-free global LoRA. HFLoRA is also applicable to federated environments with heterogeneous LoRA ranks and introduces no additional communication cost. Extensive experiments across natural language generation and vision tasks demonstrate that HFLoRA consistently outperforms other state-of-the-art FL methods on different benchmarks. Our code is available at: https://anonymous.4open.science/r/HFLoRA.
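The cross-term noise and re-decomposition ideas in the abstract can be illustrated with a minimal sketch. Averaging client matrices A and B separately yields (mean B)(mean A), whose expansion contains spurious cross-terms B_i A_j (i ≠ j); aggregating the full products B_k A_k instead and then taking a truncated SVD recovers an optimal rank-r factor pair. All names, the plain weighted average, and the SVD-based factorization below are illustrative assumptions, not the paper's exact HFLoRA procedure (which also includes conflict detection and scaling):

```python
import numpy as np

def redecompose_global_lora(client_As, client_Bs, weights, r):
    """Aggregate client LoRA products into a global update free of
    cross-term noise, then recover a rank-r pair (B, A) via truncated SVD.
    Hypothetical helper; not the paper's exact algorithm."""
    # Summing w_k * (B_k @ A_k) avoids the cross-terms B_i A_j that
    # appear when A and B are averaged separately.
    delta_w = sum(w * B @ A for w, A, B in zip(weights, client_As, client_Bs))
    U, S, Vt = np.linalg.svd(delta_w, full_matrices=False)
    B_new = U[:, :r] * S[:r]   # shape: d_out x r
    A_new = Vt[:r, :]          # shape: r x d_in
    return B_new, A_new

# Example: three clients with heterogeneous LoRA ranks
rng = np.random.default_rng(0)
d_out, d_in = 16, 12
ranks = [2, 4, 3]
As = [rng.standard_normal((rk, d_in)) for rk in ranks]
Bs = [rng.standard_normal((d_out, rk)) for rk in ranks]
B_g, A_g = redecompose_global_lora(As, Bs, [0.3, 0.4, 0.3], r=4)
```

By the Eckart–Young theorem, the truncated SVD gives the best rank-r approximation of the aggregated update, so no better pair (B, A) of that rank exists in the Frobenius-norm sense; note also that the server communicates a factor pair of the same size as a single client's LoRA, consistent with the abstract's claim of no extra communication cost.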
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 17156