Routing-Deconstructed LoRA in Federated Fine-Tuning

04 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: LLMs, Parameter-Efficient Fine-Tuning, Federated Learning, LoRA, Resource Heterogeneity
Abstract: The integration of Large Language Models (LLMs) with Federated Learning (FL) offers a promising approach to privacy-preserving Parameter-Efficient Fine-Tuning (PEFT). However, resource and data heterogeneity in FL cause differences in local knowledge distribution across clients. As a representative PEFT approach, LoRA still faces three key challenges in such settings: aggregation noise, knowledge contamination, and aggregation distortion. To address these issues, we propose Routing-Deconstructed LoRA (RD-LoRA). Building on an alternating freezing strategy that mitigates aggregation noise while concurrently reducing communication cost, RD-LoRA further introduces two novel components. To counter knowledge contamination, we design a Server-Client Routing Deconstructor (SCRD) that separates shared semantics from local biases, retaining fine-grained knowledge with semantic consistency. To address aggregation distortion, we propose a Poly-Consensus Aggregation (PCA) mechanism that uses adaptive weighted averaging to align global LoRA parameters with heterogeneous client distributions, thus correcting the global update direction. Extensive experiments demonstrate that RD-LoRA is effective and robust in both homogeneous and heterogeneous settings.
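The abstract's alternating-freezing idea, combined with server-side weighted averaging, can be illustrated with a toy sketch. This is not the paper's exact algorithm: the per-client update, the data-size-based weighting (standing in for the adaptive Poly-Consensus weights), and all function names below are simplifying assumptions for illustration only.

```python
# Illustrative sketch (assumed, not the authors' exact method):
# alternate which LoRA factor (A or B) is trainable each round,
# so only one factor is communicated and averaged per round.
import random

def local_update(param, grad_scale=0.1):
    # Placeholder for a client's local fine-tuning step on the
    # currently trainable LoRA factor (random perturbation here).
    return [p - grad_scale * random.uniform(-1, 1) for p in param]

def weighted_average(client_params, client_weights):
    # Server-side aggregation: coordinate-wise weighted average.
    # Weighting by local data size is an assumed stand-in for the
    # paper's adaptive PCA weighting.
    total = sum(client_weights)
    dim = len(client_params[0])
    return [
        sum(w * p[i] for p, w in zip(client_params, client_weights)) / total
        for i in range(dim)
    ]

def federated_round(round_idx, A, B, client_sizes):
    # Alternating freezing: train A on odd rounds, B on even rounds,
    # halving per-round upload since the frozen factor stays put.
    train_A = round_idx % 2 == 1
    base = A if train_A else B
    updates = [local_update(base) for _ in client_sizes]
    merged = weighted_average(updates, client_sizes)
    return (merged, B) if train_A else (A, merged)

random.seed(0)
A, B = [0.0] * 4, [0.0] * 4      # toy flattened LoRA factors
client_sizes = [100, 50, 25]     # heterogeneous local data sizes
for r in range(4):
    A, B = federated_round(r, A, B, client_sizes)
print(len(A), len(B))
```

The alternation means each round's aggregation touches only one low-rank factor, which is one simple way to avoid the noise introduced by averaging the products of independently updated A and B matrices.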
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 2021