Abstract: Federated fine‑tuning of foundation models is impeded by the need to communicate billions of parameters. Low‑rank adaptation (LoRA) alleviates this by updating only compact adapter matrices. However, varying client device capabilities lead to different adapter ranks, causing rank heterogeneity that undermines aggregation, and existing reconciliation methods still incur bias or inefficiency. To address this challenge, we propose RA-LoRA, a principled rank‑aware aggregation framework that decomposes each update into rank‑wise components and aligns them using analytically derived weights. Experiments on both language models and vision transformers demonstrate consistent accuracy improvements in one‑shot and three‑shot settings.
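The rank-wise decomposition described in the abstract can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: each client's LoRA update B_i A_i is split into rank-1 components B_i[:, k] A_i[k, :], and components at each rank index are averaged across the clients that possess them. The uniform per-component weights are a hypothetical placeholder for RA-LoRA's analytically derived weights, and the helper name `rank_wise_aggregate` is our own.

```python
import numpy as np

def rank_wise_aggregate(client_updates, d_out=8, d_in=8):
    """Aggregate heterogeneous-rank LoRA updates component-wise.

    client_updates: list of (B_i, A_i) with B_i of shape (d_out, r_i)
    and A_i of shape (r_i, d_in), where ranks r_i may differ.
    Each product B_i @ A_i is decomposed into rank-1 components,
    and the k-th components are averaged over the clients whose
    rank exceeds k (uniform weights; the paper derives these
    weights analytically instead).
    """
    r_max = max(B.shape[1] for B, _ in client_updates)
    agg = np.zeros((d_out, d_in))
    for k in range(r_max):
        # Only clients with rank > k contribute a k-th component.
        contributors = [(B, A) for B, A in client_updates if B.shape[1] > k]
        w = 1.0 / len(contributors)
        for B, A in contributors:
            agg += w * np.outer(B[:, k], A[k, :])
    return agg
```

With a single client, this reduces to the client's own update B @ A, since every rank-1 component is kept with weight 1.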
Paper Type: Short
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: parameter-efficient-training, data-efficient training, NLP in resource-constrained settings
Contribution Types: NLP engineering experiment, Approaches to low-resource settings, Approaches to low compute settings - efficiency
Languages Studied: English
Reassignment Request Area Chair: This is not a resubmission
Reassignment Request Reviewers: This is not a resubmission
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: Yes
A2 Elaboration: Discussed in Section 7
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: Cited in Section 4 and References
B2 Discuss The License For Artifacts: Yes
B2 Elaboration: Discussed in Section 4
B3 Artifact Use Consistent With Intended Use: Yes
B3 Elaboration: Section 4
B4 Data Contains Personally Identifying Info Or Offensive Content: Yes
B4 Elaboration: Section 4
B5 Documentation Of Artifacts: Yes
B5 Elaboration: Appendix
B6 Statistics For Data: Yes
B6 Elaboration: Section 4, Table 1
C Computational Experiments: Yes
C1 Model Size And Budget: Yes
C1 Elaboration: Section 4, Appendix B
C2 Experimental Setup And Hyperparameters: Yes
C2 Elaboration: Section 4, Appendix B
C3 Descriptive Statistics: Yes
C3 Elaboration: Table 1, 2
C4 Parameters For Packages: Yes
C4 Elaboration: Section 4 (FedIT framework)
D Human Subjects Including Annotators: No
D1 Instructions Given To Participants: N/A
D2 Recruitment And Payment: N/A
D3 Data Consent: N/A
D4 Ethics Review Board Approval: N/A
D5 Characteristics Of Annotators: N/A
E Ai Assistants In Research Or Writing: Yes
E1 Information About Use Of Ai Assistants: No
E1 Elaboration: We used ChatGPT only for correcting grammatical errors.
Author Submission Checklist: Yes
Submission Number: 675