When Does LoRA Reuse Work? Theoretical Limits and Mechanisms for Recycling LoRAs Without Data Access

Published: 14 Feb 2026, Last Modified: 14 Feb 2026. Accepted by TMLR. License: CC BY 4.0
Abstract: Reusing low-rank adapters (LoRAs) by merging or routing is a common strategy for adapting large language models to new tasks, especially when training data is unavailable but many fine-tuned LoRAs are accessible. While the availability of publicly shared LoRA weights has inspired new algorithms for composing them to solve new tasks, recent findings highlight limitations in LoRA’s ability to integrate new knowledge. This work investigates when LoRA reuse succeeds on compositional factual and reasoning tasks. Through theoretical analysis in a simplified setup and experiments on a controlled synthetic two-hop reasoning task, with extensions to math word problems, cross-lingual code generation, and history/geography QA, we show that data-agnostic methods, such as parameter averaging and dynamic selection, often fail to combine knowledge from logically disjoint fine-tuning datasets. This challenge is particularly pronounced when the relevant knowledge is underrepresented during pretraining. However, reuse can succeed when fine-tuning datasets share solution templates, such as reasoning patterns or reusable code, which serve as bridges among tasks. Our results suggest that LoRA reuse relies more on shallow pattern matching than on logical integration of existing knowledge. This mechanism-based perspective offers practical guidance for curating datasets and designing systems that enable LoRA reuse to overcome data-access limitations. Our findings indicate that future research should focus on the mechanisms enabling effective adapter reuse rather than solely on developing new reuse algorithms.
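The abstract mentions parameter averaging as a data-agnostic reuse method. The following minimal sketch (all matrix names and dimensions are hypothetical stand-ins, not the paper's setup) illustrates one structural subtlety of merging LoRAs: averaging the low-rank factors of two adapters is generally not the same as averaging the full weight updates they represent, because cross terms between the adapters appear.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # hypothetical hidden dimension and LoRA rank

# Two LoRA adapters, each parameterizing a weight update delta = B @ A.
# Random matrices stand in for adapters fine-tuned on disjoint tasks.
A1, B1 = rng.normal(size=(r, d)), rng.normal(size=(d, r))
A2, B2 = rng.normal(size=(r, d)), rng.normal(size=(d, r))

# Option 1: average the factors themselves, then form the update.
A_avg, B_avg = (A1 + A2) / 2, (B1 + B2) / 2
delta_from_avg_factors = B_avg @ A_avg

# Option 2: average the full-rank updates each adapter represents.
delta_from_avg_updates = (B1 @ A1 + B2 @ A2) / 2

# The two generally differ: expanding option 1 introduces the
# cross terms B1 @ A2 and B2 @ A1, which option 2 does not contain.
print(np.allclose(delta_from_avg_factors, delta_from_avg_updates))  # → False
```

Either variant is "data-agnostic" in the abstract's sense: both operate purely on adapter weights, with no access to the fine-tuning data.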
Submission Type: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Dear TMLR review committee, in the camera-ready version we have added the new experimental results from the rebuttal, improved readability, and corrected typos. Thank you for the constructive feedback; we look forward to publication.
Assigned Action Editor: ~Ankit_Singh_Rawat1
Submission Number: 6274