LoRA Merging with SVD: Understanding Interference and Preserving Performance

Published: 01 Jul 2025 · Last Modified: 01 Jul 2025
ICML 2025 R2-FM Workshop Poster
License: CC BY 4.0
Keywords: SVD, LoRA-Merging, Efficiency, Model-Merging
TL;DR: We describe a general SVD-based framework that allows LoRA adapters to retain their low-rank shape and their performance when merged.
Abstract: Merging Low-Rank Adaptation (LoRA) modules is a problem of growing significance as LoRA adapters proliferate. Despite various approaches showing benchmark improvements, the field lacks clear guiding principles for effective LoRA merging. Two predominant strategies exist: direct merging (DM), which preserves a memory-efficient two-matrix structure but sacrifices performance, and multiplied merging (MM), which delivers superior results but abandons the memory-efficient, low-rank architecture. In this paper, we first show that DM introduces interfering cross-terms that degrade performance, while MM exhibits linear mode connectivity in the loss landscape, making it an optimal strategy for merging. We then demonstrate that merging with an SVD-based strategy combines MM's performance advantages with DM's memory efficiency, delivering the best of both approaches.
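As a rough illustration of the kind of SVD-based merging strategy the abstract describes (not the authors' exact procedure), the following minimal NumPy sketch sums the full products of several LoRA updates, as in multiplied merging, and then truncates the sum with an SVD back to a fixed rank, recovering the memory-efficient two-matrix form. The function name `merge_loras_svd`, the rank-splitting convention, and the shapes are illustrative assumptions.

```python
import numpy as np

def merge_loras_svd(adapters, rank):
    """Merge LoRA adapters while keeping a rank-`rank` two-matrix form.

    `adapters` is a list of (B, A) pairs, where each update is dW = B @ A,
    with B of shape (d_out, r_i) and A of shape (r_i, d_in).
    """
    # "Multiplied merging": sum the full products of all adapter updates.
    d_out, d_in = adapters[0][0].shape[0], adapters[0][1].shape[1]
    merged = np.zeros((d_out, d_in))
    for B, A in adapters:
        merged += B @ A

    # SVD-truncate the merged update back to the target rank so it can
    # again be stored as two low-rank factors.
    U, S, Vt = np.linalg.svd(merged, full_matrices=False)
    U_r, S_r, Vt_r = U[:, :rank], S[:rank], Vt[:rank, :]

    # Split the singular values across the two factors (one common convention).
    B_new = U_r * np.sqrt(S_r)             # shape (d_out, rank)
    A_new = np.sqrt(S_r)[:, None] * Vt_r   # shape (rank, d_in)
    return B_new, A_new

# Example: merge two rank-8 adapters for a 256x512 weight into one rank-8 pair.
rng = np.random.default_rng(0)
adapters = [(rng.standard_normal((256, 8)), rng.standard_normal((8, 512)))
            for _ in range(2)]
B, A = merge_loras_svd(adapters, rank=8)
print(B.shape, A.shape)  # (256, 8) (8, 512)
```

The truncation step is what distinguishes this from direct merging: rather than concatenating or averaging the B and A factors (which introduces the interfering cross-terms the paper analyzes), it approximates the merged full-rank update and only then re-factorizes it.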
Submission Number: 19