LoRA-DV: Spectral Rethinking for Reducing Task Interference via Difference Vector in Model Merging

ACL ARR 2026 January Submission 5568 Authors

05 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Model Merging, Multi-task Learning, Transfer Learning, Language Models
Abstract: Model merging integrates multi-task capabilities into Large Language Models without retraining, and spectral-space methods offer better disentanglement of task interference than traditional parameter-space approaches. However, current spectral approaches rely on static, one-off operations such as truncation, which lack the granularity to resolve residual feature conflicts, yielding a suboptimal merged model with unsatisfactory multi-task performance. To bridge this gap, we propose LoRA-DV, a universal post-hoc framework that continuously refines merged weights via a Spectral Rethinking Mechanism. By employing iterative anisotropic scaling to modulate Difference Vectors (DVs)---defined as parameter displacements from the pre-trained state that encapsulate historical optimization knowledge---LoRA-DV acts as a high-precision spectral equalizer, suppressing noise and amplifying task signals with minimal learnable parameters. Experiments show that LoRA-DV significantly enhances existing baselines, effectively reducing spectral interference and boosting multi-task performance through fine-grained calibration.
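To make the spectral-equalizer idea concrete, below is a minimal sketch of what iterative anisotropic scaling of a difference vector could look like. This is not the authors' implementation: the function name `spectral_rethink`, the optimizer choice, the step count, and the calibration loss `loss_fn` are all hypothetical. The sketch only illustrates the mechanism named in the abstract: take the DV (merged weights minus pre-trained weights), decompose it with an SVD, attach one learnable scale per singular value, and optimize those scales while keeping everything else frozen.

```python
# Minimal sketch of spectral rethinking on a difference vector (DV).
# Hypothetical illustration of the abstract's mechanism, not the paper's code:
# anisotropically rescale the DV's singular values with a small set of
# learnable parameters, then add the calibrated DV back to the base weights.
import torch


def spectral_rethink(w_pretrained: torch.Tensor,
                     w_merged: torch.Tensor,
                     loss_fn,
                     steps: int = 100,
                     lr: float = 1e-2) -> torch.Tensor:
    """Refine one merged weight matrix by equalizing its DV spectrum."""
    dv = w_merged - w_pretrained                     # difference vector
    u, s, vh = torch.linalg.svd(dv, full_matrices=False)
    # One learnable scale per singular value ("anisotropic" scaling),
    # initialized at 1 so optimization starts from the merged model.
    scale = torch.ones_like(s, requires_grad=True)
    opt = torch.optim.Adam([scale], lr=lr)
    for _ in range(steps):
        # Reconstruct the weight with the rescaled spectrum; gradients
        # flow only into `scale`, since u, s, vh are fixed constants.
        w = w_pretrained + u @ torch.diag(s * scale) @ vh
        loss = loss_fn(w)                            # e.g. held-out task loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return w_pretrained + u @ torch.diag(s * scale) @ vh
```

Under this reading, the learnable parameter count per layer is just the number of singular values of its DV, which is consistent with the abstract's claim of fine-grained calibration with minimal learnable parameters.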
Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: multi-task learning, transfer learning
Languages Studied: English
Submission Number: 5568