DiPM: Decoupling and Recombining of Parameters in Low-Rank Adaptation for Module Ability Integration

ACL ARR 2026 January Submission1579 Authors

30 Dec 2025 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Large Language Model, Low-Rank Adaptation, Unlearning, Multi-tasking, Transfer Learning
Abstract: Low-rank adaptation (LoRA) has proven effective for adapting large language models (LLMs) to downstream tasks using two low-rank matrices $A$ and $B$. Existing work typically treats LoRA modules as atomic units and designs different operations to integrate module abilities for complex tasks, such as linear arithmetic operations for detoxification and mixtures of experts for multi-task learning. Although effective, such coarse-grained operations fail to precisely identify and control the distinct abilities in modules. This limits the effective integration of specific abilities and can even impair the general abilities of models. Moreover, it increases reliance on downstream data, which runs counter to the intent of LoRA. To address these issues, we conduct an in-depth analysis of the LoRA learning mechanism to identify the distinct roles of the different matrices. We then introduce the \textbf{Di}rectional \textbf{P}arameter \textbf{M}odulations framework (DiPM), which effectively integrates and flexibly controls specific abilities in modules. Specifically, we first use a \textit{decoupler} to decouple parameters into direction and magnitude. We then develop a \textit{modulator} for fine-grained module operations, which can flexibly apply different operators to realize specific downstream objectives (e.g., a reversing operator for unlearning and a shifting operator for transfer). Finally, a \textit{recombiner} recombines direction and magnitude to obtain a target module with modulated abilities. Empirical results on LoRA and its variant rsLoRA across various tasks show that DiPM outperforms existing baselines in both specific ability integration and general ability preservation. We release the code to facilitate research\footnote{https://anonymous.4open.science/r/DiPM-666}.
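The decouple–modulate–recombine pipeline described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes a column-wise norm/direction factorization of the LoRA update $\Delta W = BA$, and the function names (`decouple`, `modulate_reverse`, `recombine`) and the sign-flip "reversing operator" are hypothetical stand-ins for DiPM's actual operators.

```python
import numpy as np

def decouple(delta_w, eps=1e-8):
    """Split a LoRA update into column-wise magnitude and unit direction."""
    magnitude = np.linalg.norm(delta_w, axis=0, keepdims=True)
    direction = delta_w / (magnitude + eps)  # eps guards against zero columns
    return magnitude, direction

def modulate_reverse(direction):
    """Illustrative reversing operator: flip the update direction (unlearning)."""
    return -direction

def recombine(magnitude, direction):
    """Recombine magnitude and (modulated) direction into a target update."""
    return magnitude * direction

# Toy LoRA update delta_W = B @ A with rank 2
rng = np.random.default_rng(0)
B = rng.standard_normal((8, 2))
A = rng.standard_normal((2, 4))
delta_w = B @ A

m, d = decouple(delta_w)
reversed_w = recombine(m, modulate_reverse(d))
# Reversing only the direction negates the update while preserving its magnitude
assert np.allclose(reversed_w, -delta_w, atol=1e-5)
```

Other operators slot into the same pipeline by replacing `modulate_reverse`, e.g. rescaling the magnitude or shifting the direction toward another module's, while the decouple/recombine steps stay fixed.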
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: free-text/natural language explanations; knowledge tracing/discovering/inducing; model editing; robustness
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 1579