Keywords: Low-Rank Adaptation, LoRA, Subspace Methods, Parameter-Efficient Fine-Tuning
TL;DR: Dynamic BB-OPLoRA improves the stability-plasticity trade-off in low-rank adaptation by preserving dominant pretrained directions in a rigid core while using a gradient-based pressure signal to guide adaptation in a flexible spectral border.
Abstract: Low-Rank Adaptation (LoRA) enables efficient fine-tuning of large language models through compact additive updates, but these updates can still interfere with pretrained directions that support broad model capabilities, causing catastrophic forgetting. The recent Orthogonal-Projection LoRA (OPLoRA) method protects dominant pretrained directions, but its rigid preservation rule can limit adaptation and hinder optimization near the spectral boundary. To address this, we introduce Dynamic Budgeted-Border Orthogonal-Projection LoRA (Dynamic BB-OPLoRA), a subspace-aware LoRA variant that replaces the strict preservation rule with a rigid-core/flexible-border decomposition of the top singular subspace of each pretrained weight matrix. The rigid core preserves the most dominant pretrained directions, while the flexible border allows controlled task-specific adaptation near the spectral boundary without destabilizing the core. The border update is governed by a stiffness-weighted budget that is dynamically adjusted using a gradient-derived pressure signal. We evaluate the method on commonsense reasoning, mathematical reasoning, and Python code generation. Experiments show that Dynamic BB-OPLoRA improves the stability-plasticity trade-off over LoRA and OPLoRA, achieving stronger adaptation while maintaining resistance to cross-domain forgetting.
Submission Number: 67