Keywords: Parameter-Efficient Fine-tuning, Low-Rank Adaptation, Adaptive Nonlinear Modulation
Abstract: Low-Rank Adaptation (LoRA) has emerged as an efficient fine-tuning paradigm for large models, injecting trainable low-rank updates into frozen weights. However, the inherent linearity and low rank of the LoRA update restrict its capacity to capture complex nonlinear semantics. Recent work demonstrates that applying nonlinear functions such as sine to the low-rank component can significantly enhance its expressiveness. These methods, however, typically rely on a static frequency and thus fail to accommodate input-dependent variation in the optimal perturbation scale. In this paper, we propose AdaSine-LoRA, a novel framework that integrates adaptive frequency modulation into the sine-activated LoRA formulation. Instead of using a fixed global frequency, our method dynamically generates a frequency coefficient conditioned on the input, enabling input-aware control over the perturbation pattern. We analyze the relationship between frequency and the effective rank of the perturbed weight space, and empirically show that adaptive frequency yields consistent performance gains across diverse tasks with minimal parameter overhead. Our approach thus provides a lightweight yet effective mechanism for enhancing LoRA's expressivity by aligning perturbation dynamics with task-specific input structure. Extensive experiments demonstrate that this design consistently outperforms LoRA and its variants across a wide range of downstream tasks, including large language model fine-tuning and visual instruction tuning.
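The abstract sketches the core idea without equations. As a rough illustration of what an input-conditioned sine-activated update might look like, the snippet below implements a forward pass of the form y = Wx + B sin(ω(x) · Ax), where ω(x) is produced by a tiny trainable head. All names and design details here (the shape of the frequency head, the zero-initialization of B, the specific ω parameterization) are our own illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r = 16, 8, 4  # hypothetical dimensions and rank

# Frozen pretrained weight (kept fixed during fine-tuning).
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank factors, as in standard LoRA.
A = rng.standard_normal((r, d_in)) * 0.01  # down-projection
B = np.zeros((d_out, r))                   # up-projection, zero-init so the
                                           # update is zero at initialization

# Lightweight head mapping the input to a scalar frequency (an assumption;
# the paper's frequency generator may be parameterized differently).
w_freq = rng.standard_normal(d_in) * 0.01

def adasine_lora_forward(x):
    """y = W x + B sin(omega(x) * A x), with an input-dependent frequency."""
    omega = 1.0 + np.abs(w_freq @ x)  # positive, input-conditioned frequency
    return W @ x + B @ np.sin(omega * (A @ x))

x = rng.standard_normal(d_in)
y = adasine_lora_forward(x)
```

With B zero-initialized, the layer reproduces the frozen model's output at the start of training, matching standard LoRA practice; the sine nonlinearity and the input-dependent ω only shape the update as B is learned.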
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 1238