Keywords: Transfer learning, Low-Rank Adaptation, Fine-tuning, Householder reflector, Orthogonal fine-tuning
Abstract: The need for parameter-efficient fine-tuning (PEFT) has emerged as large pre-trained models are increasingly employed in specialized downstream tasks. Among PEFT methods, Low-Rank Adaptation (LoRA) is widely adopted due to its ability to fine-tune models with minimal additional parameters. However, LoRA's down-projection mechanism can lead to significant feature loss, particularly for tasks involving complex features and reasoning. This limitation poses a challenge to maintaining model performance in scenarios requiring high-dimensional representations. To address this issue, we introduce Householder Orthogonal LoRA (HoLoRA), which reparametrizes the down-projection matrix as a semi-orthogonal matrix, thereby mitigating feature loss. Our approach ensures strict orthogonality without increasing computational costs or modifying LoRA's core components. Experimental results on the GLUE benchmark show that HoLoRA consistently outperforms standard LoRA across various tasks, particularly in low-rank settings. By preserving essential features and improving fine-tuning efficiency, HoLoRA provides a robust solution to the limitations of existing PEFT methods. This advancement enhances LoRA's applicability in complex learning environments, promoting better performance in both low-budget and high-complexity scenarios.
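To make the core idea concrete, below is a minimal sketch, not the authors' implementation, of how a semi-orthogonal down-projection can be built from Householder reflectors in PyTorch. The class name `HouseholderDownProjection` and the choice of one unconstrained vector per reflector are illustrative assumptions; the paper's exact parametrization may differ.

```python
import torch
import torch.nn as nn

class HouseholderDownProjection(nn.Module):
    """Sketch: parametrize a LoRA-style down-projection A (r x d) as a
    semi-orthogonal matrix built from r Householder reflectors, so that
    A @ A.T == I_r up to numerical error (hypothetical parametrization)."""

    def __init__(self, d: int, r: int):
        super().__init__()
        # One unconstrained vector per Householder reflector.
        self.vs = nn.Parameter(torch.randn(r, d))

    def forward(self) -> torch.Tensor:
        r, d = self.vs.shape
        Q = torch.eye(d, device=self.vs.device, dtype=self.vs.dtype)
        for v in self.vs:
            v = v / (v.norm() + 1e-8)
            # Apply H = I - 2 v v^T on the left of Q without forming H.
            Q = Q - 2.0 * torch.outer(v, v @ Q)
        # The first r rows of an orthogonal matrix form a semi-orthogonal A.
        return Q[:r, :]

# Usage: A would replace LoRA's unconstrained down-projection matrix.
proj = HouseholderDownProjection(d=768, r=8)
A = proj()
print(torch.allclose(A @ A.T, torch.eye(8), atol=1e-5))  # True
```

Because each reflector is applied as a rank-one update, orthogonality is enforced by construction rather than by an explicit penalty, which is consistent with the abstract's claim of strict orthogonality at no extra inference-time cost.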
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8634