LoRMA: Low Rank Multiplicative Adaptation for LLMs

ACL ARR 2025 February Submission 4926 Authors

16 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Large Language Models have demonstrated remarkable capabilities in the NLP domain. Their effectiveness is largely attributable to their ability to adapt to an array of downstream tasks. However, full fine-tuning is generally computationally expensive. To mitigate this, many techniques have been developed that prioritize efficiency, a prominent one being Low-Rank Adaptation (LoRA). However, LoRA and its variants employ re-parametrized additive updates. In this paper, we propose Low Rank Multiplicative Adaptation (LoRMA), which shifts the paradigm of additive updates to a much richer space of matrix multiplicative transformations. We tackle challenges such as computational complexity and rank inhibition by strategically ordering matrix operations and introducing rank inflation strategies. We conduct extensive experiments to show the effectiveness of our approach in terms of evaluation metrics and computational costs.
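To make the additive-vs-multiplicative distinction concrete, below is a minimal sketch contrasting a LoRA-style additive update, W' = W + BA, with one plausible multiplicative form, W' = (I + BA)W. The abstract does not give LoRMA's exact parameterization or its rank-inflation strategy, so this is an illustrative assumption rather than the paper's method; the class names and the r = 8 rank are hypothetical. The sketch does show how associativity (computing B(A(Wx)) instead of materializing BA·W) keeps the extra cost low-rank, which is one natural reading of "strategically ordering matrix operations".

```python
# Hedged sketch only: contrasts an additive (LoRA-style) update with a
# multiplicative low-rank update W' = (I + B A) W. Not the paper's exact
# formulation; names and hyperparameters are assumptions.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Additive update: h = (W + B A) x, with the pretrained W frozen."""
    def __init__(self, base: nn.Linear, r: int = 8):
        super().__init__()
        self.base = base.requires_grad_(False)        # freeze pretrained weight
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, r))  # zero-init: no change at start

    def forward(self, x):
        return self.base(x) + (x @ self.A.T) @ self.B.T

class MultiplicativeAdapter(nn.Module):
    """Multiplicative update: h = (I + B A) W x, with W frozen.
    Ordering the ops as B (A (W x)) avoids materializing the
    d_out x d_out product B A, keeping the overhead low-rank."""
    def __init__(self, base: nn.Linear, r: int = 8):
        super().__init__()
        self.base = base.requires_grad_(False)
        d_out = base.weight.shape[0]
        self.A = nn.Parameter(torch.randn(r, d_out) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, r))  # zero-init: identity map at start

    def forward(self, x):
        h = self.base(x)                          # W x
        return h + (h @ self.A.T) @ self.B.T      # (I + B A) W x, never forming B A
```

In both cases only A and B are trained, so the parameter budget matches LoRA's; the difference is whether the low-rank update is added to W or composed with it multiplicatively.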
Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: PEFT, LoRA, LLMs
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 4926