Tied-LoRA: Enhancing parameter efficiency of LoRA with Weight Tying

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission · Readers: Everyone
TL;DR: Exploring LoRA with weight tying and selective training is an effective way to further improve LoRA's parameter efficiency.
Abstract: We introduce Tied-LoRA, a novel paradigm leveraging weight tying and selective training to enhance the parameter efficiency of Low-rank Adaptation (LoRA). Our exploration encompasses all plausible combinations of parameter training and freezing, coupled with weight tying, aimed at identifying the optimal trade-off between performance and the number of trainable parameters. Across 5 diverse tasks and two foundational language models, our experiments provide comprehensive insights into the inherent trade-offs between efficiency and performance. Our findings reveal a specific Tied-LoRA configuration that achieves performance comparable to LoRA across multiple tasks while using only a fraction of the parameters employed by the standard LoRA method, particularly at higher ranks. This underscores the efficacy of Tied-LoRA in achieving strong results with significantly reduced model complexity.
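
For readers who want a concrete picture of the idea, the sketch below shows one possible form of LoRA weight tying in PyTorch: a single low-rank A/B pair shared across all adapted layers, small per-layer scaling vectors, and a flag for selectively freezing the tied factors. This is a minimal illustrative sketch, not the paper's exact formulation or code; the class name TiedLoRALinear, the vectors u and v, and the train_tied and alpha arguments are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class TiedLoRALinear(nn.Module):
    """Linear layer with a LoRA update whose low-rank factors are shared (tied)
    across layers; only small per-layer scaling vectors are layer-specific.
    (Illustrative sketch, not the paper's implementation.)"""

    def __init__(self, base: nn.Linear, tied_A: nn.Parameter, tied_B: nn.Parameter,
                 train_tied: bool = False, alpha: float = 16.0):
        super().__init__()
        self.base = base                      # frozen pretrained layer
        for p in self.base.parameters():
            p.requires_grad = False
        r = tied_A.shape[0]
        self.A = tied_A                       # shared across layers: (r, in_features)
        self.B = tied_B                       # shared across layers: (out_features, r)
        self.A.requires_grad = train_tied     # "selective training": tie and optionally freeze
        self.B.requires_grad = train_tied
        # Per-layer trainable scaling vectors (hypothetical simplest variant).
        self.u = nn.Parameter(torch.ones(r))                   # scales the rank dimension
        self.v = nn.Parameter(torch.ones(base.out_features))   # scales the output dimension
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # delta = v * (B @ diag(u) @ A) x, added to the frozen base output
        delta = (x @ self.A.T) * self.u       # (..., r)
        delta = (delta @ self.B.T) * self.v   # (..., out_features)
        return self.base(x) + self.scaling * delta


# Usage sketch: one tied A/B pair reused by every adapted layer, so the
# low-rank matrices are counted once rather than once per layer.
d_model, r = 768, 8
tied_A = nn.Parameter(torch.randn(r, d_model) * 0.01)
tied_B = nn.Parameter(torch.zeros(d_model, r))
layers = nn.ModuleList(
    TiedLoRALinear(nn.Linear(d_model, d_model), tied_A, tied_B) for _ in range(12)
)
```

Under this kind of tying, the shared A/B pair contributes a fixed number of parameters regardless of depth, and the per-layer cost reduces to the small u and v vectors, which is where the parameter savings over standard LoRA come from, especially at higher ranks.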
Paper Type: long
Research Area: Efficient/Low-Resource Methods for NLP
Contribution Types: NLP engineering experiment, Approaches to low-resource settings, Approaches low compute settings-efficiency, Data analysis
Languages Studied: English and German (German was used in a translation task)