Abstract: Low-rank adaptation (LoRA) and its mixture-of-experts (MoE) variants are highly effective parameter-efficient fine-tuning (PEFT) methods. However, they introduce significant latency in multi-tenant settings because LoRA modules (and MoE routers) are added to multiple linear modules in each Transformer layer. To address this issue, we propose Efficient Mixture of Low-Rank Adaptation (EM-LoRA), a novel LoRA variant. EM-LoRA differs from previous MoE-style LoRA methods in that it treats each LoRA module as an expert and employs a prompt-aware routing mechanism. This mechanism computes the expert routing results once, before the first new token is generated, and reuses them for all subsequent tokens, thereby reducing latency. Extensive experiments and analyses on commonsense reasoning tasks, math reasoning tasks, and widely used LLM evaluation benchmarks demonstrate that EM-LoRA consistently outperforms strong PEFT baselines with comparable tunable parameter budgets. Moreover, EM-LoRA significantly reduces latency in multi-tenant settings compared to previous LoRA-based methods.\footnote{Code and fine-tuned models will be open-sourced to facilitate future research.}
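The following is a minimal sketch of the prompt-aware routing idea as described in the abstract: expert weights are computed once from the prompt and cached, so decoding steps reuse them instead of running the router per token. Class and method names, the mean-pooling choice, and all shapes are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class PromptAwareMoLoRA(nn.Module):
    """Frozen linear layer wrapped with several LoRA experts whose mixture
    weights are computed once from the prompt and reused during decoding.
    (Hypothetical sketch; not the paper's official code.)"""

    def __init__(self, base: nn.Linear, num_experts: int = 4, rank: int = 8):
        super().__init__()
        self.base = base  # frozen pretrained projection
        d_in, d_out = base.in_features, base.out_features
        self.lora_A = nn.ParameterList(
            [nn.Parameter(torch.randn(d_in, rank) * 0.01) for _ in range(num_experts)]
        )
        self.lora_B = nn.ParameterList(
            [nn.Parameter(torch.zeros(rank, d_out)) for _ in range(num_experts)]
        )
        self.router = nn.Linear(d_in, num_experts)  # prompt-aware router
        self.gates = None  # cached routing weights, one vector per sequence

    def route_prompt(self, prompt_hidden: torch.Tensor) -> None:
        """Compute expert weights once from the prompt's hidden states
        (mean-pooled here) before generating the first new token."""
        pooled = prompt_hidden.mean(dim=1)                        # (batch, d_in)
        self.gates = torch.softmax(self.router(pooled), dim=-1)   # (batch, E)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.base(x)
        # Reuse the cached gates at every decoding step; no per-token routing.
        for e, (A, B) in enumerate(zip(self.lora_A, self.lora_B)):
            out = out + self.gates[:, e, None, None] * (x @ A @ B)
        return out
```

In this sketch, `route_prompt` would be called once after the prefill pass over the prompt; all subsequent decoding steps only add the gated LoRA updates, which is the source of the claimed latency reduction relative to MoE-LoRA variants that route every token.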
Paper Type: Long
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: Parameter efficient fine-tuning, large language models, LoRA, mixture of experts
Contribution Types: NLP engineering experiment, Approaches to low-resource settings, Approaches to low-compute settings-efficiency
Languages Studied: English
Submission Number: 245