Unlocking the Global Synergies in Low-Rank Adapters

Published: 21 Jun 2024, Last Modified: 26 Jul 2024
Venue: ES-FoMo-II 2024 Poster
License: CC BY 4.0
Keywords: parameter-efficient training, PET, low-rank adapter, LoRA
TL;DR: We propose a method that autonomously allocates LoRA modules across an entire LLM under a given parameter budget
Abstract: Low-Rank Adaptation (LoRA) has become the de facto parameter-efficient fine-tuning technique for large language models. We present HeteroLoRA, a lightweight search algorithm that leverages zero-cost proxies to allocate the limited LoRA trainable parameters across the model for better fine-tuned performance. Beyond the allocation for standard LoRA-adapted models, we also demonstrate the efficacy of HeteroLoRA by performing the allocation in a more challenging search space that includes both LoRA modules and LoRA-adapted shortcut connections. Experiments show that HeteroLoRA improves model performance under the same parameter budget. For example, on MRPC, we see an improvement of 1.6% in accuracy with a similar trainable-parameter budget. We have open-sourced our algorithm.
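The abstract describes allocating a fixed LoRA parameter budget across candidate modules using zero-cost proxies. The sketch below is a hypothetical illustration of that general idea, not the paper's actual algorithm: candidate placements are scored with an assumed SNIP-style gradient-norm proxy and then selected greedily under the budget. All names (`Candidate`, `zero_cost_score`, `allocate`) and the greedy selection rule are illustrative assumptions.

```python
# Hypothetical sketch: zero-cost-proxy-guided allocation of a LoRA budget.
# Not the HeteroLoRA implementation; names and the proxy are assumptions.
from dataclasses import dataclass

import torch
import torch.nn as nn


@dataclass
class Candidate:
    name: str           # e.g. "layer3.attn.q_proj" or a shortcut connection
    rank: int           # LoRA rank if this candidate is enabled
    params: int         # trainable parameters the candidate would add
    score: float = 0.0  # zero-cost proxy score (higher = more promising)


def zero_cost_score(module: nn.Linear, rank: int, batch: torch.Tensor) -> float:
    """Toy zero-cost proxy: gradient norm of randomly initialised LoRA factors
    after one forward/backward pass (a SNIP-style saliency estimate)."""
    lora_a = nn.Parameter(torch.randn(rank, module.in_features) * 0.01)
    lora_b = nn.Parameter(torch.randn(module.out_features, rank) * 0.01)
    effective_weight = module.weight.detach() + lora_b @ lora_a
    out = batch @ effective_weight.T
    out.sum().backward()
    return (lora_a.grad.norm() + lora_b.grad.norm()).item()


def allocate(candidates: list[Candidate], budget: int) -> list[Candidate]:
    """Greedily enable the highest-scoring candidates that fit the budget."""
    chosen, used = [], 0
    for cand in sorted(candidates, key=lambda c: c.score, reverse=True):
        if used + cand.params <= budget:
            chosen.append(cand)
            used += cand.params
    return chosen
```

In this hypothetical setup, one would enumerate candidates over every attention/MLP projection (and, in the larger search space, shortcut connections), score each once on a small calibration batch, and keep only the selected candidates trainable during fine-tuning.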
Submission Number: 51