Instance-Level Dynamic LoRAs Composition for Cross-Task Generalization

ACL ARR 2024 June Submission 2760 Authors

15 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Large language models perform well on tasks for which they have undergone instruction fine-tuning, but their performance on completely unseen tasks is often less than ideal. To address this cross-task generalization challenge, task-level LoRA composition has been proposed: rather than training a model for each new task, it learns LoRA composition weights from a small number of samples to form a task model. However, task-level LoRA composition utilizes only a few task modules because it relies on weight enumeration, and it overlooks the specificity of individual instances. We therefore propose an instance-level LoRA composition for cross-task generalization, which selects multiple appropriate task LoRAs for each input instance and dynamically determines their composition weights. Experiments on publicly available datasets show that our method outperforms the typical baseline, LoraHub, on 16 of 27 tasks. We release the source code at https://github.com/noname822/iLoraComp.git.
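To make the idea concrete, below is a minimal sketch of instance-level LoRA composition. The abstract does not specify how LoRAs are selected per instance, so the retrieval scheme here (cosine similarity between an instance embedding and one key embedding per candidate LoRA) is an assumption; the merging step follows the LoraHub-style convention of combining the A and B matrices by weighted element-wise sum. All function and field names are hypothetical.

```python
import torch
import torch.nn.functional as F

def instance_lora_weights(instance_emb, task_keys, top_k=3, temperature=0.1):
    """Select the top-k candidate LoRAs for one instance and return composition weights.

    instance_emb: (d,) embedding of the input instance.
    task_keys:    (n_tasks, d) one retrieval embedding per candidate LoRA,
                  e.g. the mean embedding of that task's few-shot examples
                  (an assumption, not necessarily the paper's exact scheme).
    """
    sims = F.cosine_similarity(instance_emb.unsqueeze(0), task_keys, dim=-1)  # (n_tasks,)
    top_sims, top_idx = sims.topk(top_k)
    # Softmax over the top-k similarities gives per-instance composition weights.
    weights = F.softmax(top_sims / temperature, dim=-1)
    return top_idx, weights

def compose_loras(loras, idx, weights):
    """Merge the selected LoRA modules by weighted sum of their A and B matrices
    (the LoraHub-style parameter-level composition)."""
    A = sum(w * loras[i]["A"] for i, w in zip(idx.tolist(), weights))
    B = sum(w * loras[i]["B"] for i, w in zip(idx.tolist(), weights))
    return A, B  # delta_W ≈ B @ A is then added to the frozen base weight
```

The key contrast with task-level composition is that `instance_lora_weights` is called once per input rather than once per task, so each instance can draw on a different subset of the LoRA pool with its own mixing weights.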
Paper Type: Short
Research Area: Question Answering
Research Area Keywords: Generation, Information Retrieval and Text Mining, Question Answering
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 2760