Abstract: The advancement of Large Language Models (LLMs) benefits from fact-checking to mitigate hallucination and from parameter-efficient techniques such as Low-Rank Adaptation (LoRA) to reduce the enormous computational overhead of fine-tuning. While some studies have explored the parallel integration of multiple LoRAs, these approaches pay little attention to the connections between them. This paper investigates methods for establishing connections among multiple LoRAs, inspired by the information processing behavior of the human brain. We create three reasoning datasets tailored to fact-checking and fine-tune individual LoRAs on them, enabling each to reason from a different perspective. We then explore strategies for allocating these reasoning LoRAs and introduce LoraMap, an approach that maps connections between them. Results on the fact-checking task demonstrate that LoraMap outperforms LoraHub, an existing LoRA composition method. LoraMap also achieves higher performance with significantly fewer parameters than LoraConcat, which concatenates LoRAs and further fine-tunes them.
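The contrast the abstract draws is between simply running several LoRAs in parallel and learning connections that combine them. A minimal sketch of the general idea in PyTorch is shown below; all module and parameter names (`MultiLoraLinear`, `mix`) are hypothetical illustrations of parallel LoRA composition with learned combination weights, not the paper's actual LoraMap implementation.

```python
import torch
import torch.nn as nn


class MultiLoraLinear(nn.Module):
    """A frozen linear layer augmented with several parallel LoRA adapters.

    Illustrative only: the combination here is a simple learned weighted
    sum over adapter outputs, a stand-in for richer inter-LoRA connections.
    """

    def __init__(self, in_features, out_features, num_loras=3, rank=4):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # base weights stay frozen

        # Each LoRA is a low-rank pair: A projects down, B projects up.
        self.A = nn.ParameterList(
            [nn.Parameter(torch.randn(rank, in_features) * 0.01)
             for _ in range(num_loras)]
        )
        self.B = nn.ParameterList(
            [nn.Parameter(torch.zeros(out_features, rank))
             for _ in range(num_loras)]
        )
        # Learnable scalar weights connecting the adapters' contributions.
        self.mix = nn.Parameter(torch.ones(num_loras) / num_loras)

    def forward(self, x):
        y = self.base(x)
        for i, (A, B) in enumerate(zip(self.A, self.B)):
            # Low-rank update: x -> A^T -> B^T, scaled by its mix weight.
            y = y + self.mix[i] * (x @ A.T) @ B.T
        return y
```

Because only the adapter matrices and the mixing weights require gradients, the trainable parameter count stays far below that of concatenating adapters and fine-tuning the merged result.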
Paper Type: short
Research Area: Efficient/Low-Resource Methods for NLP
Contribution Types: NLP engineering experiment, Approaches to low-compute settings and efficiency
Languages Studied: English