KnowLA: Enhancing Parameter-efficient Finetuning via Knowledgeable Adaptation

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission
TL;DR: We propose KnowLA, a method leveraging knowledge graph embeddings to enhance parameter-efficient finetuning in large language models. It demonstrates effectiveness and robustness on multiple datasets and models.
Abstract: Parameter-efficient finetuning (PEFT) is a crucial technique for adapting large language models (LLMs) to downstream tasks. In this paper, we study using knowledge graph embeddings to improve the effectiveness of PEFT. We propose a knowledgeable adaptation method called KnowLA. It inserts an adaptation layer into an LLM to integrate the embeddings of entities that appear in the input text. The adaptation layer is trained in combination with LoRA on instruction data. Experiments with two popular LLMs and three knowledge graphs on six datasets demonstrate the effectiveness and robustness of KnowLA. We show that KnowLA can help activate the relevant parameterized knowledge in an LLM to answer a question without changing its parameters or input prompts.
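To make the idea concrete, below is a minimal PyTorch sketch of one way such a knowledgeable adaptation layer could look. It is an illustration under stated assumptions, not the paper's actual implementation: the module and parameter names (KnowledgeableAdapter, entity_dim, the gating-based fusion) are hypothetical, and the paper's adapter architecture and fusion mechanism may differ.

```python
# Hypothetical sketch: fuse pretrained KG entity embeddings into an LLM's
# hidden states via a small adapter inserted between transformer layers.
# Only this module (plus LoRA matrices elsewhere) would be trained; the
# LLM backbone and the KG embeddings stay frozen.
import torch
import torch.nn as nn

class KnowledgeableAdapter(nn.Module):
    def __init__(self, hidden_dim: int, entity_dim: int):
        super().__init__()
        # Project frozen KG entity embeddings into the LLM's hidden space.
        self.proj = nn.Linear(entity_dim, hidden_dim)
        # Token-wise gate deciding how much entity knowledge to mix in.
        self.gate = nn.Linear(2 * hidden_dim, 1)

    def forward(self, hidden: torch.Tensor, ent_emb: torch.Tensor,
                ent_mask: torch.Tensor) -> torch.Tensor:
        # hidden:   (batch, seq, hidden_dim) LLM hidden states
        # ent_emb:  (batch, seq, entity_dim) KG embedding of the entity
        #           linked to each token (zeros where no entity is linked)
        # ent_mask: (batch, seq, 1) 1.0 at entity-linked positions, else 0.0
        k = self.proj(ent_emb)                                   # (B, S, H)
        g = torch.sigmoid(self.gate(torch.cat([hidden, k], dim=-1)))
        # Residual update applied only at entity-linked positions.
        return hidden + ent_mask * g * k
```

In this sketch, training on instruction data would update only the adapter and the LoRA matrices, consistent with the abstract's claim that the base LLM's parameters and input prompts remain unchanged.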
Paper Type: long
Research Area: Efficient/Low-Resource Methods for NLP
Contribution Types: Approaches low compute settings-efficiency
Languages Studied: English