Graph Memory-based Editing for Large Language Models

Anonymous

16 Feb 2024, ACL ARR 2024 February Blind Submission
Abstract: The information stored in Large Language Models (LLMs) quickly becomes outdated, prompting the development of various techniques for editing their knowledge with new facts. However, existing knowledge editing methods often overlook the interconnected nature of facts and fail to account for the ripple effects caused by changing one piece of information. In this study, we present GMeLLo (Graph Memory-based Editing for Large Language Models), a simple yet effective memory-based method that recasts the Multi-hop Question Answering for Knowledge Editing (MQuAKE) task as a Knowledge-based Question Answering (KBQA) problem. GMeLLo stores all relevant facts externally in a Knowledge Graph (KG) and directs the language model to perform semantic parsing, translating natural-language questions into formal queries that retrieve information from the KG. Notably, our method requires no fine-tuning of the LLM, ensuring that edited facts do not corrupt other stored information. In our experiments, GMeLLo shows a notable improvement over state-of-the-art model editors on the MQuAKE benchmark, a dataset tailored for multi-hop question answering, and the gap is most pronounced when many facts are edited simultaneously.
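
As a rough illustration of the idea sketched in the abstract, the snippet below stores facts as triples in a small in-memory graph and answers a multi-hop question by chasing a relation chain through it. The example facts, the relation names, and the parse_question_to_relation_path stub (standing in for the LLM-based semantic parser that produces a formal query) are illustrative assumptions, not the paper's actual implementation.

from typing import Dict, List, Optional, Tuple

# Knowledge graph as (subject, relation) -> object; an edit simply overwrites an entry.
KG = Dict[Tuple[str, str], str]

def apply_edit(kg: KG, subject: str, relation: str, new_object: str) -> None:
    """Insert or overwrite a single fact in the external memory."""
    kg[(subject, relation)] = new_object

def parse_question_to_relation_path(question: str) -> Tuple[str, List[str]]:
    """Stand-in for the LLM-based semantic parser: map a natural-language
    question to a topic entity and a chain of relations (a formal query).
    In GMeLLo this step would be performed by prompting the language model."""
    if question == "Who is the spouse of the head of government of the UK?":
        return "UK", ["head_of_government", "spouse"]
    raise ValueError("question not covered by this toy parser")

def answer(kg: KG, question: str) -> Optional[str]:
    """Answer a multi-hop question by following the relation chain in the KG."""
    entity, relations = parse_question_to_relation_path(question)
    for relation in relations:
        nxt = kg.get((entity, relation))
        if nxt is None:
            return None
        entity = nxt
    return entity

if __name__ == "__main__":
    kg: KG = {
        ("UK", "head_of_government"): "Rishi Sunak",
        ("Rishi Sunak", "spouse"): "Akshata Murty",
    }
    # Editing one fact ripples through the multi-hop answer automatically,
    # because the answer is recomputed from the graph, not from frozen weights.
    apply_edit(kg, "UK", "head_of_government", "Keir Starmer")
    apply_edit(kg, "Keir Starmer", "spouse", "Victoria Starmer")
    print(answer(kg, "Who is the spouse of the head of government of the UK?"))
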
Paper Type: long
Research Area: NLP Applications
Contribution Types: Model analysis & interpretability
Languages Studied: English