LLM-based Multi-hop Question Answering with Knowledge Graph Integration in Evolving Environments

ACL ARR 2024 April Submission 799 Authors

16 Apr 2024 (modified: 20 May 2024) · ACL ARR 2024 April Submission · CC BY 4.0
Abstract: The rapid obsolescence of information in Large Language Models (LLMs) has spurred the development of various techniques for incorporating new facts. To address the ripple effects of altering information, we introduce GMeLLo (Graph Memory-based Editing for Large Language Models), a straightforward yet highly effective method that harnesses the strengths of both LLMs and Knowledge Graphs (KGs). Instead of merely storing edited facts as isolated sentences in an external repository, we use established KGs as our foundation and update them dynamically as required. Given a query, we employ an LLM to derive an answer from the relevant edited facts. In parallel, we translate each question into a formal query and execute it against the KG, tapping its extensive data to obtain an answer directly. When the two answers conflict, we prioritize the KG-derived response as the final result. Our experiments show that GMeLLo substantially outperforms state-of-the-art (SOTA) methods on MQuAKE, a benchmark designed to evaluate knowledge editing through multi-hop question answering.
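To make the pipeline in the abstract concrete, here is a minimal Python sketch of the final arbitration step. This is an illustration, not the authors' implementation: the helpers answer_with_llm, question_to_sparql, and query_kg are hypothetical stand-ins, and SPARQL is assumed as the formal query language.

```python
# Minimal sketch of the GMeLLo answer-arbitration step, assuming three
# hypothetical components supplied by the caller (not the paper's code).
from typing import Callable, Optional

def gmello_answer(
    question: str,
    answer_with_llm: Callable[[str], str],      # LLM answer conditioned on the relevant edited facts
    question_to_sparql: Callable[[str], str],   # LLM-based translation of the question into a formal (SPARQL) query
    query_kg: Callable[[str], Optional[str]],   # executes the formal query against the dynamically updated KG
) -> str:
    """Prefer the KG-derived answer; fall back to the LLM's answer."""
    llm_answer = answer_with_llm(question)
    kg_answer = query_kg(question_to_sparql(question))
    # Per the abstract, when the two answers conflict the KG-derived
    # response is taken as the final result.
    return kg_answer if kg_answer is not None else llm_answer
```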
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: LLM, Knowledge Graph, multi-hop question answering, knowledge editing
Contribution Types: NLP engineering experiment, Data analysis
Languages Studied: English
Submission Number: 799