Keywords: Knowledge Editing, Multi-Hop Question-Answering, Large Language Models, Masked Language Modeling
TL;DR: We introduce a novel approach for decomposing multi-hop questions in knowledge editing scenarios by using masked language modeling to extract question hops.
Abstract: Large Language Models (LLMs) acquire vast amounts of knowledge during computationally expensive pre-training. Knowledge Editing has emerged as a lightweight alternative for updating factual information in LLMs without repeating this expensive training. To handle multi-hop question-answering (MQA), knowledge editors rely on compositional reasoning to decompose multi-hop questions into their constituent subquestions. State-of-the-art knowledge editors perform question decomposition using large causal language models, which often introduce errors that manifest as hallucinations. In this paper, we propose Question Decomposition using Masked Language Modeling for Editing Knowledge (QMEK), a knowledge editing framework for multi-hop question-answering. The framework consists of two key components: a question decomposition module and a subquestion answering module. Our approach is motivated by the insight that reformulating question decomposition as a masked language modeling task rather than a causal language modeling task reduces inference complexity and curbs hallucinations. Furthermore, we adopt a relational triple representation in both modules to eliminate errors that arise when translating between natural language and structured triple formats. We evaluate QMEK against 5 state-of-the-art frameworks on 3 datasets, achieving an average 17.5\% accuracy increase and a 10.2x speedup.
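To make the core idea concrete, the sketch below illustrates what casting multi-hop decomposition as masked language modeling could look like with an off-the-shelf fill-mask model. The QMEK implementation itself is not reproduced here; the model choice (bert-base-uncased), the prompt templates, the example question, and the triple format are all illustrative assumptions, not the authors' method.

```python
# Illustrative sketch only: hop extraction via masked language modeling,
# using the HuggingFace fill-mask pipeline. Not the QMEK implementation.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")  # assumed model choice

question = "Who is the spouse of the president of the United States?"

# Hop 1: resolve the inner subquestion by predicting the masked bridge entity.
hop1 = fill_mask("The president of the United States is [MASK].")[0]
bridge_entity = hop1["token_str"]

# Hop 2: reuse the predicted bridge entity to form the next masked query.
hop2 = fill_mask(f"The spouse of {bridge_entity} is [MASK].")[0]

# Relational-triple view of the two extracted hops (assumed format).
print(("United States", "president", bridge_entity))
print((bridge_entity, "spouse", hop2["token_str"]))
```

A single [MASK] token limits predictions to one-subword answers, so a practical decomposition module would need multi-token handling and editable fact lookup; the sketch only shows how each hop reduces to a fill-in-the-blank query rather than free-form causal generation.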
Supplementary Material: zip
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 19286