GraphLLM: A General Framework for Multi-hop Question Answering over Knowledge Graphs Using Large Language Models

Published: 01 Jan 2024 · Last Modified: 02 Mar 2025 · NLPCC (1) 2024 · License: CC BY-SA 4.0
Abstract: The task of multi-hop question answering over knowledge graphs (KGQA) is to identify answer entities for a given question by reasoning across multiple edges of a KG. This task presents persistent challenges: as the number of hops increases, both the reasoning complexity and the pool of candidate answers grow, leading to suboptimal results. Motivated by the strong semantic understanding and logical reasoning capabilities of large language models (LLMs), we propose GraphLLM, a general framework for multi-hop KGQA using LLMs. Specifically, GraphLLM employs the semantic understanding and reasoning abilities of LLMs to decompose a multi-hop question via a Divide-And-Conquer approach and to construct sub-graphs, transforming the complex problem into several simple sub-questions. The final answer is obtained by iteratively applying Graph Neural Networks (GNNs) to solve the sub-questions. Experiments on the WebQSP and MetaQA benchmarks show that GraphLLM outperforms leading methods. We thereby demonstrate a collaborative pairing of LLMs and GNNs, offering a novel approach to intricate multi-hop KGQA.
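
The abstract describes an iterative pipeline: an LLM decomposes the multi-hop question into a chain of sub-questions, and a GNN answers each one over a sub-graph, with each hop's answers seeding the next. Below is a minimal sketch of that control flow. All names (`decompose_question`, `extract_subgraph`, `gnn_answer`, `graphllm_qa`) are hypothetical placeholders, not the authors' API; the real decomposition prompts and GNN architecture are not specified in the abstract.

```python
# Hypothetical sketch of the GraphLLM-style loop described in the abstract.
# The stubs below stand in for the LLM decomposition and GNN reasoning steps.

from typing import Dict, List, Set

def decompose_question(question: str) -> List[str]:
    """Stand-in for the LLM's Divide-And-Conquer step: split a multi-hop
    question into an ordered chain of one-hop sub-questions."""
    # A real system would prompt an LLM; here we return a fixed example.
    return [
        "Which films were directed by [ENT]?",
        "Which actors starred in [ANS]?",
    ]

def extract_subgraph(kg: Dict[str, List[str]], seeds: Set[str]) -> Dict[str, List[str]]:
    """Restrict the KG to edges incident to the current seed entities,
    giving the per-hop reasoner a small sub-graph to work with."""
    return {h: ts for h, ts in kg.items() if h in seeds}

def gnn_answer(subgraph: Dict[str, List[str]], sub_question: str, seeds: Set[str]) -> Set[str]:
    """Stand-in for GNN reasoning over the sub-graph: score candidates one
    hop from the seeds. Here we simply follow all outgoing edges."""
    return {t for h in seeds for t in subgraph.get(h, [])}

def graphllm_qa(kg: Dict[str, List[str]], question: str, topic_entity: str) -> Set[str]:
    """Iteratively solve sub-questions, feeding each hop's answers back in
    as the next hop's seed entities."""
    seeds = {topic_entity}
    for sub_q in decompose_question(question):
        subgraph = extract_subgraph(kg, seeds)
        seeds = gnn_answer(subgraph, sub_q, seeds)
    return seeds

# Toy KG in adjacency-list form (MetaQA-style movie domain).
kg = {
    "DirectorX": ["FilmA", "FilmB"],
    "FilmA": ["ActorY"],
    "FilmB": ["ActorZ"],
}
print(graphllm_qa(kg, "Who starred in films directed by DirectorX?", "DirectorX"))
# -> {'ActorY', 'ActorZ'}
```

The key design point illustrated is the interleaving: decomposition happens once up front, while sub-graph extraction and GNN inference repeat per hop, so the candidate space stays small at every step.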