Abstract: Large Language Models (LLMs) have demonstrated notable success across a wide range of natural language processing tasks. Despite this success, LLMs are limited by their reliance on static training data, which leads to outdated knowledge and hallucinations. These issues impair reasoning reliability, make it difficult for LLMs to handle complex, multi-hop question answering (QA) tasks, and reduce their ability to provide accurate, verifiable responses. To mitigate these issues, recent research has increasingly explored integrating external structured knowledge, such as Knowledge Graphs (KGs), into the reasoning process of LLMs. While effective, many existing frameworks tightly couple the LLM and KG components, hindering adaptability. In this paper, we present FlexKG, a flexible framework for enhanced reasoning over Knowledge Graphs with Large Language Models. FlexKG decomposes complex questions into sub-questions, efficiently retrieves relevant KG triples through iterative filtering, and aggregates the evidence to support accurate reasoning. The framework is plug-and-play and interoperable with diverse LLMs and KGs without requiring retraining. Extensive experiments on multiple KGQA benchmarks demonstrate that FlexKG consistently outperforms prior semantic parsing (SP)-based, information retrieval (IR)-based, LLM-only, and KG-enhanced LLM methods, reaching 99.9% Hits@1 on MetaQA and 79.7% Hits@1 on WebQSP, and substantially improves vanilla LLM performance (a 197.1% improvement for ChatGPT on MetaQA), achieving state-of-the-art results. In addition, the results of the ablation study confirm that each component of the FlexKG framework is necessary.
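To make the decompose-retrieve-aggregate pipeline described above concrete, the following is a minimal Python sketch of a FlexKG-style plug-and-play loop. It is an illustration under stated assumptions, not the authors' implementation: the LLM and KG are passed in as plain callables (`llm`, `kg_neighbors`), and all prompts, function names, and parameters (e.g., `topic_entities`, `max_hops`) are hypothetical.

```python
# Illustrative sketch: decompose a question into sub-questions, iteratively
# retrieve and filter KG triples, then aggregate the evidence for answering.
# The LLM and KG are injected as callables, keeping the sketch model- and
# graph-agnostic (the "plug-and-play" property described in the abstract).
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]  # (head entity, relation, tail entity)


def flexkg_answer(
    question: str,
    llm: Callable[[str], str],                    # any LLM: prompt -> text
    kg_neighbors: Callable[[str], List[Triple]],  # any KG: entity -> triples
    topic_entities: List[str],
    max_hops: int = 3,
) -> str:
    # 1) Decompose the complex question into simpler sub-questions.
    decomposition = llm(
        f"Break this question into simple sub-questions, one per line:\n{question}"
    )
    sub_questions = [q.strip() for q in decomposition.splitlines() if q.strip()]

    # 2) Iteratively retrieve candidate triples and let the LLM filter them.
    evidence: List[Triple] = []
    frontier = list(topic_entities)
    for sub_q in sub_questions:
        for _ in range(max_hops):
            candidates = [t for e in frontier for t in kg_neighbors(e)]
            if not candidates:
                break
            listing = "\n".join(f"{i}: {t}" for i, t in enumerate(candidates))
            keep = llm(
                f"Sub-question: {sub_q}\nTriples:\n{listing}\n"
                "Return the indices of relevant triples, comma-separated:"
            )
            kept = [
                candidates[int(i)]
                for i in keep.split(",")
                if i.strip().isdigit() and int(i) < len(candidates)
            ]
            evidence.extend(kept)
            frontier = [t[2] for t in kept]  # expand from tail entities

    # 3) Aggregate the filtered triples and reason to a final answer.
    context = "\n".join(f"({h}, {r}, {t})" for h, r, t in evidence)
    return llm(f"Evidence triples:\n{context}\nQuestion: {question}\nAnswer:")
```

Because the LLM and KG enter only through these two callables, swapping in a different model or graph requires no retraining, which mirrors the interoperability claim made in the abstract.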
External IDs: dblp:conf/icic/FuDWZPYZW25