Abstract: Knowledge Graphs (KGs) are valuable for representing relationships between entities in a structured format. Traditionally, these knowledge bases are queried to extract specific information. However, question-answering (QA) over KGs remains challenging because of the intrinsic complexity of natural language relative to the structured format, as well as the vast size of these graphs. Despite these challenges, the structured nature of KGs offers a robust foundation for grounding the outputs of Large Language Models (LLMs), enhancing reliability and control for organizations.
In this work, we introduce a novel integration of reasoning strategies with KGs, anchoring each step or "thought" of the reasoning chains in KG data. This approach builds on recent advancements in LLMs, applying reasoning methods at inference time to improve performance and capabilities. We evaluate both agentic and automated search methods across several reasoning strategies, including Chain-of-Thought (CoT), Tree-of-Thought (ToT), and Graph-of-Thought (GoT), using GRBench, a benchmark dataset for graph reasoning with domain-specific graphs. Our experiments demonstrate that this approach achieves a performance improvement of at least 26.5% over baseline models, highlighting the benefits of grounding LLM reasoning processes in structured KG data.
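The core idea of anchoring each reasoning "thought" in KG data can be illustrated with a minimal sketch. This is not the paper's implementation: the toy triples, the `ground_step` and `answer_question` names, and the hard-coded relation chain are all illustrative assumptions; a real system would have an LLM propose each step and validate it against a full KG store.

```python
# Minimal sketch of KG-grounded chain-of-thought reasoning (illustrative only).

# Toy knowledge graph as (head, relation, tail) triples.
toy_kg = {
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
    ("warfarin", "treats", "thrombosis"),
}

def ground_step(entity, relation):
    """Anchor one reasoning 'thought' in the KG: return only tails the
    graph actually supports, so the chain cannot hallucinate edges."""
    return sorted(t for h, r, t in toy_kg if h == entity and r == relation)

def answer_question(start, relations):
    """Follow a chain of relations, grounding every hop in the KG."""
    frontier = [start]
    for rel in relations:
        frontier = [t for e in frontier for t in ground_step(e, rel)]
        if not frontier:  # a thought with no KG support is rejected
            return None
    return frontier

# "Which conditions are treated by drugs that interact with aspirin?"
print(answer_question("aspirin", ["interacts_with", "treats"]))  # → ['thrombosis']
```

Each hop of the chain is accepted only if a supporting edge exists in the graph, which is the grounding property the abstract describes; agentic or tree-structured variants would explore multiple candidate hops instead of a single fixed relation chain.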
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: Large Language Models, Question-Answering, Reasoning, Knowledge Graphs, Structured data
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 4191