Graph-constrained Reasoning: Faithful Reasoning on Knowledge Graphs with Large Language Models

Published: 01 May 2025, Last Modified: 18 Jun 2025 | ICML 2025 poster | CC BY-NC-ND 4.0
TL;DR: We propose a novel framework called graph-constrained reasoning that eliminates hallucinations and ensures faithful reasoning by integrating the KG structure into the LLM decoding process.
Abstract: Large language models (LLMs) have demonstrated impressive reasoning abilities, but they still struggle with faithful reasoning due to knowledge gaps and hallucinations. To address these issues, knowledge graphs (KGs) have been utilized to enhance LLM reasoning through their structured knowledge. However, existing KG-enhanced methods, either retrieval-based or agent-based, encounter difficulties in accurately retrieving knowledge and efficiently traversing KGs at scale. In this work, we introduce graph-constrained reasoning (GCR), a novel framework that bridges structured knowledge in KGs with unstructured reasoning in LLMs. To eliminate hallucinations, GCR ensures faithful KG-grounded reasoning by integrating KG structure into the LLM decoding process through KG-Trie, a trie-based index that encodes KG reasoning paths. KG-Trie constrains the decoding process, allowing LLMs to directly reason on graphs and generate faithful reasoning paths grounded in KGs. Additionally, GCR leverages a lightweight KG-specialized LLM for graph-constrained reasoning alongside a powerful general LLM for inductive reasoning over multiple reasoning paths, resulting in accurate reasoning with zero reasoning hallucination. Extensive experiments on several KGQA benchmarks demonstrate that GCR achieves state-of-the-art performance and exhibits strong zero-shot generalizability to unseen KGs without additional training.
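To illustrate the core mechanism described in the abstract, the sketch below shows one way trie-constrained decoding can be realized with the Hugging Face `prefix_allowed_tokens_fn` generation hook: tokenized KG reasoning paths are indexed in a trie, and at each decoding step the LLM may only emit tokens that continue a valid path. This is a minimal sketch under assumed details (model choice, path format, and helper names such as `build_kg_trie` are illustrative, not the paper's API); the actual KG-Trie implementation is in the linked repository.

```python
# Minimal sketch of graph-constrained (trie-constrained) decoding.
# Assumes the Hugging Face `prefix_allowed_tokens_fn` hook; helper names,
# the model id, and the path format are illustrative, not the paper's API.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-hf"  # hypothetical KG-specialized LLM
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

def build_kg_trie(paths):
    """Index tokenized KG reasoning paths in a nested-dict trie."""
    trie = {}
    for path in paths:
        node = trie
        for tok in tokenizer.encode(path, add_special_tokens=False):
            node = node.setdefault(tok, {})
    return trie

# Example paths linearized from the KG (format is an assumption).
kg_trie = build_kg_trie([
    " Alan Turing -> place_of_birth -> London",
    " Alan Turing -> field_of_work -> Computer Science",
])

prompt = "Question: Where was Alan Turing born?\nReasoning path:"
prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids[0]

def allowed_tokens(batch_id, input_ids):
    """Restrict the next token to valid continuations of a KG path."""
    node = kg_trie
    for tok in input_ids[len(prompt_ids):].tolist():  # walk generated suffix only
        node = node.get(tok, {})
    # Fall back to EOS once a full path has been generated.
    return list(node.keys()) or [tokenizer.eos_token_id]

out = model.generate(
    prompt_ids.unsqueeze(0),
    prefix_allowed_tokens_fn=allowed_tokens,
    num_beams=4,
    num_return_sequences=4,
    max_new_tokens=64,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```

In this sketch, beam search returns several KG-grounded reasoning paths; in GCR these candidates would then be passed to a general LLM for inductive reasoning over the multiple paths to produce the final answer.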
Lay Summary: Large language models (LLMs), like ChatGPT, are good at solving complex problems with reasoning. However, they sometimes “hallucinate” — meaning they make up facts — especially when they don’t have the right knowledge at hand. To help fix this, researchers have tried connecting these models with knowledge graphs — structured databases that store facts as networks of concepts and their relationships. But most existing approaches either struggle to find the right facts or cannot navigate these complex graphs efficiently. In this work, we propose a new method called graph-constrained reasoning (GCR) that tightly integrates LLMs with knowledge graphs. We use a special technique called KG-Trie to guide the LLM so it only follows valid paths in the knowledge graph when answering a question. This prevents the model from making things up and helps it reason more accurately. Our system combines two models — one specialized in graph reasoning and another in general language — to achieve state-of-the-art performance, even on unfamiliar domains, without extra training. This brings us closer to trustworthy AI.
Link To Code: https://github.com/RManLuo/graph-constrained-reasoning
Primary Area: Deep Learning->Large Language Models
Keywords: Large Language Model, Knowledge Graph
Submission Number: 8091