Mitigating Hallucination by Integrating Knowledge Graphs into LLM Inference -- a Systematic Literature Review

Published: 22 Jun 2025, Last Modified: 22 Jun 2025. ACL-SRW 2025 Poster. License: CC BY 4.0
Keywords: LLMs, hallucination, knowledge graphs, inference, literature review
TL;DR: We conduct a systematic literature review of recent work on reducing hallucination in LLMs by integrating knowledge graphs as factual knowledge sources during the inference phase.
Abstract: Large Language Models (LLMs) demonstrate strong performance on a range of language tasks, but tend to hallucinate -- generate plausible but factually incorrect outputs. Recently, several approaches that integrate Knowledge Graphs (KGs) into LLM inference have been published to reduce hallucination. This paper presents a systematic literature review (SLR) of such approaches. Following established SLR methodology, we identified relevant work by systematically searching several academic online libraries and applying a selection process. Nine publications were chosen for in-depth analysis. Our synthesis reveals differences and similarities in how the KG is accessed and traversed and how the retrieved context is finally assembled. KG integration can significantly improve LLM performance on benchmark datasets and, in addition to mitigating hallucination, can enhance reasoning capabilities, explainability, and access to domain-specific knowledge. We also point out current limitations and outline directions for future work.
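For illustration, below is a minimal sketch of the retrieve-traverse-assemble pattern the review analyzes: link entities in a question to a KG, traverse the graph a bounded number of hops, and verbalize the retrieved triples into the LLM prompt. The toy KG, the string-matching entity linker, and the prompt template are hypothetical assumptions for this sketch, not the method of any specific reviewed publication.

# Minimal sketch of KG-grounded context assembly for LLM inference.
# The toy KG, entity linker, and prompt template are illustrative
# assumptions, not any specific approach from the reviewed papers.

from collections import deque

# Toy knowledge graph as a set of (subject, predicate, object) triples.
KG = {
    ("Marie Curie", "born_in", "Warsaw"),
    ("Marie Curie", "field", "Physics"),
    ("Warsaw", "capital_of", "Poland"),
}

def link_entities(question: str) -> list[str]:
    """Naive entity linking: keep KG entities whose name appears in the question."""
    entities = {s for s, _, _ in KG} | {o for _, _, o in KG}
    return [e for e in entities if e.lower() in question.lower()]

def traverse(seeds: list[str], max_hops: int = 2) -> list[tuple[str, str, str]]:
    """Breadth-first traversal: collect triples reachable within max_hops of the seeds."""
    visited = set(seeds)
    frontier = deque((e, 0) for e in seeds)
    triples = []
    while frontier:
        entity, depth = frontier.popleft()
        if depth >= max_hops:
            continue
        for s, p, o in KG:
            if s == entity:
                triples.append((s, p, o))
                if o not in visited:
                    visited.add(o)
                    frontier.append((o, depth + 1))
    return triples

def assemble_context(question: str) -> str:
    """Verbalize the retrieved triples and prepend them to the question as factual context."""
    facts = traverse(link_entities(question))
    fact_lines = "\n".join(f"- {s} {p.replace('_', ' ')} {o}" for s, p, o in facts)
    return f"Facts:\n{fact_lines}\n\nQuestion: {question}\nAnswer:"

print(assemble_context("Where was Marie Curie born?"))

Running the sketch prints a prompt that grounds the question in two hops of KG facts; the surveyed approaches differ chiefly in how each of these three steps (access, traversal, context assembly) is realized.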
Archival Status: Archival
Paper Length: Long Paper (up to 8 pages of content)
Submission Number: 180