Abstract: Large language models (LLMs) are widely used, but they often generate subtle factual errors, especially in long-form text. Such errors can be critical in specialized domains such as medicine.
Existing methods for fact-checking against grounding documents face two main challenges: (1) they struggle to understand complex multihop relations in long documents, often overlooking subtle factual errors; (2) most specialized methods rely on pairwise comparisons that require multiple model calls, incurring high resource and computational costs.
To address these challenges, we propose $\textbf{\textit{GraphCheck}}$, a fact-checking framework that uses extracted knowledge graphs to enhance text representation.
Graph Neural Networks further process these graphs as a soft prompt, enabling LLMs to incorporate structured knowledge more effectively.
Enhanced with graph-based reasoning, GraphCheck captures multihop reasoning chains that existing methods often overlook, enabling precise and efficient fact-checking in a single inference call.
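The pipeline sketched in the abstract (extract knowledge-graph triples, process them with a GNN, pool into a soft prompt for a single LLM call) can be illustrated as follows. This is a minimal toy sketch, not the authors' implementation: the triples, embedding size, mean-aggregation step, and pooling scheme are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of the GraphCheck idea: encode extracted
# knowledge-graph triples with one message-passing step, then pool
# node states into a single "soft prompt" vector for the LLM.
# All names and dimensions here are assumptions for illustration.

rng = np.random.default_rng(0)
DIM = 16  # embedding size (assumed)

# Toy (head, relation, tail) triples extracted from a claim.
triples = [("aspirin", "treats", "headache"),
           ("headache", "symptom_of", "migraine")]

# Random embeddings stand in for a learned entity encoder.
entities = {e for h, _, t in triples for e in (h, t)}
emb = {e: rng.standard_normal(DIM) for e in entities}

def message_pass(emb, triples):
    """One round of mean aggregation over graph neighbors."""
    agg = {e: [v] for e, v in emb.items()}
    for h, _, t in triples:
        agg[h].append(emb[t])  # tail -> head message
        agg[t].append(emb[h])  # head -> tail message
    return {e: np.mean(msgs, axis=0) for e, msgs in agg.items()}

updated = message_pass(emb, triples)

# Mean-pool node states into one vector that would be prepended to
# the LLM input as a soft prompt (one inference call, per the abstract).
soft_prompt = np.mean(list(updated.values()), axis=0)
print(soft_prompt.shape)  # (16,)
```

In a real system, the random embeddings would come from a trained text encoder and the aggregation from a learned GNN; the key point is that the graph is compressed into a fixed-size prompt rather than verified via many pairwise model calls.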
Experimental results on seven benchmarks spanning both general and medical domains demonstrate a 6.1\% overall improvement over baseline models.
Notably, GraphCheck outperforms existing specialized fact-checkers and achieves comparable performance with state-of-the-art LLMs, such as DeepSeek-V3 and OpenAI-o1, with significantly fewer parameters.
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: Fact checking, Knowledge Graphs, LLM, Long document
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Reproduction study, Data analysis
Languages Studied: English
Submission Number: 3650