GraphEval: A Knowledge-Graph Based LLM Hallucination Evaluation Framework

Published: 29 Jun 2024 · Last Modified: 05 Jul 2024 · KiL 2024 (Oral) · License: CC BY 4.0
Keywords: Large Language Models, Hallucination Detection, Knowledge Graphs, Hallucination Correction
TL;DR: We present GraphEval: a hallucination evaluation framework based on representing information in Knowledge Graph (KG) structures.
Abstract: Methods to evaluate Large Language Model (LLM) responses and detect inconsistencies with respect to the provided knowledge, also known as hallucinations, are becoming increasingly important for LLM applications. Current metrics fall short in their ability to provide explainable decisions and to systematically check every piece of information in the response, and they are often too computationally expensive to be used in practice. We present GraphEval: a hallucination evaluation framework based on representing information in Knowledge Graph (KG) structures. Our method identifies the specific triples in the KG that are prone to hallucinations and hence provides more insight than previous methods into where in the response a hallucination has occurred, if at all. Furthermore, using our approach in conjunction with state-of-the-art natural language inference (NLI) models improves balanced accuracy on a variety of hallucination benchmarks compared to using the raw NLI models alone. Lastly, we explore the use of GraphEval for hallucination correction by leveraging the structure of the KG, a method we name GraphCorrect, and demonstrate that the majority of hallucinations can indeed be rectified.
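To make the detection pipeline concrete, below is a minimal sketch, not the authors' implementation: it assumes the response has already been decomposed into KG triples (e.g. by prompting an LLM), and the function `nli_entailment_prob` is a hypothetical, crude lexical stand-in for the entailment probability a real NLI model would return; the function names and the threshold are illustrative assumptions.

```python
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object)

def nli_entailment_prob(premise: str, hypothesis: str) -> float:
    """Crude lexical stand-in for a real NLI model: fraction of hypothesis
    tokens found in the premise. In practice, swap in the entailment
    probability from an MNLI-style classifier."""
    hyp = hypothesis.lower().split()
    prem = set(premise.lower().split())
    return sum(tok in prem for tok in hyp) / max(len(hyp), 1)

def graph_eval(context: str, triples: List[Triple], threshold: float = 0.7):
    """Check each KG triple extracted from the response against the context.

    The response is judged hallucinated if any triple's verbalised claim
    fails the entailment check; the failing triples are returned, which is
    what makes the decision explainable and localised."""
    flagged = [t for t in triples
               if nli_entailment_prob(context, " ".join(t)) < threshold]
    return bool(flagged), flagged

# Toy usage: the second triple is unsupported by the context.
context = "Marie Curie won the Nobel Prize in Physics in 1903."
triples = [("Marie Curie", "won", "Nobel Prize in Physics"),
           ("Marie Curie", "born in", "Paris")]
print(graph_eval(context, triples))
# -> (True, [('Marie Curie', 'born in', 'Paris')])
```

A correction pass in the spirit of GraphCorrect would then target only the flagged triples, rather than regenerating the entire response.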
Submission Number: 5