Quantifying Hallucination of Large Language Models via Simple Memory Consistency Test

XJTU 2024 CSUC Submission 13 Authors

31 Mar 2024 (modified: 03 Apr 2024), XJTU 2024 CSUC Submission, CC BY 4.0
Keywords: large language model, hallucination, memory consistency, memory robustness
Abstract: The emergent abilities of large language models (LLMs) give rise to an intriguing phenomenon: their erroneous generation behaviors have become increasingly subtle. These distinctive behaviors are collectively referred to as hallucination and have attracted much dedicated research. In this study, we investigate LLM hallucination through the lens of memory consistency and divide it into two categories: internal hallucination and external hallucination. This viewpoint provides a valuable framework for developing quantitative methods to evaluate and demystify LLM hallucination. Within this framework, we introduce two simple yet effective evaluation methods, one for each type of hallucination, and apply them to three prevalent LLMs. For external hallucination, we assess an LLM's ability to generate consistent responses across various transformations of a single query, as well as the relevance of those responses to the original query. For internal hallucination, we measure an LLM's accuracy in associating simple knowledge pairs, thereby evaluating the robustness of its internal memory. We observe that the performance of all LLMs deteriorates as the number of knowledge pairs increases, even though each model has acquired every individual knowledge pair well.
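The external-hallucination test described in the abstract can be sketched in a minimal form: query a model with several paraphrases of one question and score how consistent the answers are with each other. The abstract does not specify the model interface or the similarity measure, so `ask_llm` below is a hypothetical stub standing in for any LLM call, and token-set Jaccard overlap is an assumed stand-in for whatever response-similarity metric the authors actually use.

```python
# Hypothetical sketch of a memory-consistency test for external
# hallucination: ask paraphrases of one query and measure how similar
# the responses are. A low score suggests inconsistent (hallucinated)
# answers. `ask_llm` is a stub, not the paper's actual model interface.
from itertools import combinations


def ask_llm(query: str) -> str:
    # Stub standing in for a real LLM call; replace with an API call.
    canned = {
        "capital of France": "Paris is the capital of France.",
        "France's capital": "The capital of France is Paris.",
        "which city is the capital of France": "It is Paris.",
    }
    return canned.get(query, "I don't know.")


def token_jaccard(a: str, b: str) -> float:
    """Crude lexical similarity between two responses."""
    norm = lambda s: {t.strip(".,!?") for t in s.lower().split()}
    ta, tb = norm(a), norm(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0


def consistency_score(paraphrases: list[str]) -> float:
    """Mean pairwise similarity over responses to all paraphrases."""
    answers = [ask_llm(q) for q in paraphrases]
    pairs = list(combinations(answers, 2))
    return sum(token_jaccard(a, b) for a, b in pairs) / len(pairs)


queries = ["capital of France", "France's capital",
           "which city is the capital of France"]
print(round(consistency_score(queries), 3))
```

In practice the lexical overlap would likely be replaced by a semantic similarity model, and a second score would rate each response's relevance to the original query, as the abstract describes.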
Submission Number: 13