LLM Hallucination Reasoning with Zero-shot Knowledge Test

Published: 09 Oct 2024 · Last Modified: 04 Dec 2024 · SoLaR Poster · CC BY 4.0
Track: Technical
Keywords: LLM, Hallucination, Hallucination Reasoning
Abstract: LLM hallucination, where an LLM occasionally generates unfaithful text, poses significant challenges for practical applications of LLMs. Most existing detection methods require external knowledge, LLM fine-tuning, or hallucination-labeled datasets, and they do not distinguish between different types of hallucination, which is crucial for improving detection performance. We introduce a new task, Hallucination Reasoning, which classifies LLM-generated text into one of three categories: aligned, misaligned, or fabricated. Our novel source-free, zero-shot method identifies whether the LLM has sufficient knowledge about a given prompt and text. Experiments on new datasets demonstrate the effectiveness of our method in hallucination reasoning and underscore its importance for enhancing detection performance.
Submission Number: 51
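To make the three-way taxonomy concrete, below is a minimal, hypothetical sketch of a source-free, zero-shot knowledge test. It is not the paper's algorithm: the `generate` callable, the consistency-sampling proxy for "does the model know enough about the prompt?", and all thresholds are assumptions introduced here purely for illustration.

```python
from typing import Callable, List


def _token_overlap(a: str, b: str) -> float:
    """Crude lexical agreement score (Jaccard over lowercased tokens)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)


def hallucination_reasoning(
    generate: Callable[[str], str],   # hypothetical: prompt -> model completion
    prompt: str,
    text: str,
    n_probes: int = 3,
    knowledge_threshold: float = 0.4,
    alignment_threshold: float = 0.6,
) -> str:
    """Classify `text` (an LLM answer to `prompt`) as 'aligned', 'misaligned',
    or 'fabricated'. Illustrative sketch only, NOT the paper's method: it
    approximates the zero-shot knowledge test by sampling several independent
    answers and measuring their mutual lexical consistency."""
    # Step 1: zero-shot knowledge probe -- sample independent answers to the prompt.
    probes: List[str] = [generate(prompt) for _ in range(n_probes)]

    # If the samples disagree with each other, assume the model lacks knowledge.
    pair_scores = [
        _token_overlap(probes[i], probes[j])
        for i in range(n_probes) for j in range(i + 1, n_probes)
    ]
    self_consistency = sum(pair_scores) / len(pair_scores)
    if self_consistency < knowledge_threshold:
        return "fabricated"   # no stable knowledge about the prompt

    # Step 2: the model knows something -- check whether `text` matches it.
    agreement = max(_token_overlap(text, p) for p in probes)
    return "aligned" if agreement >= alignment_threshold else "misaligned"


if __name__ == "__main__":
    # Stub generator for demonstration; swap in a real LLM call in practice.
    canned = iter(["Paris is the capital of France."] * 3)
    label = hallucination_reasoning(
        lambda _: next(canned),
        prompt="What is the capital of France?",
        text="The capital of France is Lyon.",
    )
    print(label)  # 'misaligned' under this crude lexical proxy
```

In practice the lexical overlap would be replaced by a stronger consistency or entailment signal; the sketch only shows the decision structure (knowledge test first, then alignment check) implied by the taxonomy.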