Detecting Hallucination Before Answering: Semantic Compression Through Instruction

27 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: hallucination, hallucination detection, feeling of knowing, LLM, large language model
TL;DR: Determining whether an LLM possesses the relevant knowledge before generating an answer.
Abstract: Large language models (LLMs) excel at various tasks but often suffer from hallucinations, providing incorrect information with high confidence. To address this, we focus on detecting when an LLM knows or does not know an answer, a concept referred to as the "feeling of knowing" (FoK). We propose a novel approach called Semantic Compression by trying to Answer in One-word (SCAO), which enables efficient FoK detection before generating full sentences, at only minimal computational cost. Additionally, we introduce a method to measure confounding variable effects in benchmarks, the Approximate Misannotation Effect (AME) test. Our experiments demonstrate that the feature fusion model combining SCAO and probing achieves enhanced FoK detection performance on both short- and long-form entity questions. The code and the dataset are available online (https://anonymous.4open.science/r/SCAO-2FF8).
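The sketch below illustrates the one-word-answer idea described in the abstract in a minimal form: prompt the model to answer a question in a single word, then use the probability of the first answer token as a crude "feeling of knowing" signal before committing to a full response. The prompt wording, the placeholder model, the use of the max next-token probability, and the threshold are assumptions for illustration, not the paper's actual SCAO implementation or its feature fusion with probing.

```python
# Hypothetical sketch of a one-word-answer FoK proxy (not the paper's method).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model, assumption for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def one_word_confidence(question: str) -> float:
    """Ask for a one-word answer and return the max next-token probability."""
    prompt = (
        "Answer the following question in one word.\n"
        f"Question: {question}\n"
        "Answer:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)
    # Probability distribution over the first token of the would-be answer.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    return next_token_probs.max().item()

# Illustrative usage: treat low confidence as "the model likely does not know".
score = one_word_confidence("Who wrote the novel 'Pride and Prejudice'?")
print(f"FoK proxy score: {score:.3f}")
if score < 0.5:  # arbitrary threshold, for illustration only
    print("Low feeling of knowing: consider abstaining instead of answering.")
```

Because the signal comes from a single forward pass over a short prompt, this kind of check adds little cost compared to generating a full answer, which is the efficiency argument the abstract makes for detecting FoK before generation.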
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8793