Evidence-Free Claim Verification via Large Language Models

ICLR 2026 Conference Submission 17713 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: hallucination detection, fact-checking, uncertainty quantification
TL;DR: We formalize a task for fact-checking without evidence, provide a comprehensive evaluation, and present a top-performing method based on LLM internals.
Abstract: Hallucination detection is essential for reliable LLMs. Most existing fact-checking systems retrieve external knowledge to verify claims. While effective, these methods are computationally heavy, sensitive to retriever quality, and reveal little about an LLM's inherent fact-checking ability. We propose an evidence-free claim verification task: identifying factual inaccuracies without external retrieval. To study this setting, we introduce a comprehensive evaluation framework covering 9 datasets and 16 methods, testing robustness to long-tail knowledge, claim-source variation, multilinguality, and long-form generation. Our experiments show that traditional uncertainty quantification methods often lag behind detectors based on internal model representations. Building on this, we develop a probe-based approach that achieves state-of-the-art results. In sum, our setting establishes a new path for hallucination research: enabling lightweight, scalable, and model-intrinsic detection that can facilitate broader fact-checking, provide reward signals for training, and be integrated into the generation process.
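The abstract does not specify the probe architecture, so the following is only a minimal illustrative sketch of the general idea of probing internal model representations for claim verification, not the authors' method. The model name, layer index, pooling choice, and toy labels are assumptions made for illustration.

```python
# Minimal sketch: a linear probe on LLM hidden states for evidence-free
# claim verification. NOT the paper's method; model name, layer index,
# last-token pooling, and the toy labels below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "gpt2"  # assumption: any causal LM with accessible hidden states
LAYER = 6            # assumption: a middle layer as the probed representation

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def claim_features(claim: str) -> torch.Tensor:
    """Return the last-token hidden state of LAYER for a claim (no retrieval)."""
    inputs = tokenizer(claim, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # hidden_states: tuple of (num_layers + 1) tensors, each [1, seq_len, dim]
    return out.hidden_states[LAYER][0, -1]

# Toy labeled claims (1 = factual, 0 = hallucinated); a real probe would be
# trained on an annotated claim-verification dataset.
claims = ["Paris is the capital of France.", "The Moon is made of cheese."]
labels = [1, 0]

X = torch.stack([claim_features(c) for c in claims]).numpy()
probe = LogisticRegression(max_iter=1000).fit(X, labels)

# Score a new claim without consulting any external evidence.
feat = claim_features("Berlin is in Germany.").numpy().reshape(1, -1)
print(f"P(factual) = {probe.predict_proba(feat)[0, 1]:.2f}")
```

Because the probe reads representations the model already computes during generation, this style of detector stays lightweight and requires no retriever, which is the property the abstract emphasizes.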
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 17713