VeriTrail: Closed-Domain Hallucination Detection with Traceability

ICLR 2026 Conference Submission 20678 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: hallucination detection, faithfulness, fact-checking, traceability, provenance, error localization
TL;DR: We introduce VeriTrail, the first closed-domain hallucination detection method designed to provide traceability for processes with any number of generative steps, and demonstrate that it outperforms baseline methods in hallucination detection.
Abstract: Even when instructed to adhere to source material, language models often generate unsubstantiated content – a phenomenon known as “closed-domain hallucination.” This risk is amplified in processes with multiple generative steps (MGS), compared to processes with a single generative step (SGS). However, due to the greater complexity of MGS processes, we argue that detecting hallucinations in their final outputs is necessary but not sufficient: it is equally important to trace where hallucinated content was likely introduced and how faithful content may have been derived from the source through intermediate outputs. To address this need, we present VeriTrail, the first closed-domain hallucination detection method designed to provide traceability for both MGS and SGS processes. We also introduce the first datasets to include all intermediate outputs as well as human annotations of final outputs’ faithfulness for their respective MGS processes. We demonstrate that VeriTrail outperforms baseline methods on both datasets.
Primary Area: interpretability and explainable AI
Submission Number: 20678