Memory-Conditioned Semantic Entropy for Multi-turn Dialogue Systems

ACL ARR 2026 January Submission9612 Authors

06 Jan 2026 (modified: 20 Mar 2026) · License: CC BY 4.0
Keywords: uncertainty estimation, hallucination detection, multi-turn dialogue, semantic entropy
Abstract: Large language models are increasingly deployed in multi-turn dialogue settings, where hallucinations arise not from isolated errors but from cross-turn inconsistency and semantic drift. Existing uncertainty estimation methods typically operate at the single-turn level and ignore the temporal dependencies introduced by dialogue history, limiting their ability to characterize cross-turn hallucinations. We introduce Memory-Conditioned Semantic Entropy (MC-SE), a framework that generalizes semantic entropy by incorporating an external dialogue memory into its uncertainty estimates. MC-SE clusters multiple sampled responses by semantic equivalence and, using natural language inference, penalizes outputs that conflict with earlier statements in the dialogue. In doing so, it produces uncertainty estimates that explicitly account for cross-turn consistency constraints. Using controlled experiments on synthetic multi-turn question-answering dialogues, we analyze how memory conditioning systematically reshapes uncertainty across dialogue stages and consistency regimes. Our results show that memory-aware uncertainty reveals cross-turn inconsistencies that remain invisible to turn-local measures, highlighting the importance of memory-aware analysis for understanding hallucination behavior in multi-turn dialogue systems.
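The procedure the abstract describes (cluster sampled responses into semantic groups, take the entropy over clusters, then add a penalty for responses that contradict earlier dialogue statements) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exact-match clustering and substring-based contradiction check below are toy stand-ins for the bidirectional-entailment clustering and NLI model the paper relies on, and all function names and the `penalty_weight` parameter are illustrative.

```python
import math
from collections import Counter

def semantic_clusters(samples):
    # Toy stand-in for semantic clustering: group by normalized string
    # equality. The paper would instead merge samples that mutually
    # entail each other under an NLI model.
    return Counter(s.strip().lower() for s in samples)

def contradicts(memory_fact, response):
    # Toy stand-in for an NLI contradiction check: flag a response that
    # explicitly negates a remembered statement.
    return ("not " + memory_fact.strip().lower()) in response.lower()

def mc_semantic_entropy(samples, memory, penalty_weight=1.0):
    """Semantic entropy over clustered samples, plus a penalty term for
    samples that conflict with earlier dialogue statements (memory)."""
    clusters = semantic_clusters(samples)
    n = len(samples)
    # Turn-local semantic entropy over cluster frequencies.
    entropy = -sum((c / n) * math.log(c / n) for c in clusters.values())
    # Memory-conditioning term: fraction of samples contradicting memory.
    n_conflict = sum(
        1 for s in samples for fact in memory if contradicts(fact, s)
    )
    return entropy + penalty_weight * (n_conflict / n)
```

On this sketch, two equally split answer sets receive the same turn-local entropy, but the set whose second answer contradicts the dialogue memory scores strictly higher, which is the cross-turn signal the abstract says turn-local measures miss.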
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: dialogue systems, uncertainty estimation, hallucination detection, evaluation, semantic analysis, multi-turn dialogue
Contribution Types: Model analysis & interpretability, Data analysis
Languages Studied: English
Submission Number: 9612