Characterization and Detection of Incompleteness and Ambiguity in Multi-Turn Interactions with LLMs

Published: 06 Oct 2025, Last Modified: 04 Nov 2025 · MTI-LLM @ NeurIPS 2025 Poster · CC BY-ND 4.0
Keywords: Multi-Turn Interactions, Question Answering Systems, LLM, Conversational AI
TL;DR: Characterization and Detection of Incompleteness and Ambiguity in Multi-Turn Interactions with LLMs
Abstract: Natural language interaction with computers has been transformed by Large Language Models (LLMs), which now serve as modern-day oracles capable of answering a wide range of queries. Unlike the single-turn interaction with the Delphic oracle, LLMs support multi-turn dialogues in which additional context can improve responses. This paper focuses on identifying incompleteness and ambiguity in user queries during multi-turn interactions with an LLM. Using a simple tagged message exchange model between senders and receivers, we define these properties based on the dialogue sequence. While these definitions help categorize datasets, they cannot be used directly to detect incompleteness or ambiguity. To bridge this gap, we explore the use of Embedding- and Text-based models as detectors. Our experiments on benchmark datasets show that: (a) answer correctness correlates strongly with the presence of incompleteness or ambiguity; (b) datasets with a high proportion of such questions can be expected to have longer multi-turn interactions; (c) effective detectors can be built using only the question and its context. These findings suggest that our proposed approach offers a useful mechanism for characterising datasets and that trained detectors can be used to automatically identify queries that need to be reformulated before presenting them to an LLM.
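As a rough illustration of the detector setup outlined in the abstract (not the authors' actual pipeline), the sketch below trains a binary classifier over sentence embeddings of a user question concatenated with its dialogue context; the encoder name, example dialogues, and labels are illustrative placeholders.

```python
# Hypothetical sketch: an embedding-based incompleteness/ambiguity detector.
# Assumes the `sentence-transformers` and `scikit-learn` packages; the model
# name, training examples, and labels below are placeholders, not the
# paper's configuration.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Each training example: (dialogue context, user question, label),
# where label = 1 if the question is incomplete/ambiguous given its context.
train = [
    ("", "How tall is it?", 1),                                    # missing referent
    ("User asked about the Eiffel Tower.", "How tall is it?", 0),  # resolved by context
]

def featurize(context: str, question: str):
    # Encode the question together with its context, since the detectors
    # described in the abstract use only the question and its context.
    return encoder.encode(f"{context} [SEP] {question}")

X = [featurize(c, q) for c, q, _ in train]
y = [label for _, _, label in train]

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Flag a new query for reformulation before sending it to the LLM.
needs_clarification = clf.predict([featurize("", "What about the second one?")])[0]
print("needs reformulation" if needs_clarification else "ok to send")
```

A real detector would be trained on dialogue datasets labeled for incompleteness and ambiguity; this sketch only shows the question-plus-context featurization idea.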
Submission Number: 62