Zero-Shot and Efficient Clarification Need Prediction in Conversational Search

Published: 2025 · Last Modified: 12 Jan 2026 · ECIR (1) 2025 · CC BY-SA 4.0
Abstract: Clarification need prediction (CNP) is a key task in conversational search: deciding whether to ask a clarifying question or to answer the current user query directly. However, current research on CNP suffers from limited training data and low efficiency. In this paper, we propose a zero-shot and efficient CNP framework (OUR), in which we first prompt LLMs in a zero-shot manner to generate two sets of synthetic queries: ambiguous and specific (unambiguous) ones. We then use the generated queries to train efficient CNP models. OUR eliminates the need for human-annotated clarification-need labels during training and avoids relying on high-latency LLMs at query time. To further improve the quality of the generated synthetic queries, we devise a topic-, information-need-, and query-aware chain-of-thought (CoT) prompting strategy (PROMPT). Moreover, we enhance PROMPT with counterfactual query generation (SEQ), which guides LLMs to first generate a specific/ambiguous query and then sequentially generate its ambiguous/specific counterpart. Experimental results show that OUR achieves superior CNP effectiveness and efficiency compared with zero- and few-shot LLM-based CNP predictors.
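The core idea of the framework — training a lightweight clarification-need classifier on LLM-generated ambiguous/specific queries so that no LLM is needed at query time — can be sketched as follows. This is a minimal illustration, not the paper's actual model: the example queries, the TF-IDF + logistic-regression classifier, and the `needs_clarification` helper are all hypothetical stand-ins for the synthetic data and efficient CNP models the abstract describes.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical synthetic queries of the kind an LLM might generate:
# label 1 = ambiguous (a clarifying question is needed),
# label 0 = specific (the query can be answered directly).
ambiguous = [
    "tell me about jaguar",
    "apple",
    "how do i fix it",
    "best python",
]
specific = [
    "top speed of the jaguar xf 2020 sedan",
    "apple iphone 15 battery capacity in mah",
    "how do i fix a leaking kitchen faucet cartridge",
    "best python library for parsing yaml files",
]
queries = ambiguous + specific
labels = [1] * len(ambiguous) + [0] * len(specific)

# An "efficient CNP model" in the abstract's sense: trained only on
# synthetic labels, cheap to run, and no LLM call at query time.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(queries, labels)

def needs_clarification(query: str) -> bool:
    """Predict whether the query is ambiguous enough to warrant clarifying."""
    return bool(clf.predict([query])[0])
```

In the paper's setup the training queries would come from the zero-shot PROMPT/SEQ generation step rather than being hand-written, and the classifier could be any efficient supervised model; the pipeline shape stays the same.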