Abstract: Conversational search is an emerging paradigm of information retrieval that enables users to engage in dynamic, interactive dialogues which more closely mimic natural human communication and can address complex, multi-turn queries. Under the mixed-initiative paradigm, a conversational search system can ask the user clarifying questions, which helps close the gap between the user's query and their underlying information need and thus improves the quality of the entire search experience. However, finding appropriate clarifying questions is not straightforward. In this work, we approach the problem of finding relevant clarifying questions by exploiting the "People Also Ask" (PAA) feature of a popular search engine. We perform a qualitative assessment to verify the quality of the extracted questions and their potential applicability to clarification in search. Next, we convert the PAA questions into clarifying questions using various transformer-based models, such as T5, BART, and GPT2, and use established natural language generation metrics to evaluate the performance of the different models for paraphrasing the questions. Finally, we discuss the results and the relation between PAA questions and clarifying questions to draw useful conclusions and directions for future work.
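To make the paraphrasing step concrete, the sketch below shows one plausible way to rewrite a PAA question into a clarifying question with a sequence-to-sequence model via the Hugging Face transformers library. The checkpoint name, task prefix, and example question are illustrative assumptions, not the paper's actual setup, which would typically involve fine-tuning on (PAA question, clarifying question) pairs.

```python
# Illustrative sketch: paraphrasing a "People Also Ask" question into a
# clarifying question with a seq2seq model. Checkpoint and prompt format
# are assumptions for illustration only.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "t5-base"  # placeholder checkpoint; a real system would use a fine-tuned model
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

paa_question = "What is the best way to train a puppy?"  # hypothetical PAA question
# Hypothetical task prefix; the mapping from PAA to clarifying questions
# would be learned during fine-tuning rather than prompted like this.
inputs = tokenizer("paraphrase as clarifying question: " + paa_question,
                   return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Generated outputs would then be scored against reference clarifying questions with standard natural language generation metrics to compare models such as T5, BART, and GPT2.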