Analyzing Human Questioning Behavior and Causal Curiosity through Natural Queries

Published: 10 Oct 2024, Last Modified: 05 Dec 2024 · CaLM @ NeurIPS 2024 Poster · CC BY 4.0
Keywords: large language models, dataset, causal questions, chatbots, AI agents, natural language processing, causality
TL;DR: We propose a dataset of naturally occurring questions, both causal and non-causal, useful for studying human questioning behaviour and curiosity.
Abstract: The recent development of Large Language Models (LLMs) has changed our role in interacting with them. Instead of primarily testing these models with questions whose answers we already know, we now use them to explore questions whose answers are unknown to us. This shift, which has not been fully addressed in existing datasets, highlights the growing need to understand naturally occurring human questions—those that are more complex, open-ended, and reflective of real-world needs. To this end, we present NatQuest, a collection of 13,500 naturally occurring questions from three diverse sources: human-to-search-engine queries, human-to-human interactions, and human-to-LLM conversations. This comprehensive collection enables a rich understanding of human curiosity across various domains and contexts. Our analysis reveals a significant presence of causal questions (up to 42%) within the dataset, for which we develop an iterative prompt-improvement framework to identify causal queries and examine their unique linguistic properties, cognitive complexity, and source distribution. We also lay the groundwork for exploring LLMs as routers for these questions and provide six efficient classification models to identify causal questions at scale in future work.
Submission Number: 2