What is a good question? Task-oriented asking with fact-level masking

21 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Question Generation, HCI, Information Retrieval, NLP, Language Model, Dataset
TL;DR: We found LLMs are bad at asking useful questions, so we created a way to train and evaluate question generation models without annotating data.
Abstract: Asking questions is an important element of real-life collaboration on reasoning tasks like question answering. For example, a legal assistant chatbot may be unable to make accurate recommendations without specific information on the user's circumstances. However, large language models are usually deployed to solve reasoning tasks directly, without asking follow-up questions of the user or third parties. We term this problem task-oriented asking (TOA). Zero-shot chat models can perform TOA, but their training is primarily based on next-token prediction rather than on whether questions contribute to successful collaboration. To enable the training and evaluation of TOA models, we present a definition and framework for natural language task-oriented asking, the problem of generating questions that result in answers useful for a reasoning task. We also present fact-level masking (FLM), a procedure for converting natural language datasets into self-supervised TOA datasets by omitting particular critical facts. Finally, we generate a TOA dataset from the HotpotQA dataset using FLM and evaluate several zero-shot language models on it. Our experiments show that current zero-shot models struggle to ask questions that retrieve useful information compared to human annotators. These results demonstrate an opportunity to use FLM datasets and the TOA framework to train and evaluate better TOA models.
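The abstract does not spell out FLM's exact procedure, so the following is only a minimal Python sketch of one plausible instantiation on HotpotQA-style data. The field names (`context`, `supporting_facts`) follow the public HotpotQA schema; the choice to mask a uniformly random supporting fact, and the helper name `fact_level_mask`, are assumptions for illustration, not the paper's method.

```python
# Minimal sketch of fact-level masking (FLM): remove one supporting fact
# from a HotpotQA-style example so that the held-out fact becomes the
# information a task-oriented asking (TOA) model must ask for.
import random


def fact_level_mask(example, rng=random.Random(0)):
    """Return a masked copy of a HotpotQA-style example plus the held-out fact.

    example: dict with keys
      "context": list of [title, [sentence, ...]] paragraphs
      "supporting_facts": list of [title, sentence_idx] pairs
    """
    # Pick one supporting fact to hide (assumption: uniformly at random).
    title, sent_idx = rng.choice(example["supporting_facts"])

    masked_context = []
    masked_fact = None
    for para_title, sentences in example["context"]:
        if para_title == title:
            masked_fact = sentences[sent_idx]
            # Drop the critical sentence from this paragraph.
            sentences = sentences[:sent_idx] + sentences[sent_idx + 1:]
        masked_context.append([para_title, sentences])

    masked_example = dict(example, context=masked_context)
    # A TOA model sees masked_example and should ask a question whose
    # answer recovers masked_fact, enabling self-supervised evaluation.
    return masked_example, masked_fact
```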
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3729