I Could’ve Asked That: Reformulating Unanswerable Questions

ACL ARR 2024 April Submission 503 Authors

16 Apr 2024 (modified: 09 May 2024) · CC BY 4.0
Abstract: When seeking information from unfamiliar documents, users frequently pose questions that the documents cannot answer. While existing large language models (LLMs) can identify these unanswerable questions, they do not assist users in reformulating them, which limits the models' overall utility. We curate CouldAsk, a benchmark composed of existing and new datasets for document-grounded question answering, designed specifically to study the reformulation of unanswerable questions. We evaluate state-of-the-art open-source and proprietary LLMs on CouldAsk. The results demonstrate the limited capabilities of these models in reformulating questions: GPT-4 and Llama2-7B successfully reformulate questions only 26% and 12% of the time, respectively. Error analysis shows that 62% of the unsuccessful reformulations stem from the models merely rephrasing the questions or even generating identical questions.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: question answering, evaluation
Contribution Types: NLP engineering experiment, Data resources, Data analysis
Languages Studied: English
Section 2 Permission To Publish Peer Reviewers Content Agreement: Authors decline to grant permission for ACL to publish peer reviewers' content
Submission Number: 503