PreWoMe: Exploiting Presuppositions as Working Memory for Long Form Question Answering

Published: 07 Oct 2023, Last Modified: 01 Dec 2023, EMNLP 2023 Main
Submission Type: Regular Short Paper
Submission Track: Question Answering
Submission Track 2: Natural Language Generation
Keywords: Long-Form QA, Large Language Models, Presuppositions
TL;DR: This paper proposes a unified QA system that handles long-form information-seeking questions by extracting presuppositions and exploiting them as working memory.
Abstract: Information-seeking questions in long-form question answering (LFQA) often prove misleading due to ambiguity or false presuppositions in the question. While many existing approaches handle misleading questions, they are tailored to limited types of questions, which is insufficient in real-world settings with unpredictable input characteristics. In this work, we propose PreWoMe, a unified approach capable of handling any type of information-seeking question. The key idea of PreWoMe is to extract the presuppositions in the question and exploit them as working memory to generate feedback and actions about the question. Our experiments show that PreWoMe is effective not only in tackling misleading questions but also in handling normal ones, thereby demonstrating the effectiveness of leveraging presuppositions, feedback, and actions for real-world QA settings.
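
The abstract describes a three-step pipeline: extract presuppositions, use them as working memory to produce feedback, then act on that feedback to answer. The sketch below illustrates one possible reading of that pipeline; the `llm` helper, the prompts, and the exact decomposition are assumptions for illustration, not the paper's actual prompts or implementation.

```python
def llm(prompt: str) -> str:
    """Hypothetical wrapper around any instruction-following LLM API."""
    raise NotImplementedError("plug in your LLM call here")


def prewome_answer(question: str) -> str:
    # Step 1: extract the presuppositions embedded in the question.
    presuppositions = llm(
        f"List the presuppositions made by this question:\n{question}"
    )
    # Step 2: treat the presuppositions as working memory and generate
    # feedback, e.g. whether any presupposition is false or ambiguous.
    feedback = llm(
        "Given the question and its presuppositions, point out any false "
        f"or ambiguous presuppositions.\nQuestion: {question}\n"
        f"Presuppositions: {presuppositions}"
    )
    # Step 3: take an action conditioned on the feedback and produce the
    # long-form answer, correcting or disambiguating where needed.
    answer = llm(
        "Answer the question. If the feedback flags a false or ambiguous "
        "presupposition, address it explicitly in the answer.\n"
        f"Question: {question}\nPresuppositions: {presuppositions}\n"
        f"Feedback: {feedback}"
    )
    return answer
```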
Submission Number: 3633