QAConv: Question Answering on Informative Conversations

28 May 2021 (modified: 22 Oct 2023) · Submitted to NeurIPS 2021 Datasets and Benchmarks Track (Round 1)
Keywords: question answering, conversational AI, dataset
TL;DR: A new dataset of QA on informative conversations such as business emails, panel discussions, and work channels.
Abstract: This paper introduces QAConv, a new question answering (QA) dataset that uses conversations as a knowledge source. We focus on informative conversations, including business emails, panel discussions, and work channels. Unlike open-domain and task-oriented dialogues, these conversations are usually long, complex, asynchronous, and involve strong domain knowledge. In total, we collect 34,204 QA pairs, including multi-span and unanswerable questions, from 10,259 selected conversations with both human-written and machine-generated questions. We segment long conversations into chunks and use a question generator and a dialogue summarizer as auxiliary tools to collect multi-hop questions. The dataset has two testing scenarios, chunk mode and full mode, depending on whether the grounded chunk is provided or must be retrieved from a large pool of conversations. Experimental results show that state-of-the-art pretrained QA systems have limited zero-shot ability and tend to predict our questions as unanswerable. Finetuning such systems on our corpus improves performance significantly, by up to 23.6% and 13.6% in chunk mode and full mode, respectively.
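The two evaluation scenarios in the abstract can be illustrated with a minimal sketch: in chunk mode the grounding chunk is given, while in full mode it must first be retrieved from a pool of conversation chunks. The function names, the greedy token-count segmentation, and the word-overlap retriever below are illustrative assumptions, not the authors' actual pipeline (the released code is at the GitHub URL above).

```python
def segment_conversation(utterances, max_tokens=100):
    """Greedily pack consecutive utterances into chunks of at most
    max_tokens whitespace-delimited tokens. A single utterance longer
    than max_tokens becomes its own chunk.
    (Illustrative segmentation, not the paper's exact method.)"""
    chunks, current, count = [], [], 0
    for utt in utterances:
        n = len(utt.split())
        if current and count + n > max_tokens:
            chunks.append(current)
            current, count = [], 0
        current.append(utt)
        count += n
    if current:
        chunks.append(current)
    return chunks


def retrieve_chunk(question, chunks):
    """Toy lexical retriever standing in for 'full mode': score each
    chunk by word overlap with the question and return the best one."""
    q_words = set(question.lower().split())
    def score(chunk):
        return len(q_words & set(" ".join(chunk).lower().split()))
    return max(chunks, key=score)


if __name__ == "__main__":
    conv = [
        "Alice: the deadline is Friday",
        "Bob: noted, I will send the report",
        "Alice: thanks",
    ]
    chunks = segment_conversation(conv, max_tokens=6)
    # Chunk mode: a QA model would receive one of these chunks directly.
    # Full mode: the chunk must first be retrieved from the pool.
    best = retrieve_chunk("When is the deadline?", conv and chunks)
    print(best)
```

In full mode the retrieval step is the extra source of error, which is consistent with the smaller finetuning gain reported for that setting (13.6% vs. 23.6%).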
Supplementary Material: zip
URL: https://github.com/salesforce/QAConv
Community Implementations: 1 code implementation (https://www.catalyzex.com/paper/arxiv:2105.06912/code)