Building Open-Retrieval Conversational Question Answering Systems by Generating Synthetic Data and Decontextualizing User Questions
Abstract: We consider open-retrieval conversational question answering (OR-CONVQA), an extension of question answering where system responses need to be (i) aware of dialog history and (ii) grounded in documents (or document fragments) retrieved per question. Domain-specific OR-CONVQA training datasets are crucial for real-world applications, but hard to obtain. We propose a pipeline that capitalizes on the abundance of plain text documents in organizations (e.g., product documentation) to automatically produce realistic OR-CONVQA dialogs with annotations. Similarly to real-world human-annotated OR-CONVQA datasets, we generate in-dialog question-answer pairs, self-contained (decontextualized, e.g., no referring expressions) versions of user questions, and propositions (sentences expressing prominent information from the documents) that the system responses are grounded in. We show how the synthetic dialogs can be used to train efficient question rewriters that decontextualize user questions, allowing existing dialog-unaware retrievers to be utilized. The retrieved information and the decontextualized question are then passed to an LLM that generates the system's response.
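The inference loop the abstract describes (rewrite the question into a self-contained form, retrieve with a dialog-unaware retriever, then generate a grounded response) can be sketched as follows. This is an illustrative toy only: the function names, the rule-based pronoun rewriter, the word-overlap retriever, and the sample data are all assumptions, standing in for the paper's trained rewriter, off-the-shelf retriever, and LLM.

```python
def rewrite_question(question: str, history: list[str]) -> str:
    """Toy stand-in for a trained question rewriter: replaces the
    pronoun 'it' with the last word of the previous dialog turn.
    A real system would use the fine-tuned rewriter trained on the
    synthetic dialogs."""
    if " it " in f" {question} ".replace("?", " ") and history:
        antecedent = history[-1].split()[-1].rstrip("?.")
        return question.replace("it", antecedent)
    return question

def retrieve(query: str, propositions: list[str], k: int = 1) -> list[str]:
    """Dialog-unaware retriever: ranks propositions by simple word
    overlap with the (decontextualized) query. A real system would
    use an existing dense or sparse retriever unchanged."""
    q_terms = set(query.lower().split())
    scored = sorted(propositions,
                    key=lambda p: len(q_terms & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def answer(question: str, history: list[str], propositions: list[str]) -> str:
    rewritten = rewrite_question(question, history)
    evidence = retrieve(rewritten, propositions)
    # A real system would prompt an LLM with `rewritten` and `evidence`;
    # here we simply return the top-ranked proposition as the
    # grounded response.
    return evidence[0]

history = ["How do I install the widget?"]
propositions = [
    "The widget requires firmware version 2.1 or later.",
    "The widget is installed by running the setup script.",
]
print(answer("Does it need a firmware update?", history, propositions))
# → The widget requires firmware version 2.1 or later.
```

Note that the retriever never sees the dialog history: decontextualizing the question first is what lets an unmodified, dialog-unaware retriever resolve "it" correctly.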
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: Dialogue and Interactive Systems, Discourse and Pragmatics, Efficient/Low-Resource Methods for NLP, Generation, Information Retrieval and Text Mining, Question Answering
Contribution Types: NLP engineering experiment, Approaches to low-resource settings, Publicly available software and/or pre-trained models, Data resources
Languages Studied: English
Submission Number: 4489