monoQA: Multi-Task Learning of Reranking and Answer Extraction for Open-Retrieval Conversational Question Answering
Abstract: To address the Open-Retrieval Conversational Question Answering (ORConvQA) task, previous work has
considered an effective three-stage architecture, consisting of a retriever, a reranker, and
a reader that extracts the answers. To answer users' questions effectively, a number
of existing approaches have applied multi-task
learning, such that the same model is shared
between the reranker and the reader. Such approaches also typically tackle reranking and
reading as classification tasks. On the other
hand, recent text generation models, such as
monoT5 and UnifiedQA, have been shown to
yield impressive performance in passage
reranking and reading, respectively. However, no
prior work has combined monoT5 and UnifiedQA to share a single text generation model
that directly extracts the answers for the users
instead of predicting the start/end positions in
a retrieved passage. In this paper, we investigate the use of Multi-Task Learning (MTL) to
improve performance on the ORConvQA task
by sharing the reranker and reader’s learned
structure in a generative model. In particular,
we propose monoQA, which uses a text generation model with multi-task learning for both the
reranker and reader. Our model, which is based
on the T5 text generation model, is fine-tuned
simultaneously for both reranking (to
improve the precision of the top retrieved passages) and answer extraction. Our results on
the OR-QuAC and OR-CoQA datasets demonstrate the effectiveness of our proposed model,
which significantly outperforms existing strong
baselines with improvements ranging from
+12.31% to +19.51% in MAP and from +5.70%
to +23.34% in F1 across all test sets.
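To make the multi-task setup concrete, below is a minimal sketch (not the authors' released code) of how a single T5 model can be fine-tuned jointly for reranking and answer extraction by casting both as text-to-text problems. The reranking template follows the monoT5 convention of generating "true"/"false", and the QA template follows a UnifiedQA-style "question \n context" input; the exact templates, the "t5-base" checkpoint, and the simple in-batch task mixing are illustrative assumptions.

```python
# Sketch: joint fine-tuning of one T5 model for reranking + answer extraction.
# Assumptions: monoT5-style reranking targets and UnifiedQA-style QA inputs;
# monoQA's actual templates and training loop may differ.
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

model = T5ForConditionalGeneration.from_pretrained("t5-base")
tokenizer = T5TokenizerFast.from_pretrained("t5-base")

def rerank_example(query: str, passage: str, relevant: bool):
    # monoT5-style reranking: the model learns to emit "true" or "false".
    src = f"Query: {query} Document: {passage} Relevant:"
    tgt = "true" if relevant else "false"
    return src, tgt

def qa_example(question: str, passage: str, answer: str):
    # UnifiedQA-style reading: the model generates the answer text directly,
    # instead of predicting start/end positions in the passage.
    src = f"{question}\n{passage}"
    return src, answer

# Multi-task learning: mix examples from both tasks in one batch so the
# shared encoder-decoder is optimized for reranking and reading jointly.
batch = [
    rerank_example("who founded apple", "Apple was founded by Steve Jobs ...", True),
    qa_example("who founded apple?", "Apple was founded by Steve Jobs ...", "Steve Jobs"),
]
sources, targets = zip(*batch)
inputs = tokenizer(list(sources), padding=True, truncation=True, return_tensors="pt")
labels = tokenizer(list(targets), padding=True, return_tensors="pt").input_ids
labels[labels == tokenizer.pad_token_id] = -100  # mask padding out of the loss

loss = model(**inputs, labels=labels).loss  # one seq2seq loss over both tasks
loss.backward()
```

At inference time the same shared model can serve both stages: scoring each retrieved passage via the probability of generating "true" for reranking, then generating the answer string from the top-ranked passage.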