Machine Reading Comprehension: Generative or Extractive Reader?

Anonymous

17 Sept 2021 (modified: 05 May 2023) · ACL ARR 2021 September Blind Submission
Abstract: While both extractive and generative readers have been successfully applied to the Question Answering (QA) task, little attention has been paid to comparing the two. Which reader performs better? What are the reasons for the performance differences? In this paper, we aim to answer these questions in the setting of extractive QA tasks. We design multiple transformer-based models and different scenarios to systematically compare the two readers. Our findings characterize the differences between the two readers and their respective pros and cons, which can guide the choice between them and open up new research avenues for improving each one. Our major findings are: 1) generative readers perform better when the input context is long, whereas extractive readers are better when the context is short; 2) extractive readers generalize better than generative ones under out-of-domain settings, in both single- and multi-task learning scenarios. Our experiments also suggest that, although an encoder-only pre-trained language model (PrLM) is an intuitive choice for an extractive reader, the encoder of an encoder-decoder PrLM is a strong alternative that performs competitively.
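To make the contrast concrete, below is a minimal sketch of how each reader type produces an answer: the extractive reader scores start/end positions of a span in the context, while the generative reader decodes the answer token by token. It uses the Hugging Face transformers library; the checkpoints and the T5 prompt format are illustrative assumptions, not the paper's exact models or setup.

```python
# Minimal sketch contrasting extractive vs. generative readers.
# Checkpoints below are illustrative stand-ins, not the paper's models.
import torch
from transformers import (
    AutoTokenizer,
    AutoModelForQuestionAnswering,  # extractive reader (span prediction)
    AutoModelForSeq2SeqLM,          # generative reader (answer generation)
)

question = "Where is the Eiffel Tower?"
context = "The Eiffel Tower is a landmark in Paris, France."

# --- Extractive reader: predict start/end positions of the answer span ---
ext_name = "deepset/bert-base-cased-squad2"
ext_tok = AutoTokenizer.from_pretrained(ext_name)
ext_model = AutoModelForQuestionAnswering.from_pretrained(ext_name)
inputs = ext_tok(question, context, return_tensors="pt")
with torch.no_grad():
    out = ext_model(**inputs)
start = out.start_logits.argmax()           # most likely span start token
end = out.end_logits.argmax()               # most likely span end token
span = ext_tok.decode(inputs["input_ids"][0][start : end + 1])
print("extractive answer:", span)

# --- Generative reader: decode the answer as free-form text ---
gen_tok = AutoTokenizer.from_pretrained("t5-small")
gen_model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
prompt = f"question: {question} context: {context}"  # T5-style QA prompt
ids = gen_tok(prompt, return_tensors="pt").input_ids
gen_ids = gen_model.generate(ids, max_new_tokens=16)
print("generative answer:", gen_tok.decode(gen_ids[0], skip_special_tokens=True))
```

Note the architectural difference the abstract highlights: the extractive reader is built on an encoder-only PrLM and can only copy a span from the input, whereas the generative reader uses an encoder-decoder PrLM and is free to produce tokens not present verbatim in the context.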