Machine Reading Comprehension with Enhanced Linguistic Verifiers

28 Sept 2020 (modified: 05 May 2023), ICLR 2021 Conference Blind Submission
Keywords: machine reading comprehension, BERT, linguistic verifiers, hierarchical attention networks
Abstract: We propose two linguistic verifiers for span-extraction-style machine reading comprehension, tackling two challenges: how to evaluate the syntactic completeness of predicted answers, and how to exploit the rich context of long documents. Our first verifier rewrites a question by replacing its interrogatives with the predicted answer phrase, then builds a cross-attention scorer between the rewritten question and the segment, so that answer candidates are scored in a \emph{position-sensitive} context. Our second verifier builds a hierarchical attention network to represent the segments of a passage, where neighbouring segments in long passages are \emph{recurrently connected} and contribute to the current segment-question pair's inference for answerability classification and boundary determination. We combine the two verifiers into a pipeline and apply it to the SQuAD2.0, NewsQA, and TriviaQA benchmarks. Our pipeline achieves significant improvements in both exact match (EM) and F1 over state-of-the-art baselines.
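The abstract describes both verifiers only at a high level, so the sketches below are illustrative reconstructions, not the authors' implementation. The first assumes the rewrite step swaps the leading interrogative for the predicted answer phrase and that the scorer is a multi-head cross-attention layer over pre-computed embeddings; the interrogative list, module names, and dimensions are all assumptions.

```python
# Hypothetical sketch of the first verifier: question rewriting followed by
# cross-attention scoring of the (rewritten question, segment) pair.
import torch
import torch.nn as nn

INTERROGATIVES = {"who", "whom", "whose", "what", "which", "when", "where", "why", "how"}

def rewrite_question(question: str, answer: str) -> str:
    """Replace the first interrogative in the question with the predicted
    answer phrase, turning the question into a declarative hypothesis."""
    tokens = question.rstrip("?").split()
    for i, tok in enumerate(tokens):
        if tok.lower() in INTERROGATIVES:
            tokens[i] = answer
            break
    return " ".join(tokens)

class CrossAttentionScorer(nn.Module):
    """Scores an answer candidate by letting the rewritten question attend
    to the segment and pooling the attended representation."""
    def __init__(self, hidden: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, q_emb: torch.Tensor, seg_emb: torch.Tensor) -> torch.Tensor:
        # q_emb: (B, Lq, H) embeddings of the rewritten question
        # seg_emb: (B, Ls, H) embeddings of the passage segment
        ctx, _ = self.attn(q_emb, seg_emb, seg_emb)     # question attends to segment
        return self.score(ctx.mean(dim=1)).squeeze(-1)  # one score per candidate

# e.g. rewrite_question("Who wrote Hamlet?", "Shakespeare") -> "Shakespeare wrote Hamlet"
```

A syntactically incomplete candidate (e.g. "wrote" instead of "Shakespeare") yields an implausible rewritten sentence, which the scorer can learn to penalize; this is the position-sensitive check the abstract refers to. The second verifier is likewise only outlined; one plausible reading is a bi-directional recurrence over pooled segment representations, so each segment's answerability and boundary heads see its neighbours' context. The GRU, head names, and shapes below are assumptions.

```python
# Hypothetical sketch of the second verifier: recurrently connected segment
# representations feeding answerability and boundary heads.
import torch
import torch.nn as nn

class HierarchicalSegmentVerifier(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        # A bi-directional GRU lets neighbouring segments exchange context.
        self.seg_rnn = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.answerable = nn.Linear(2 * hidden, 2)  # answerability classifier
        self.boundary = nn.Linear(3 * hidden, 2)    # start/end logits per token

    def forward(self, seg_reprs: torch.Tensor, token_reprs: torch.Tensor):
        # seg_reprs:   (B, S, H)    pooled segment-question representations
        # token_reprs: (B, S, T, H) token-level representations per segment
        ctx, _ = self.seg_rnn(seg_reprs)            # (B, S, 2H): neighbour-aware
        ans_logits = self.answerable(ctx)           # (B, S, 2)
        B, S, T, _ = token_reprs.shape
        # Broadcast each segment's context onto its tokens for boundary prediction.
        seg_ctx = ctx.unsqueeze(2).expand(B, S, T, ctx.size(-1))
        span = self.boundary(torch.cat([token_reprs, seg_ctx], dim=-1))
        start_logits, end_logits = span.unbind(dim=-1)  # each (B, S, T)
        return ans_logits, start_logits, end_logits
```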
One-sentence Summary: Two novel linguistic verifiers for answerable questions in machine reading comprehension: one judges the linguistic correctness of answer phrases, and the other enriches long-passage context through hierarchical attention.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Reviewed Version (pdf): https://openreview.net/references/pdf?id=-vszTc057