Towards Robust Neural Machine Reading Comprehension via Question Paraphrases

IALP 2019
Abstract: In this paper, we focus on addressing the over-sensitivity issue of neural machine reading comprehension (MRC) models. By over-sensitivity, we mean that neural MRC models give different answers to question paraphrases that are semantically equivalent. To address this issue, we first create a large-scale Chinese MRC dataset with high-quality question paraphrases generated by a toolkit used in Baidu Search. We then quantitatively analyze the over-sensitivity of neural MRC models on this dataset. Intuitively, if two questions are paraphrases of each other, a robust model should give the same prediction for both. Based on this intuition, we propose a regularized BERT-based model that encourages the model to give the same predictions for similar inputs by leveraging high-quality question paraphrases. Experimental results show that our approach significantly improves the robustness of a strong BERT-based MRC model and also improves its held-out accuracy. Specifically, the different prediction ratio (DPR) for question paraphrases decreases by more than 10% under the proposed model.
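The paper itself does not include code, but the abstract describes a consistency-regularization objective over paraphrase pairs and a "different prediction ratio" (DPR) metric. Below is a minimal sketch of how such a setup might look, assuming a PyTorch BERT span-extraction model that produces start/end logits; all function and argument names here are hypothetical illustrations, not the authors' actual implementation, and the DPR definition follows the plain reading of its name (the fraction of paraphrase pairs receiving different predicted answers).

```python
import torch
import torch.nn.functional as F


def paraphrase_consistency_loss(start_logits_q, end_logits_q,
                                start_logits_p, end_logits_p):
    """Symmetric KL divergence between the answer-span distributions
    predicted for a question and for its paraphrase.

    Hypothetical signature: each argument is a (batch, seq_len) tensor
    of logits from a BERT-style span-extraction head.
    """
    def sym_kl(logits_a, logits_b):
        log_a = F.log_softmax(logits_a, dim=-1)
        log_b = F.log_softmax(logits_b, dim=-1)
        # F.kl_div(input, target) expects input as log-probabilities
        # and target as probabilities, computing KL(target || input).
        return (F.kl_div(log_b, log_a.exp(), reduction="batchmean")
                + F.kl_div(log_a, log_b.exp(), reduction="batchmean"))

    return (sym_kl(start_logits_q, start_logits_p)
            + sym_kl(end_logits_q, end_logits_p))


def different_prediction_ratio(preds_q, preds_p):
    """DPR: fraction of paraphrase pairs whose predicted answers differ.

    preds_q / preds_p are parallel lists of predicted answer strings for
    the original questions and their paraphrases, respectively.
    """
    assert len(preds_q) == len(preds_p)
    diff = sum(a != b for a, b in zip(preds_q, preds_p))
    return diff / len(preds_q)


# Sketch of the combined training objective: the usual span-extraction
# cross-entropy on the original question plus the consistency term,
# weighted by a hyperparameter `lam` (an assumed name):
#
#   loss = ce_loss + lam * paraphrase_consistency_loss(
#       start_logits_q, end_logits_q, start_logits_p, end_logits_p)
```

A consistency term of this kind directly penalizes the over-sensitivity the abstract measures: driving the two span distributions together makes identical predictions on paraphrase pairs more likely, which is what a lower DPR reflects.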