CQMrobust: A Chinese Dataset of Linguistically Perturbed Natural Questions for Evaluating the Robustness of Question Matching Models

Anonymous

08 Mar 2022 (modified: 05 May 2023) · NAACL 2022 Conference Blind Submission · Readers: Everyone
Paper Link: https://openreview.net/forum?id=976tbEd9fxU
Paper Type: Long paper (up to eight pages of content + unlimited references and appendices)
Abstract: In this paper, we focus on the robustness evaluation of Chinese question matching. Most previous work on analyzing robustness issues focuses on only one or a few types of artificial adversarial examples. Instead, we argue that a comprehensive evaluation should be formulated to assess the linguistic capabilities of models on natural texts. To this end, we create a Chinese dataset, CQMrobust, which contains natural questions with linguistic perturbations for evaluating the robustness of question matching models. CQMrobust covers 3 categories and 13 subcategories with 32 linguistic perturbations. Extensive experiments demonstrate that CQMrobust better distinguishes different models. Importantly, the detailed breakdown of evaluation by linguistic phenomenon in CQMrobust helps us easily diagnose the strengths and weaknesses of different models. Additionally, our experimental results show that the effect of artificial adversarial examples does not transfer to natural texts. The dataset and baseline code will be made publicly available in the open source community.