Impact of Distractors in Multiple Choice Reading Comprehension

Anonymous

03 Sept 2022 (modified: 05 May 2023) · ACL ARR 2022 September Blind Submission
Abstract: Multiple choice reading comprehension (MCRC) exams are a standard way of assessing candidates for a wide range of language examinations. Despite the ubiquity of these exams, it is difficult to make good questions where: 1) all distractor options remain feasible when only given the question; and 2) there is only one clear solution when both the context passage and question are given. This work examines some standard MCRC datasets with these attributes in mind. We show that even when no contextual passage information is available, deep learning systems can achieve surprisingly high accuracy. This performance is shown to be a result of the choice of distractors, as for some questions the system can use "general knowledge" (from pre-trained models) to answer the question. Furthermore, this work proposes simple metrics that can flag questions where answers can be eliminated without context, as well as highlighting questions where significant information is extracted from the context. These metrics could aid test designers to develop and refine MCRC exam questions that better assess reading comprehension abilities.
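The abstract describes two diagnostics: flagging questions whose answer options can be eliminated without the passage, and measuring how much information the context actually contributes. A minimal sketch of how such metrics might look, assuming hypothetical per-option answer probabilities from a model run with and without the passage (the paper's actual metric definitions may differ):

```python
import math

def eliminable_options(no_context_probs, threshold=0.05):
    """Indices of options that a no-context model already rules out
    (probability below `threshold`). Hypothetical metric: if many
    distractors fall below the threshold, the question may be
    answerable without reading the passage."""
    return [i for i, p in enumerate(no_context_probs) if p < threshold]

def context_information_gain(with_context_probs, no_context_probs):
    """KL divergence D(with-context || no-context): how much the
    passage shifts the model's answer distribution. A value near zero
    suggests the question is solvable from 'general knowledge' alone."""
    return sum(p * math.log(p / q)
               for p, q in zip(with_context_probs, no_context_probs)
               if p > 0)
```

Here a question whose no-context distribution is already peaked (low information gain) would be flagged as poorly testing reading comprehension, while a large gain indicates the passage genuinely informs the answer.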
Paper Type: short