Abstract: Cloze tests are widely adopted in language exams to evaluate students' language proficiency. In this paper, we propose CLOTH, the first large-scale human-designed cloze test dataset, in which the questions were used in middle-school and high-school language exams. With the missing blanks carefully created by teachers and the candidate choices purposely designed to be confusing, CLOTH requires a deeper language understanding and a wider attention span than previous automatically generated cloze datasets. We show that humans outperform purpose-designed baseline models by a significant margin, even when the models are trained on sufficiently large external data. We investigate the source of the performance gap, trace model deficiencies to some distinct properties of CLOTH, and identify the limited ability to comprehend long-range context as the key bottleneck. In addition, we find that human-designed data leads to a larger gap between model performance and human performance than automatically generated data does.
TL;DR: A cloze test dataset designed by teachers to assess language proficiency
Keywords: dataset, human-designed, language understanding
Data: [Billion Word Benchmark](https://paperswithcode.com/dataset/billion-word-benchmark), [CBT](https://paperswithcode.com/dataset/cbt)
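For illustration, below is a minimal sketch of what a CLOTH-style cloze item and its accuracy-based evaluation might look like. The field names (`passage`, `options`, `answer`), the example item, and the `predict` callable are hypothetical, not the dataset's actual schema.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ClozeItem:
    """One teacher-designed cloze question: a passage with a blank,
    four confusing candidate choices, and the correct answer index."""
    passage: str        # passage text with "_" marking the blank
    options: List[str]  # four candidate words or phrases
    answer: int         # index of the correct option (0-3)

def accuracy(items: List[ClozeItem], predict: Callable[[ClozeItem], int]) -> float:
    """Fraction of items on which the model's chosen option is correct."""
    correct = sum(predict(item) == item.answer for item in items)
    return correct / len(items)

# Hypothetical example in the spirit of CLOTH (not taken from the dataset).
item = ClozeItem(
    passage="She was so tired that she could _ keep her eyes open.",
    options=["hardly", "nearly", "almost", "mostly"],
    answer=0,
)

# A trivial baseline that always picks the first option.
print(accuracy([item], lambda it: 0))  # 1.0 on this single example
```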