Development of Robust Automated Scoring Models Using Adversarial Input for Oral Proficiency Assessment

INTERSPEECH 2019 (modified: 16 Dec 2021)
Abstract: In this study, we developed an automated scoring model for an oral proficiency test that elicits spontaneous speech from non-native speakers of English. In a large-scale oral proficiency test, a small number of responses may have atypical characteristics that make it difficult even for state-of-the-art automated scoring models to assign fair scores. The oral proficiency test in this study consisted of questions about the content of materials provided to the test takers, and the atypical responses frequently showed serious content abnormalities. To develop an automated scoring system that is robust to these atypical responses, we first developed a set of content features to capture content abnormalities. Next, we trained scoring models on an augmented training dataset that included synthetic atypical responses. Compared to the baseline scoring model, the new model performed comparably when scoring normal responses, while assigning fairer scores to authentic atypical responses drawn from operational test administrations.
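The abstract describes two ingredients: content features that measure how well a response matches the provided test material, and augmentation of the training set with synthetic atypical responses. The sketch below illustrates both under stated assumptions; the toy data, the 1-4 score scale, the single TF-IDF cosine-similarity feature, the mismatched-prompt augmentation recipe, and the helper names `vectorizer` and `content_feature` are all illustrative inventions, not the paper's actual feature set, models, or augmentation method.

```python
# Minimal sketch (assumed details, not the paper's exact method):
# one content feature plus augmentation with synthetic off-topic responses.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy examples of (source material, response transcript, human score 1-4).
data = [
    ("the lecture explains how plants produce energy via photosynthesis",
     "plants use sunlight to make food through photosynthesis", 4.0),
    ("the reading compares two economic models of supply and demand",
     "the two supply and demand models differ in their assumptions", 3.0),
    ("the passage describes the construction of roman aqueducts",
     "roman engineers built aqueducts to carry water into cities", 4.0),
]
prompts, responses, scores = map(list, zip(*data))

vectorizer = TfidfVectorizer().fit(prompts + responses)

def content_feature(prompt, response):
    """Cosine similarity between the response and the test material,
    standing in for the paper's richer set of content features."""
    vecs = vectorizer.transform([prompt, response])
    return cosine_similarity(vecs[0], vecs[1])[0, 0]

# Synthetic atypical responses: pair each transcript with a mismatched
# prompt and assign the floor score, mimicking off-topic speech.
for i in range(3):
    prompts.append(data[(i + 1) % 3][0])   # wrong source material
    responses.append(data[i][1])           # original transcript
    scores.append(1.0)                     # assumed lowest score on the scale

X = np.array([[content_feature(p, r)] for p, r in zip(prompts, responses)])
y = np.array(scores)

# The scoring model trained on the augmented set learns to penalize
# low content overlap rather than extrapolating on atypical input.
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
print(np.round(model.predict(X), 2))
```

With a single similarity feature the toy model is trivial, but it shows the design choice the abstract points to: exposing the scorer to low-scored off-topic examples during training so that fluent but content-abnormal speech does not receive an inflated score.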