Automatic Feedback Generation for Dialog-Based Language Tutors Using Transformer Models and Active Learning
Abstract: We aim to provide non-native English learners with natural language feedback on the pragmatic appropriateness of their dialogic speech via a human-in-the-loop feedback generation model. We fine-tune a large, pre-trained transformer model on a small hand-crafted dataset of feedback paraphrases formulated from a scoring rubric. We then utilize an active learning pipeline with expert annotators to correct the model’s feedback. We find that the human-rated quality and unigram diversity of generated feedback increase over time, indicating that the model improves and produces more diverse responses with each successive active learning iteration. Our results indicate the potential for active learning to improve targeted feedback generation at scale for language learners.
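The abstract reports "unigram diversity" of generated feedback as one of its measures. The exact formulation is not given here; a common choice for this kind of measure is distinct-1, the ratio of unique unigrams to total unigrams across a set of generated responses. The sketch below assumes that formulation and simple whitespace tokenization.

```python
def distinct_1(responses):
    """Ratio of unique unigrams to total unigrams over all responses.

    Assumed formulation of the paper's "unigram diversity" metric
    (distinct-1 style); tokenization here is simple whitespace splitting.
    """
    tokens = [tok for resp in responses for tok in resp.lower().split()]
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

# Hypothetical generated feedback for two learner turns:
feedback = [
    "try a more polite request here",
    "try softening the request with could you",
]
score = distinct_1(feedback)  # higher values indicate more varied wording
```

Under this definition, a model that repeats the same stock feedback scores low, while one whose responses vary in wording scores closer to 1.0, which matches the abstract's use of diversity as a sign of less templated output.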