Labels are not necessary: Assessing peer-review helpfulness using domain adaptation based on self-training

16 Oct 2023 · OpenReview Archive Direct Upload
Abstract: A peer-assessment system allows students to provide feedback on each other's work. An effective peer-assessment system requires helpful reviews that enable students to improve and make progress. Automated evaluation of review helpfulness, using deep learning models and natural language processing techniques, has gained much interest in the field of peer assessment. However, collecting data labeled with the "helpfulness" tag to build these prediction models remains challenging. A straightforward solution would be to use a supervised learning algorithm to train a prediction model on a similar domain and apply it to our peer-review domain for inference. But doing so naïvely can degrade model performance in the presence of a distributional gap between domains. Such a gap can be effectively addressed by Domain Adaptation (DA), and self-training has recently been shown to be a powerful branch of DA for closing it. The first goal of this study is to evaluate the performance of self-training-based DA in predicting the helpfulness of peer reviews, as well as its ability to overcome the distributional gap. Our second goal is to propose an advanced self-training framework that overcomes the weaknesses of existing self-training by incorporating knowledge distillation and noise injection, further improving model performance and better addressing the distributional gap.
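To make the self-training idea concrete, below is a minimal, illustrative sketch of the generic teacher-student loop the abstract alludes to: a teacher trained on a labeled source domain pseudo-labels unlabeled target-domain peer reviews, and a student is retrained on the confident pseudo-labels after noise injection. This is not the paper's framework; the function names (`self_train`, `word_dropout`), the confidence threshold, and the TF-IDF/logistic-regression models are all assumptions chosen for brevity.

```python
# Minimal self-training sketch for domain adaptation (illustrative only).
# Assumes a labeled source domain (e.g., product reviews tagged for
# helpfulness) and an unlabeled target domain (peer reviews).
import random
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def word_dropout(text, p=0.1, rng=random.Random(0)):
    """Noise injection: randomly drop words so the student must be robust."""
    return " ".join(w for w in text.split() if rng.random() > p)

def self_train(source_texts, source_labels, target_texts,
               rounds=3, threshold=0.9):
    vec = TfidfVectorizer(min_df=2)
    X_src = vec.fit_transform(source_texts)
    source_labels = np.asarray(source_labels)
    # Teacher is first trained on the labeled source domain only.
    teacher = LogisticRegression(max_iter=1000).fit(X_src, source_labels)
    for _ in range(rounds):
        # Teacher pseudo-labels the unlabeled target-domain reviews.
        probs = teacher.predict_proba(vec.transform(target_texts))
        keep = probs.max(axis=1) >= threshold   # keep confident labels only
        pseudo = probs.argmax(axis=1)[keep]
        noisy = [word_dropout(t) for t, k in zip(target_texts, keep) if k]
        # Student sees source data plus noised, pseudo-labeled target data,
        # then becomes the next round's teacher.
        X = vec.transform(list(source_texts) + noisy)
        y = np.concatenate([source_labels, pseudo])
        teacher = LogisticRegression(max_iter=1000).fit(X, y)
    return vec, teacher
```

In this sketch, hard pseudo-labels with confidence filtering stand in for knowledge distillation; a distillation-based variant would instead train the student against the teacher's soft probability outputs.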