How to Evaluate the Next System: Automatic Dialogue Evaluation from the Perspective of Continual Learning

21 Jul 2023 · OpenReview Archive Direct Upload
Abstract: Automatic dialogue evaluation plays a crucial role in open-domain dialogue research. Previous works train neural networks with limited annotation to conduct automatic dialogue evaluation, which naturally affects evaluation fairness: dialogue systems close to the scope of the training corpus are preferred over the others. In this paper, we study alleviating this problem from the perspective of continual learning: given an existing neural dialogue evaluator and the next system to be evaluated, we fine-tune the learned neural evaluator by selectively forgetting/updating its parameters, to jointly fit the dialogue systems that have been and will be evaluated. Our motivation is to seek a lifelong and low-cost automatic evaluation for dialogue systems, rather than to reconstruct the evaluator over and over again. Experimental results show that our continual evaluator achieves performance comparable to reconstructing new evaluators, while requiring significantly fewer resources.
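The abstract's core idea, selectively updating an evaluator's parameters so it fits both past and future dialogue systems, can be illustrated with a generic continual-learning recipe. The sketch below uses an EWC-style quadratic penalty (Fisher-weighted anchoring of important parameters) as a stand-in; the paper's exact forgetting/updating rule is not specified in the abstract, and all names and the penalty form here are illustrative assumptions.

```python
# Illustrative sketch (NOT the paper's exact method): continual fine-tuning
# of a neural evaluator with an EWC-style penalty. Parameters that were
# important for previously evaluated systems (high diagonal Fisher) are
# anchored; the rest are free to adapt to the next system's data.
import torch
import torch.nn as nn

def fisher_diagonal(model, data, loss_fn):
    """Approximate per-parameter importance on past data (diagonal Fisher)."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in data:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(data), 1) for n, f in fisher.items()}

def continual_finetune(model, new_data, fisher, anchors, loss_fn,
                       lam=100.0, lr=1e-2, epochs=5):
    """Fine-tune on the next system's data while penalizing drift of
    parameters that mattered for previously evaluated systems."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in new_data:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            for n, p in model.named_parameters():
                loss = loss + (lam / 2) * (fisher[n] * (p - anchors[n]) ** 2).sum()
            loss.backward()
            opt.step()
    return model

# Toy usage: a linear stand-in "evaluator" scored on past systems' data,
# then adapted to a new system without rebuilding from scratch.
torch.manual_seed(0)
model = nn.Linear(4, 1)
loss_fn = nn.MSELoss()
past_data = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(3)]
fisher = fisher_diagonal(model, past_data, loss_fn)
anchors = {n: p.detach().clone() for n, p in model.named_parameters()}
new_data = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(3)]
continual_finetune(model, new_data, fisher, anchors, loss_fn)
```

The appeal of this family of methods matches the abstract's motivation: one cheap fine-tuning pass per new system, instead of retraining the evaluator from scratch each time.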