Human Evaluation of Conversations is an Open Problem: comparing the sensitivity of various methods for evaluating dialogue agents

Anonymous

04 Mar 2022 (modified: 05 May 2023) · NLP for ConvAI
TL;DR: Human dialogue evaluation is still an open problem (just like auto evaluation)! Different methods are preferable in different conditions, with no overall winner.
Abstract: At the heart of improving conversational AI is the open problem of how to evaluate conversations. Issues with automatic metrics are well known (Liu et al., 2016), with human evaluations still considered the gold standard. Unfortunately, how to perform human evaluations is also an open problem: different data collection methods yield varying levels of human agreement and statistical sensitivity, resulting in differing human annotation hours and labor costs. In this work we compare five crowdworker-based human evaluation methods and find that the best method depends on the types of models being compared, with no clear winner across the board. While this highlights the open problems in the area, our analysis leads to advice on when to use each method, and to possible future directions.
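
The abstract's notion of statistical sensitivity can be made concrete with a small simulation. The sketch below is not the paper's protocol; it assumes a simple pairwise (A-vs-B) setup with synthetic binary annotator preferences, and it estimates how often a bootstrap confidence interval on the win rate excludes a tie for a given annotation budget. The function names and parameters (`simulate_pairwise_trial`, `estimated_sensitivity`, `true_win_rate`) are illustrative assumptions, not artifacts of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pairwise_trial(true_win_rate, n_annotations, n_bootstrap=2000):
    """Simulate one pairwise (A-vs-B) evaluation.

    Each annotation is a binary preference for model A, drawn with
    probability `true_win_rate`. Returns True if the study "detects"
    a difference, i.e. the bootstrap 95% CI for the win rate excludes 0.5.
    """
    prefs = rng.random(n_annotations) < true_win_rate
    # Bootstrap a confidence interval for the observed win rate.
    resamples = rng.choice(prefs, size=(n_bootstrap, n_annotations), replace=True)
    lo, hi = np.percentile(resamples.mean(axis=1), [2.5, 97.5])
    return not (lo <= 0.5 <= hi)

def estimated_sensitivity(true_win_rate, n_annotations, n_trials=500):
    """Fraction of simulated studies that detect the difference (statistical power)."""
    hits = sum(simulate_pairwise_trial(true_win_rate, n_annotations)
               for _ in range(n_trials))
    return hits / n_trials

if __name__ == "__main__":
    # A hypothetical 58% true win rate: how many annotations does a
    # pairwise method need before it reliably separates the two models?
    for n in (50, 100, 200, 400):
        power = estimated_sensitivity(true_win_rate=0.58, n_annotations=n)
        print(f"n={n:4d} annotations -> estimated power {power:.2f}")
```

Under this kind of analysis, a method with higher annotator agreement needs fewer annotations (and thus fewer annotation hours and lower labor cost) to reach the same power, which is the trade-off the paper compares across its five crowdworker-based methods.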